Instruction: Do TNF inhibitors reduce the risk of myocardial infarction in psoriasis patients? Abstracts: abstract_id: PUBMED:23677316 Do TNF inhibitors reduce the risk of myocardial infarction in psoriasis patients? Objective: To assess whether patients with psoriasis treated with tumor necrosis factor (TNF) inhibitors have a decreased risk of myocardial infarction (MI) compared with those not treated with TNF inhibitors. Design: Retrospective cohort study. Setting: Kaiser Permanente Southern California health plan. Patients: Patients with at least 3 International Classification of Diseases, Ninth Revision, Clinical Modification, codes for psoriasis (696.1) or psoriatic arthritis (696.0) (without antecedent MI) between January 1, 2004, and November 30, 2010. Main Outcome Measure: Incident MI. Results: Of 8845 patients included, 1673 received a TNF inhibitor for at least 2 months (TNF inhibitor cohort), 2097 were TNF inhibitor naive and received other systemic agents or phototherapy (oral/phototherapy cohort), and 5075 were not treated with TNF inhibitors, other systemic therapies, or phototherapy (topical cohort). The median duration of follow-up was 4.3 years (interquartile range, 2.9, 5.5 years), and the median duration of TNF inhibitor therapy was 685 days (interquartile range, 215, 1312 days). After adjusting for MI risk factors, the TNF inhibitor cohort had a significantly lower hazard of MI compared with the topical cohort (adjusted hazard ratio, 0.50; 95% CI, 0.32-0.79). The incidence of MI in the TNF inhibitor, oral/phototherapy, and topical cohorts was 3.05, 3.85, and 6.73 per 1000 patient-years, respectively. Conclusions: Use of TNF inhibitors for psoriasis was associated with a significant reduction in MI risk and incident rate compared with treatment with topical agents. Use of TNF inhibitors for psoriasis was associated with a non-statistically significant lower MI incident rate compared with treatment with oral agents/phototherapy. abstract_id: PUBMED:29499292 The risk of cardiovascular events in psoriasis patients treated with tumor necrosis factor-α inhibitors versus phototherapy: An observational cohort study. Background: Psoriasis is a risk factor for cardiovascular events. Objective: To assess the risk of major cardiovascular events and the effect of cumulative treatment exposure on cardiovascular event risk in patients with psoriasis treated with tumor necrosis factor-α inhibitors (TNFis) versus phototherapy. Methods: Adult patients with psoriasis were selected from a large US administrative claims database (from the first quarter of 2000 through the third quarter of 2014) and classified into 2 mutually exclusive cohorts based on whether they were treated with TNFis or phototherapy. Cardiovascular event risk was compared between cohorts using multivariate Cox proportional hazards models. Cumulative exposure was defined based on treatment persistence. Results: A total of 11,410 TNFi and 12,433 phototherapy patients (psoralen plus ultraviolet A light phototherapy, n = 1117; ultraviolet B light phototherapy, n = 11,316) were included in this study. TNFi patients had a lower risk of cardiovascular events compared to phototherapy patients (adjusted hazard ratio 0.77, P < .05). The risk reduction associated with 6 months of cumulative exposure was 11.2% larger for patients treated with TNFis compared to phototherapy (P < .05). Limitations: Information on psoriasis severity and mortality was limited/not available.
Conclusions: Patients with psoriasis who were treated with TNFis exhibited a lower cardiovascular event risk than patients treated with phototherapy. Cumulative exposure to TNFis was associated with an incremental cardiovascular risk reduction compared to phototherapy. abstract_id: PUBMED:27300248 The Effect of TNF Inhibitors on Cardiovascular Events in Psoriasis and Psoriatic Arthritis: an Updated Meta-Analysis. TNF inhibitors have been used in psoriasis (Pso) and psoriatic arthritis (PsA), conditions associated with an increased risk of cardiac and cerebrovascular events. However, whether TNF inhibitors reduce cardiovascular events is still unclear. Therefore, we aimed to evaluate the effect of TNF inhibitors on adverse cardiovascular events (CVEs) in Pso with or without PsA. We undertook a meta-analysis of clinical trials identified in systematic searches of MEDLINE, EMBASE, Wanfang database, Cochrane Database, and Google Scholar through December 31, 2015. Five studies (49,795 patients) were included. Overall, compared with topical/photo treatment, TNF inhibitors were associated with a significantly lower risk of CVE (RR, 0.58; 95% CI, 0.43 to 0.77; P < 0.001; I² = 66.2%). Additionally, compared with methotrexate (MTX) treatment, risk of CVE was also markedly decreased in the TNF inhibitor group (RR, 0.67; 95% CI, 0.52 to 0.88; P = 0.003; I² = 9.3%). Meanwhile, TNF inhibitors were linked to reduced incidence of myocardial infarction compared with topical/photo or MTX treatment (RR, 0.73; 95% CI, 0.59 to 0.90; P = 0.003; I² = 56.2% and RR, 0.65; 95% CI, 0.48 to 0.89; P = 0.007; I² = 0.0%, respectively). Furthermore, there was a trend of decreased mortality in the TNF inhibitor group compared with other therapy (RR, 0.90; 95% CI, 0.54 to 1.50; P = 0.68; I² = 70.9%). TNF inhibitors appear to have net clinical benefits with regard to adverse cardiovascular events in Pso and/or PsA. Rigorous randomized controlled trials will be needed to evaluate whether TNF inhibitors truly result in a reduction of cardiovascular disease. abstract_id: PUBMED:27894789 Cardiovascular event risk assessment in psoriasis patients treated with tumor necrosis factor-α inhibitors versus methotrexate. Background: Psoriasis is associated with increased risk for cardiovascular disease. Objective: To compare major cardiovascular event risk in psoriasis patients receiving methotrexate or a tumor necrosis factor-α inhibitor (TNFi) and to assess the impact of TNFi treatment duration on major cardiovascular event risk. Methods: Adult psoriasis patients with ≥2 TNFi or methotrexate prescriptions in the Truven MarketScan Databases (Q1 2000-Q3 2011) were classified as TNFi or methotrexate users. The index date for each of these drugs was the TNFi initiation date or a randomly selected methotrexate dispensing date, respectively. Cardiovascular event risks and the cumulative TNFi effect were analyzed by using multivariate Cox proportional-hazards models. Results: By 12 months, TNFi users (N = 9148) had fewer cardiovascular events than methotrexate users (N = 8581) (Kaplan-Meier rates: 1.45% vs 4.09%; P < .01). TNFi users had overall lower cardiovascular event hazards than methotrexate users (hazard ratio = 0.55; P < .01). Over 24 months' median follow-up, every 6 months of cumulative exposure to TNFis was associated with an 11% cardiovascular event risk reduction (P = .02). Limitations: Lack of clinical assessment measures.
Conclusions: Psoriasis patients receiving TNFis had a lower major cardiovascular event risk compared to those receiving methotrexate. Cumulative exposure to TNFis was associated with a reduced risk for major cardiovascular events. abstract_id: PUBMED:30916734 Association of Ustekinumab vs TNF Inhibitor Therapy With Risk of Atrial Fibrillation and Cardiovascular Events in Patients With Psoriasis or Psoriatic Arthritis. Importance: Accumulating evidence indicates that there is an increased risk of cardiovascular disease among patients with psoriatic disease. Although an emerging concern that the risk of atrial fibrillation (AF) may also be higher in this patient population adds to the growing support of initiating early interventions to control systemic inflammation, evidence on the comparative cardiovascular safety of current biologic treatments remains limited. Objective: To evaluate the risk of AF and major adverse cardiovascular events (MACE) associated with use of ustekinumab vs tumor necrosis factor inhibitors (TNFi) in patients with psoriasis or psoriatic arthritis. Design, Setting, And Participants: This cohort study included data from a nationwide sample of 78 162 commercially insured patients in 2 US commercial insurance databases (Optum and MarketScan) from September 25, 2009, through September 30, 2015. Patients were included if they were 18 years or older, had psoriasis or psoriatic arthritis, and initiated ustekinumab or a TNFi therapy. Exclusion criteria included history of AF or receipt of antiarrhythmic or anticoagulant therapy during the baseline period. Exposures: Initiation of ustekinumab vs TNFi therapy. Main Outcomes And Measures: Incident AF and MACE, including myocardial infarction, stroke, or coronary revascularization. Results: A total of 60 028 patients with psoriasis or psoriatic arthritis (9071 ustekinumab initiators and 50 957 TNFi initiators) were included in the analyses. The mean (SD) age was 46 (13) years in Optum and 47 (13) in MarketScan, and 29 495 (49.1%) were male. Overall crude incidence rates (reported per 1000 person-years) for AF were 5.0 (95% CI, 3.8-6.5) for ustekinumab initiators and 4.7 (95% CI, 4.2-5.2) for TNFi initiators, and for MACE were 6.2 (95% CI, 4.9-7.8) for ustekinumab initiators and 6.1 (95% CI, 5.5-6.7) for TNFi initiators. The combined adjusted hazard ratio for incident AF among ustekinumab initiators was 1.08 (95% CI, 0.76-1.54) and for MACE among ustekinumab initiators was 1.10 (95% CI, 0.80-1.52) compared with TNFi initiators. Conclusions And Relevance: No substantially different risk of incident AF or MACE after initiation of ustekinumab vs TNFi was observed in this study. This information may be helpful when weighing the risks and benefits of various systemic treatment strategies for psoriatic disease. abstract_id: PUBMED:27881030 The effect of tumor necrosis factor inhibitor therapy on the incidence of myocardial infarction in patients with psoriasis: a retrospective study. Background: Psoriasis has been shown to be associated with increased incidence of myocardial infarction (MI). The data on the effect of tumor necrosis factor (TNF) inhibitors on MI in psoriasis are scarce. Objective: To evaluate the effect of TNF inhibitors on the risk of MI in psoriasis patients compared with methotrexate (MTX) and topical agents. Methods: Data were obtained from the Electronic Health Records database of Farwaniya Hospital from psoriasis patients seen from January 2008 to December 2014. 
Patients were categorized into TNF inhibitor, MTX, and topical cohorts. Results: The study included 4762 psoriasis patients. Both the TNF inhibitor and MTX cohorts showed a statistically lower rate of MI compared with the topical cohort. However, there was no statistically significant difference in MI rate between the TNF inhibitor and MTX cohorts (P = .32). The probability of MI was lower in TNF inhibitor responders compared with non-responders (P = .001). Conclusions: The use of TNF inhibitors in psoriasis showed a significant reduction in the risk of MI compared with topical agents and a non-significant reduction compared with MTX. Responders to TNF inhibitor therapy showed a reduction in MI rate compared with non-responders. abstract_id: PUBMED:24281789 Effect of treating psoriasis on cardiovascular co-morbidities: focus on TNF inhibitors. Psoriasis patients are at increased risk for cardiovascular disease. Literature on rheumatoid arthritis has shown an association between treatment with tumor necrosis factor (TNF) inhibitors and improvement of cardiovascular disease. Recent literature has also shown similar findings in psoriasis patients. We present a review of the literature on the effect of TNF inhibitors for psoriasis treatment on cardiovascular disease, cardiovascular biomarkers, and insulin resistance. We conclude that TNF inhibitors may be especially beneficial in preventing myocardial infarction, to a degree greater than methotrexate, especially in the Caucasian population. The effects of TNF inhibitors in altering insulin sensitivity or preventing new-onset diabetes have been contradictory. Case reports of both hyperglycemia and hypoglycemia developing in patients under TNF inhibitor treatment teach us to warn patients about these side effects. More robust clinical studies are needed to evaluate the true effect of TNF inhibitors in diabetic psoriasis patients. More studies are also needed to assess the effect of TNF inhibitors on hypertension, dyslipidemia, and stroke. abstract_id: PUBMED:27458624 Cardiovascular risk in patients with psoriatic arthritis. Psoriatic arthritis (PsA) is a chronic, immune-mediated disease that is observed in 8-30% of psoriatic patients. It has been recently established that PsA and psoriasis are closely associated with a high prevalence of metabolic syndrome, hypertension, abdominal obesity, and a risk for cardiovascular diseases (CVD), including fatal myocardial infarction (MI) and acute cerebrovascular accidents, which shorten lifespan in these patients compared to the general population. The authors state their belief that the synergic effect of traditional risk factors (RFs) for CVD and systemic inflammation underlies the development of atherosclerosis in PsA. It is pointed out that the risk of CVD may be reduced not only provided that the traditional RFs for CVD are monitored, but also if systemic inflammation is validly suppressed. The cardioprotective abilities of methotrexate and tumor necrosis factor-α (TNF-α) inhibitors are considered; the data of investigations showing that the treatment of PsA patients with TNF-α inhibitors results in a reduction in carotid artery intima-media thickness are given. It is noted that there is a need for the early monitoring of traditional RFs for CVD in patients with PsA and for the elaboration of interdisciplinary national guidelines. abstract_id: PUBMED:25116971 Tumor necrosis factor inhibitor therapy and myocardial infarction risk in patients with psoriasis, psoriatic arthritis, or both.
Objectives: To stratify MI risk reduction in those treated with a TNF inhibitor for psoriasis only, psoriatic arthritis only, or both psoriasis and psoriatic arthritis. Design: Retrospective cohort study. Setting: Between January 1, 2004 and November 30, 2010. Participants: At least 3 ICD9 codes for psoriasis (696.1) or psoriatic arthritis (696.0) (without antecedent MI). Intervention: None. Main Outcome Measure: Incident MI. Results: When compared with those not treated with TNF inhibitors (reference group), among those treated with TNF inhibitors: those with psoriasis only (N = 846) had a significant decrease in MI risk (hazard ratio (HR), 0.26; 95% CI, 0.12-0.56); those with psoriatic arthritis only (N = 112) had a non-significant decrease in MI risk (HR, 0.86; 95% CI, 0.28-2.70); those with both psoriasis and psoriatic arthritis (N = 715) had a non-significant decrease in MI risk (HR, 0.76; 95% CI, 0.47-1.24). Conclusions: In the TNF inhibitor cohort, those with psoriasis only have the strongest association with MI risk reduction, followed by those with psoriatic arthritis only, and then by those with both psoriasis and psoriatic arthritis. abstract_id: PUBMED:22911151 Association between tumor necrosis factor inhibitor therapy and myocardial infarction risk in patients with psoriasis. OBJECTIVE To assess whether patients with psoriasis treated with tumor necrosis factor (TNF) inhibitors have a decreased risk of myocardial infarction (MI) compared with those not treated with TNF inhibitors. DESIGN Retrospective cohort study. SETTING Kaiser Permanente Southern California health plan. PATIENTS Patients with at least 3 International Classification of Diseases, Ninth Revision, Clinical Modification, codes for psoriasis (696.1) or psoriatic arthritis (696.0) (without antecedent MI) between January 1, 2004, and November 30, 2010. MAIN OUTCOME MEASURE Incident MI. RESULTS Of 8845 patients included, 1673 received a TNF inhibitor for at least 2 months (TNF inhibitor cohort), 2097 were TNF inhibitor naive and received other systemic agents or phototherapy (oral/phototherapy cohort), and 5075 were not treated with TNF inhibitors, other systemic therapies, or phototherapy (topical cohort). The median duration of follow-up was 4.3 years (interquartile range, 2.9, 5.5 years), and the median duration of TNF inhibitor therapy was 685 days (interquartile range, 215, 1312 days). After adjusting for MI risk factors, the TNF inhibitor cohort had a significantly lower hazard of MI compared with the topical cohort (adjusted hazard ratio, 0.50; 95% CI, 0.32-0.79). The incidence of MI in the TNF inhibitor, oral/phototherapy, and topical cohorts was 3.05, 3.85, and 6.73 per 1000 patient-years, respectively. CONCLUSIONS Use of TNF inhibitors for psoriasis was associated with a significant reduction in MI risk and incident rate compared with treatment with topical agents. Use of TNF inhibitors for psoriasis was associated with a non-statistically significant lower MI incident rate compared with treatment with oral agents/phototherapy. Answer: Yes, TNF inhibitors appear to reduce the risk of myocardial infarction (MI) in psoriasis patients. A retrospective cohort study found that patients with psoriasis treated with TNF inhibitors had a significantly lower hazard of MI compared with those treated with topical agents (adjusted hazard ratio, 0.50; 95% CI, 0.32-0.79) (PUBMED:23677316).
Another observational cohort study showed that psoriasis patients treated with TNF inhibitors exhibited a lower cardiovascular event risk than patients treated with phototherapy, with cumulative exposure to TNF inhibitors associated with an incremental cardiovascular risk reduction compared to phototherapy (PUBMED:29499292). A meta-analysis also supported these findings, indicating that compared with topical/photo treatment, TNF inhibitors were associated with a significantly lower risk of cardiovascular events (RR, 0.58; 95% CI, 0.43 to 0.77; P < 0.001) and myocardial infarction (RR, 0.73; 95% CI, 0.59 to 0.90; P = 0.003) (PUBMED:27300248). Furthermore, a study comparing major cardiovascular event risk in psoriasis patients receiving methotrexate or TNF inhibitors found that TNFi users had overall lower cardiovascular event hazards than methotrexate users (hazard ratio = 0.55; P < .01), with every 6 months of cumulative exposure to TNFis associated with an 11% cardiovascular event risk reduction (P = .02) (PUBMED:27894789). Another retrospective study showed that the use of TNF inhibitors in psoriasis was associated with a significant reduction in the risk of MI compared with topical agents and a non-significant reduction compared with methotrexate (PUBMED:27881030). Additionally, a review of the literature suggested that TNF inhibitors may be especially beneficial in preventing myocardial infarction, potentially to a greater degree than methotrexate, particularly in the Caucasian population (PUBMED:24281789). In summary, the evidence from multiple studies suggests that TNF inhibitors do reduce the risk of myocardial infarction in patients with psoriasis.
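The pooled relative risks quoted above (PUBMED:27300248) come from standard inverse-variance meta-analysis. As a minimal illustration of the arithmetic behind such figures (not a reproduction of that paper's actual analysis), the Python sketch below recovers a log-scale standard error from a reported RR and its 95% CI and pools two estimates with a fixed-effect model; the input numbers are the two MI relative risks quoted in the abstract, reused purely for demonstration.

```python
import math

def log_rr_and_se(rr, ci_low, ci_high):
    """Recover the log-RR and its standard error from a reported point
    estimate and 95% CI (CI width on the log scale = 2 * 1.96 * SE)."""
    return math.log(rr), (math.log(ci_high) - math.log(ci_low)) / (2 * 1.959964)

def fixed_effect_pool(estimates):
    """Inverse-variance fixed-effect pooling of (rr, low, high) tuples."""
    weights, weighted_sum = 0.0, 0.0
    for rr, lo, hi in estimates:
        log_rr, se = log_rr_and_se(rr, lo, hi)
        w = 1.0 / se**2          # inverse-variance weight
        weights += w
        weighted_sum += w * log_rr
    pooled = weighted_sum / weights
    pooled_se = math.sqrt(1.0 / weights)
    ci = (math.exp(pooled - 1.959964 * pooled_se),
          math.exp(pooled + 1.959964 * pooled_se))
    return math.exp(pooled), ci

# MI relative risks quoted in PUBMED:27300248, used here only as demo
# inputs (TNFi vs topical/photo, and TNFi vs MTX).
mi_estimates = [(0.73, 0.59, 0.90), (0.65, 0.48, 0.89)]
pooled_rr, ci = fixed_effect_pool(mi_estimates)
print(f"pooled RR = {pooled_rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```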
Instruction: Are the Affordable Care Act Restrictions Warranted? Abstracts: abstract_id: PUBMED:27017203 Are the Affordable Care Act Restrictions Warranted? A Contemporary Statewide Analysis of Physician-Owned Hospitals. Background: The Affordable Care Act placed a moratorium on physician-owned hospital (POH) expansion. Concern exists that POHs increase costs and target healthier patients. However, limited historical data support these claims and are not weighed against contemporary measures of quality and patient satisfaction. The purpose of this study was to investigate the quality, costs, and efficiency across hospital types. Methods: One hundred forty-five hospitals in a single state were analyzed: 8 POHs; 16 proprietary hospitals (PHs); and 121 general, full-service acute care hospitals (ACHs). Multiyear data from the Centers for Medicare and Medicaid Services Medicare Cost Report and the statewide Health Care Cost Containment Council were analyzed. Results: ACHs had a higher percentage of Medicare patients as a share of net patient revenue, with similar Medicare volume. POHs garnered significantly higher patient satisfaction: mean Hospital Consumer Assessment of Healthcare Providers and Systems summary rating was 4.86 (vs PHs: 2.88, ACHs: 3.10; P = .002). POHs had higher average total episode spending ($22,799 vs PHs: $18,284, ACHs: $18,856), with only $1435 of total spending on post-acute care (vs PHs: $3867, ACHs: $3378). Medicare spending per beneficiary and Medicare spending per beneficiary performance rates were similar across all hospital types, as were complication and readmission rates related to hip or knee surgery. Conclusion: POHs had better patient satisfaction, with higher total costs compared to PHs and ACHs. A focus on efficiency, patient satisfaction, and ratio of inpatient-to-post-acute care spending should be weighted carefully in policy decisions that might impact access to quality health care. abstract_id: PUBMED:26565630 Critical Care Implications of the Affordable Care Act. Objectives: To provide an overview of key elements of the Affordable Care Act. To evaluate ways in which the Affordable Care Act will likely impact the practice of critical care medicine. To describe strategies that may help health systems and providers effectively adapt to changes brought about by the Affordable Care Act. Data Sources And Synthesis: Data sources for this concise review include search results from the PubMed and Embase databases, as well as sources relevant to public policy such as the text of the Patient Protection and Affordable Care Act and reports of the Congressional Budget Office. As all of the Affordable Care Act's provisions will not be fully implemented until 2019, we also drew upon cost, population, and utilization projections, as well as the experience of existing state-based healthcare reforms. Conclusions: The Affordable Care Act represents the furthest reaching regulatory changes in the U.S. healthcare system since the 1965 Medicare and Medicaid provisions of the Social Security Act. The Affordable Care Act aims to expand health insurance coverage to millions of Americans and place an emphasis on quality and cost-effectiveness of care. From models which link pay and performance to those which center on episodic care, the Affordable Care Act outlines sweeping changes to health systems, reimbursement structures, and the delivery of critical care. 
Staffing models that include daily rounding by an intensivist, palliative care integration, and expansion of the role of telemedicine in areas where intensivists are inaccessible are potential strategies that may improve quality and profitability of ICU care in the post-Affordable Care Act era. abstract_id: PUBMED:26742594 The Affordable Care Act and Abortion: Comparing the U.S. and Western Europe. The 2010 Affordable Care Act (ACA) treats abortion differently than any other health service, precluding public funding for abortion and imposing other restrictions on American states. To determine whether the ACA's abortion restrictions are uniquely American or have counterparts in other national health systems, this study employs a cross-sectional design comparing abortion restrictions in the ACA with those in 17 Western European countries. Using a six-item scale, the intensity of abortion restrictions is compared across Western European nations. A similar scale is employed for a five-state sample of state-level abortion restrictions. Although the United States is not alone in having abortion restrictions, how abortion is proscribed in the ACA has no counterpart in Western Europe. Unlike many Western European countries, the ACA's restrictions focus on abortion funding, not the length of gestation or the health of the pregnant woman. abstract_id: PUBMED:24927108 Diabetes and the Affordable Care Act. The Affordable Care Act--"Obamacare"--is the most important federal medical legislation to be enacted since Medicare. Although the goal of the Affordable Care Act is to improve healthcare coverage, access, and quality for all Americans, people with diabetes are especially poised to benefit from the comprehensive reforms included in the act. Signed into law in 2010, this massive legislation will slowly be enacted over the next 10 years. In the making for at least a decade, it will affect every person in the United States, either directly or indirectly. In this review, we discuss the major changes in healthcare that will take place in the next several years, including (1) who needs to purchase insurance on the Web-based exchange, (2) the cost to individuals and the rebates that they may expect, (3) the rules and regulations for purchasing insurance, (4) the characteristics of the different "metallic" insurance plans that are available, and (5) the states that have agreed to participate. With both tables and figures, we have tried to make the Affordable Care Act both understandable and appreciated. The goal of this comprehensive review is to highlight aspects of the Affordable Care Act that are of importance to practitioners who care for people with diabetes by discussing both the positive and the potentially negative aspects of the program as they relate to diabetes care. abstract_id: PUBMED:32186340 The Affordable Care Act's Enduring Resilience. The Affordable Care Act (ACA) has taken numerous blows, both from the courts and from opponents seeking to undermine it. Yet, due to its policy design and the political forces the ACA has unleashed, the law has shown remarkable resilience. While there remain ongoing efforts to undo the ACA, the smart money has to be on its continued existence. abstract_id: PUBMED:24902505 The Affordable Care Act: opportunities for collaboration between doctors and lawyers. Background: In 2010, the Affordable Care Act (ACA) was signed into law.
The Act seeks to improve Americans' access to high-quality health care while controlling the nation's escalating health care expenditures. The Act is scheduled for further implementation in 2014. Objective: This article elucidates the opportunities and challenges that the ACA presents for constructive, innovative collaboration between the legal and medical professions in contributing to the quest for a more affordable and accessible high quality health care system. Methods: The author analyzed the text of the Act, as well as secondary sources in the areas of law, medicine, and public health. This allowed for the creation of a comprehensive conceptual and empirical framework through which the Act could be properly analyzed and understood. Results: The research described the pitfalls inherent in the Act, but demonstrated that the ACA presents more opportunities than challenges if lawyers and doctors are willing to work together to bring about needed social change. Conclusion: The article qualified these findings by emphasizing that doctors must learn to advocate on behalf of their profession if the potential benefits of the ACA are to be realized. abstract_id: PUBMED:25229683 The Affordable Care Act and orthopaedic trauma. The Affordable Care Act has resulted in a dramatic governmental restructuring of the healthcare insurance market and delivery system. Orthopaedic traumatologists must be aware of the law's impact on their clinical practice, finances, and overall business model. This includes the effect of accountable care organizations, the Independent Payment Advisory Board, and the Physician Value-Based Payment Modifier program, as well as the impact of the Affordable Care Act's grace period provision, medical device excise tax, and cuts to funding for the Disproportionate Share Hospital program. abstract_id: PUBMED:32475253 The Politics of Health Care: Health Disparities, the Affordable Care Act, and Solutions for Success. This article explores why racial and income health disparities continue to exist in the United States. Poverty and racism are the primary drivers of the social problem which impact social determinants of health for vulnerable populations. The theoretical frameworks of conflict theory and critical race theory contextualize the causes and provide direction for overhauling public health policy in general and the Patient Protection and Affordable Care Act (Affordable Care Act) in particular. Although the Affordable Care Act was implemented to increase health coverage, economic and racial health inequities still exist in the United States. Policy recommendations for improving the health and welfare of low-income minorities include: 1) impacting poverty itself by desegregating urban areas of concentrated poverty, and 2) impacting racial discrimination in health care by putting the voices of African-American patients in the forefront to inform culturally relevant policy and practice. abstract_id: PUBMED:21539012 Will the Affordable Care Act make health insurance affordable? Using a budget-based approach to measuring affordability, this issue brief explores whether the subsidies available through the Affordable Care Act are enough to make health insurance affordable for low-income families. Drawing from the Consumer Expenditure Survey, the authors assess how much "room" people have in their budget, after paying for other necessities, to pay for health care needs.
The results show that an overwhelming majority of households have room in their budgets for the necessities, health insurance premiums, and moderate levels of out-of-pocket costs established by the Affordable Care Act. Fewer than 10 percent of families above the federal poverty level do not have the resources to pay for premiums and typical out-of-pocket costs, even with the subsidies provided by the health reform law. Affordability remains a concern for some families with high out-of-pocket spending, suggesting that this is the major risk to insurance affordability. abstract_id: PUBMED:23767130 Affordable Care Act. The Patient Protection and Affordable Care Act of 2010 (PPACA), although a subject of much debate in the United States, was enacted on March 23, 2010, and upheld by the Supreme Court on June 28, 2012. This act advocates that "healthcare is a right, not a privilege." The main goals of PPACA are to minimize the number of uninsured Americans and make healthcare available to everyone at an affordable price. The Congressional Budget Office has determined that 94% of Americans will have healthcare coverage while staying under the $900 billion limit that President Barack Obama established by bending the healthcare cost curve and reducing the deficit over the next 10 years. Answer: The Affordable Care Act (ACA) has been a subject of debate regarding its restrictions and their implications. The ACA placed a moratorium on physician-owned hospital (POH) expansion due to concerns that POHs might increase costs and selectively treat healthier patients. However, a statewide analysis found that while POHs had higher total costs compared to proprietary hospitals (PHs) and general, full-service acute care hospitals (ACHs), they also had significantly higher patient satisfaction. The study suggested that policy decisions should carefully weigh efficiency, patient satisfaction, and the ratio of inpatient-to-post-acute care spending (PUBMED:27017203). The ACA's impact extends beyond POHs, aiming to expand health insurance coverage and emphasize the quality and cost-effectiveness of care. It introduces regulatory changes to health systems, reimbursement structures, and the delivery of critical care, with potential strategies for adaptation including staffing models with daily intensivist rounding, palliative care integration, and telemedicine expansion (PUBMED:26565630). In terms of abortion, the ACA treats it differently than other health services, focusing on funding restrictions rather than gestational age or the health of the pregnant woman, which is distinct from Western European approaches (PUBMED:26742594). For individuals with diabetes, the ACA is expected to bring significant benefits due to comprehensive reforms, although the full impact will unfold over a decade (PUBMED:24927108). Despite challenges, the ACA has shown resilience due to its policy design and the political forces it has unleashed, suggesting that it is likely to endure (PUBMED:32186340). Collaboration between the legal and medical professions is highlighted as a means to leverage the opportunities presented by the ACA for a more affordable and accessible high-quality health care system (PUBMED:24902505). Orthopaedic traumatologists must also be aware of the ACA's impact on clinical practice, finances, and business models, including the effects of accountable care organizations, the Independent Payment Advisory Board, and the Physician Value-Based Payment Modifier program (PUBMED:25229683).
The ACA has been implemented to increase health coverage, yet economic and racial health inequities persist. Policy recommendations include addressing poverty and racial discrimination in health care to improve the health and welfare of low-income minorities (PUBMED:32475253).
Instruction: Is splanchnic perfusion pressure more predictive of outcome than intragastric pressure in neonates with gastroschisis? Abstracts: abstract_id: PUBMED:15135675 Is splanchnic perfusion pressure more predictive of outcome than intragastric pressure in neonates with gastroschisis? Background: The purpose of this study is to determine whether calculated splanchnic perfusion pressure (SPP) is more predictive of outcome than measured intragastric pressure (IGP) in patients with gastroschisis. Methods: Retrospective chart review from 1997 through 2003 of 12 patients with gastroschisis. Results: Eight total patients with gastroschisis underwent reduction and had adequate data for analysis. One patient underwent reduction on day of life (DOL) 6; the remainder underwent reduction on DOL 1. All patients had postreduction IGP <20 mm Hg. The correlation coefficient of IGP and date of extubation was 0.20 and of SPP and date of extubation was -0.51. The correlation coefficient of IGP and return of bowel function was -0.06 and of SPP and return of bowel function was -0.50. Conclusion: SPP may be more predictive of outcome than IGP after gastroschisis repair. abstract_id: PUBMED:16677879 Splanchnic perfusion pressure: a better predictor of safe primary closure than intraabdominal pressure in neonatal gastroschisis. Background/purpose: Both measured intraabdominal pressure (IAP) and calculated splanchnic perfusion pressure (SPP) have been advocated for use in operative management of gastroschisis. We directly compared these 2 clinical indices. Methods: Institutional review board-approved multi-institutional retrospective review from 3 centers with 112 subjects. Splanchnic perfusion pressure was recorded as mean arterial pressure minus IAP. We compared the clinical utility of IAP and SPP using univariate and multivariate regression analyses. Results: Calculated mean SPP was higher among neonates requiring silo placement compared to those without (39.0 ± 1.9 vs 33.7 mm Hg, P < .01). Measured IAP levels were similar between groups (11.5 ± 1.1 vs 10.0 ± 0.5 mm Hg, P < .4). On a receiver operating characteristic curve, the inflection point for more than 90% specificity for silo placement was at an SPP of 44. In multivariate regression analysis adjusting for all factors below, SPP was independently associated with silo placement (odds ratio 1.2, 95% confidence interval 1.1-1.3, P < .01), and IAP was not (odds ratio 1.2, 95% confidence interval <1.0-1.5, P < .1). Conclusions: These data suggest that SPP is a stronger predictor than IAP for the ability to achieve primary closure in the management of neonatal gastroschisis. We infer from these data that an intraoperative SPP of more than 43 mm Hg may obviate the need for silo placement. abstract_id: PUBMED:34660479 High Abdominal Perfusion Pressure Using Umbilical Cord Flap in the Management of Gastroschisis. Background: Gastroschisis management remains controversial. Most surgeons prefer reduction and fascial closure. Others advise staged reduction to avoid a sudden rise in intra-abdominal pressure (IAP). This study aims to evaluate the feasibility of using the umbilical cord as a flap (without skin on the top) for tension-free repair of gastroschisis.
Methods: In a prospective study of neonates with gastroschisis repaired between January 2018 and October 2020 in Tanta University Hospital, we used the umbilical cord as a flap after the evacuation of all its blood vessels and suturing the edges of the cord to the skin edges of the defect. Repairs were guided by monitoring abdominal perfusion pressure (APP), peak inspiratory pressure (PIP), central venous pressure (CVP), and urine output during 24 and 48 h postoperatively. The umbilical cord flap is used for tension-free closure of gastroschisis if PIP > 24 mmHg, IAP > 20 cmH2O (15 mmHg), APP < 50 mmHg, and CVP > 15 cmH2O. Results: In 20 cases of gastroschisis with a median age of 24 h, we applied the umbilical cord flap in all cases, followed by purse string (Prolene Zero) with daily tightening until complete closure in seven cases, secondary suturing after 10 days in four cases, and leaving skin creeping until complete closure in nine cases. During the trials of closure, the range of APP was 49-52 mmHg. The range of IAP (IVP) was 15-20 cmH2O (11-15 mmHg), the range of PIP was 22-25 cmH2O, the range of CVP was 13-15 cmH2O, and the range of urine output was 1-1.5 ml/kg/h. Conclusion: The umbilical cord flap is an easy, feasible, and cheap method for tension-free closure of gastroschisis while limiting the PIP ≤ 24 mmHg, IAP ≤ 20 cmH2O (15 mmHg), APP > 50 mmHg, and CVP ≤ 15 cmH2O. abstract_id: PUBMED:15937815 Gastroschisis revisited: role of intraoperative measurement of abdominal pressure. Background: Animal studies have shown that visceral circulation is well preserved when intraabdominal pressure does not exceed 20 mm Hg. Our aim was to analyze the outcomes of a series of infants with gastroschisis whose surgical management was directed by the intraoperative measurement of bladder pressure. Methods: Forty-two neonates with gastroschisis were surgically managed using intraoperative measurement of bladder pressure at a tertiary care center between July 31, 1992, and March 20, 2004, and their outcome was evaluated. Primary closure with or without prosthetic material was performed when pressures measured 20 mm Hg or less. Delayed closure using a silon pouch was performed when pressures measured more than 20 mm Hg. Categorical variables were analyzed including mode of delivery, associated anomalies, type of closure, complications, and mortality. Continuous variables were analyzed including gestational age, birth weight, bladder pressure, time to full feeds, and length of hospital stay. Categorical and continuous variables for both groups were compared using Fisher's Exact and Wilcoxon's rank-sum tests, respectively, and a significance level of .05 was used. Preapproval of this study was obtained from the Institutional Review Board (No. 6690). Results: Thirty-three (79%) neonates with a mean bladder pressure of 16 mm Hg underwent primary closure and 9 neonates with a mean bladder pressure of 27 mm Hg underwent delayed closure with a silon pouch that was not spring loaded (P < .03). Patients treated with primary closure had faster return to full feeds and significantly shorter hospital length of stay compared with patients treated by delayed closure (P = .04). Surgical morbidity and mortality was nil in patients after primary closure. One patient with total abdominal evisceration died during attempted delayed closure and another patient required reoperation for bowel necrosis after delayed closure.
Conclusion: Primary closure was safely accomplished in 100% of neonates with gastroschisis whose bladder pressure measured 20 mm Hg or less. Further, this group of patients had a faster return to full feeds and a significantly shorter hospital length of stay compared with neonates who required delayed closure. abstract_id: PUBMED:6454777 Intragastric pressure measurement: a guide for reduction and closure of the silastic chimney in omphalocele and gastroschisis. In newborn infants with omphalocele or gastroschisis, traditional criteria for reduction of the herniated viscera either primarily or after application of a Silastic chimney have been the baby's color, respiratory rate, and lower extremity turgor. These are not always accurate or immediately apparent. In order to define more objective guidelines for reduction, measurements of intragastric pressure through a gastrostomy tube using a water manometer were carried out. The validity of this pressure measurement was demonstrated in five puppies, where intra-abdominal pressure correlated well with inferior vena cava pressure and intragastric pressure measured through a gastrostomy tube (R = .98 and .99, respectively). Over a 3.5-yr period, 25 newborn infants with omphalocele (9) or gastroschisis (16) were treated. Ten underwent primary closure, and 15 were treated by placement of a Silastic chimney with serial reduction and closure. Manual reductions were performed once or twice daily to a maximum intragastric pressure of 20 cm water. Greater pressures demonstrated cardiovascular and respiratory compromise both experimentally and clinically. The mean time required for removal of the Silastic chimney was 4.7 days. There were no infections related to the chimney. There were 2 early and 5 late deaths, a 28% mortality rate. The remaining patients are alive and well. Intragastric pressure measurement in patients with omphalocele or gastroschisis provides objective criteria for safe primary closure and Silastic chimney reduction, shortens the time of reduction, and reduces the number of associated circulatory, respiratory, and septic complications. abstract_id: PUBMED:7858453 Prediction of outcome in omphalocele and gastroschisis by intraoperative measurement of intravesical pressure. A simple and accurate measurement of intraabdominal pressure is essential to predict a successful closure of defects in omphalocele and gastroschisis. Intravesical pressure (IVP) is a close estimation of intraabdominal pressure and can be measured safely by placing a catheter in the urinary bladder during surgery. Three neonates with gastroschisis and four with omphalocele were studied. Pressure-related complications such as ascites leakage, ventral hernia, impaired venous return of the lower extremities, and oliguria developed only in the patients with IVP > 20 mmHg after fascial closure. Prolonged hospitalization, ventilation support and intensive care were required for these patients. abstract_id: PUBMED:21336610 Negative pressure wound therapy in the management of neonates with complex gastroschisis. Negative pressure wound therapy (NPWT) is an accepted form of treatment in managing complex wounds in adults and children. It is not as widely used in the neonatal population due to limited clinical experience. We describe the application of the RENASYS™ system (Smith and Nephew, UK) in delivering NPWT to four neonates with complex gastroschisis, all of whom achieved successful outcomes.
abstract_id: PUBMED:16685611 Respiratory pressure monitoring as an indirect method of intra-abdominal pressure measurement in gastroschisis closure. Aim Of Study: Abdominal compartment syndrome (ACS) is a rare but potentially fatal complication of gastroschisis closure. The liberal use of a staged reduction technique has become a well-established method of avoiding this problem. Unfortunately, the use of silos is associated with a high rate of sepsis, prolonged ileus, and ventilation. A method of predicting an impending ACS would help surgeons to decide more objectively which patients would benefit from a staged reduction. A new simple method is presented here which predicts intra-abdominal pressure based on airway pressure readings. Method: Over a four-year period, 34 neonates with gastroschisis underwent measurement of Pplateau respiratory pressures and simultaneous intra-vesical pressures. Result: The Pplateau pressures were approximately 10 cmH2O higher than any concurrent intra-vesical pressure readings. ACS occurred in one patient when pressure measurements were above 15 cmH2O (intra-vesical) or 25 cmH2O (Pplateau). Conclusion: By measuring Pplateau pressures, it is possible to predict the intra-abdominal pressure and hence avoid the development of an abdominal compartment syndrome on closing the abdominal wall in gastroschisis. abstract_id: PUBMED:22325384 Gastroschisis with intestinal atresia--predictive value of antenatal diagnosis and outcome of postnatal treatment. Purpose: The purpose of this study is to evaluate (1) the predictive value of fetal bowel dilatation (FBD) for intestinal atresia in gastroschisis and (2) the postnatal management and outcome of this condition. Methods: A retrospective review of all gastroschisis cases diagnosed in our fetal medicine unit between 1992 and 2010 and treated postnatally in our center was performed. Results: One hundred thirty cases had full postnatal data available. Intestinal atresia was found at surgery in 14 neonates (jejunum, n = 6; ileum, n = 3; ascending colon, n = 3; multiple, n = 2). Polyhydramnios and FBD were more likely in the atresia group compared with infants with no atresia (P = .0003 and P = .005, respectively). Fetal bowel dilatation had 99% negative predictive value (95% confidence interval, 0.9-0.99) and 17% positive predictive value (95% confidence interval, 0.1-0.3) for atresia. Treatment of intestinal atresia included primary anastomosis (n = 5), delayed anastomosis (n = 2), and stoma formation followed by anastomosis (n = 7). Infants with atresia had longer duration of parenteral nutrition, higher incidence of sepsis, and cholestasis compared with infants with no atresia (P = .0003). However, the presence of atresia did not increase mortality. Conclusions: Polyhydramnios and FBD are associated with atresia. Absence of FBD in gastroschisis excludes intestinal atresia. In our experience, atresia is associated with a longer duration of parenteral nutrition but does not influence mortality. These findings may be relevant for antenatal counseling. abstract_id: PUBMED:22075340 Predictors of postnatal outcome in neonates with gastroschisis. Background/purpose: The optimal management of neonates with gastroschisis is unclear, and there is significant morbidity. We performed a review of neonates with gastroschisis treated at our center of pediatric surgery over the last 21 years to determine predictive factors of outcome.
Methods: Single-center retrospective analysis of 79 neonates with gastroschisis (1989-2009) was done. Length of hospital stay (LOS), days of parenteral nutrition (PN), and survival were outcome measures. Univariate and multiple regression analyses were used. Results: Overall survival was 92%, and primary closure was achieved in 80%. Median LOS was 25 days, and median duration on PN, 17 days. Intestinal atresia, closed gastroschisis, secondary closure, and sepsis were the primary variables associated with poor outcome independent of other variables, but prematurity also affected outcome. Route of delivery and associated malformations were not related to poorer outcome. Necrotizing enterocolitis did not occur in any of our patients. Conclusion: Outcome in our patients was favorable as measured by survival, LOS, and days on PN. Primary predictors of poor outcome were factors related to short bowel syndrome and secondary closure, indicating a need to further improve treatment of short bowel syndrome. Answer: Yes, splanchnic perfusion pressure (SPP) appears to be more predictive of outcome than intragastric pressure (IGP) in neonates with gastroschisis. A study found that the correlation coefficient of IGP and date of extubation was 0.20 and of SPP and date of extubation was -0.51. Similarly, the correlation coefficient of IGP and return of bowel function was -0.06 and of SPP and return of bowel function was -0.50, suggesting that SPP may be more predictive of outcome than IGP after gastroschisis repair (PUBMED:15135675). Another study directly compared the two clinical indices and found that SPP was a stronger predictor than IAP for the ability to achieve primary closure in the management of neonatal gastroschisis. The data suggested that an intraoperative SPP of more than 43 mm Hg may obviate the need for silo placement (PUBMED:16677879). These findings indicate that monitoring SPP could be more beneficial in predicting and managing the outcomes for neonates undergoing gastroschisis repair.
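The clinical rule running through these abstracts is simple arithmetic: SPP is derived as mean arterial pressure minus intraabdominal pressure (PUBMED:16677879), with an intraoperative SPP above about 43-44 mm Hg associated with successful primary closure and an IAP at or below 20 mm Hg used as a primary-closure criterion (PUBMED:15937815). The Python sketch below encodes that calculation; the function names and threshold constants are illustrative restatements of the abstracts, not a validated clinical tool.

```python
# Didactic restatement of thresholds reported in the abstracts above;
# names and structure are hypothetical, not clinical decision software.
SPP_PRIMARY_CLOSURE_MMHG = 43   # SPP above this favored primary closure (PUBMED:16677879)
IAP_PRIMARY_CLOSURE_MMHG = 20   # IAP at or below this permitted primary closure (PUBMED:15937815)

def splanchnic_perfusion_pressure(map_mmhg: float, iap_mmhg: float) -> float:
    """SPP = mean arterial pressure - intraabdominal pressure."""
    return map_mmhg - iap_mmhg

def closure_suggestion(map_mmhg: float, iap_mmhg: float) -> str:
    """Compare a patient's SPP and IAP against the reported thresholds."""
    spp = splanchnic_perfusion_pressure(map_mmhg, iap_mmhg)
    if spp > SPP_PRIMARY_CLOSURE_MMHG and iap_mmhg <= IAP_PRIMARY_CLOSURE_MMHG:
        return f"SPP {spp:.0f} mm Hg: consistent with reported primary-closure criteria"
    return f"SPP {spp:.0f} mm Hg: reported series would consider staged (silo) reduction"

# Example: MAP 55 mm Hg with IAP 10 mm Hg gives SPP 45 mm Hg.
print(closure_suggestion(55, 10))
print(closure_suggestion(48, 16))
```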
Instruction: Is surgery always mandatory for type A aortic dissection? Abstracts: abstract_id: PUBMED:17062223 Is surgery always mandatory for type A aortic dissection? Background: The International Registry of Aortic Dissections showed that 42% of the unoperated patients with type A acute aortic dissection were discharged from the hospital after intensive medical treatment. We analyzed our experience to identify a preoperative score for in-hospital mortality to propose an alternative strategy for type A acute aortic dissection. Methods: From 1980 to 2004, 616 consecutive patients with type A acute aortic dissection underwent surgery in our center. The preoperative univariate risk factors with a probability value less than 0.05 were entered into multivariate analysis. A risk equation was developed: predicted mortality = exp(β₀ + Σ βᵢXᵢ) / [1 + exp(β₀ + Σ βᵢXᵢ)]. Results: Early mortality was 25.1% (154 of 616 patients). Five risk factors were identified: age, coma, acute renal failure, shock, and redo operation. The βᵢ values are age 0.023, shock 0.771, reoperation 0.595, coma 1.162, and acute renal failure 0.778; the constant (β₀) is -2.986. Conclusions: Our large, single-center experience allowed us to develop a mathematical model to predict 30-day mortality for type A acute aortic dissection. When the expected mortality is 58% or less, surgery is always indicated. When the predicted mortality is greater than 58%, the possibility of survival is similar, according to International Registry of Aortic Dissections data, for surgery and medical treatment. In such cases surgery can no longer be considered mandatory, and a careful evaluation of the individual patient is recommended to choose the more suitable strategy. abstract_id: PUBMED:24826248 Acute Aortic Dissection Mimicking STEMI in the Catheterization Laboratory: Early Recognition Is Mandatory. Coronary malperfusion due to type A aortic dissection is a life-threatening condition where timely recognition and treatment are mandatory. A 77-year-old woman presented with an acute evolving type A aortic dissection mimicking acute myocardial infarction. Two pathophysiologic mechanisms are discussed: either thrombosis migrating from a previously treated giant aneurysm of the proximal left anterior descending artery or a local arterial complication due to left main stenting. Recognition of these occurrences in the catheterization laboratory is important in order to proceed immediately to surgery. abstract_id: PUBMED:26670778 Ascending aortic replacement for acute type A aortic dissection in octogenarians. Objective: The management of acute type A aortic dissection in elderly patients is controversial. This study aimed to investigate the validity of ascending aortic replacement for acute type A aortic dissection in octogenarians compared with younger patients. Methods: Twenty-five octogenarians, among 117 consecutive patients with acute type A aortic dissection between January 2000 and October 2013 who underwent emergency surgery, were reviewed retrospectively. The median age was 84 years (80-91 years). The patients were six men and 19 women. All 25 patients underwent ascending aortic replacement under deep hypothermic circulatory arrest. In the same period, 55 patients younger than 80 years with acute type A aortic dissection had ascending aortic replacement performed. Clinical data were prospectively entered into our institutional database. Late follow-up was 6.8 ± 2.8 years and was 100% complete.
Results: The 30-day mortality rate was 8% (2/25 patients), which was similar to that in patients younger than 80 years (5%). There were no reoperations in octogenarians and five reoperations in younger patients in the follow-up period. Survival at 1 and 5 years was 80.0 and 59.7% in octogenarians and 90.6 and 81.9% in younger patients, respectively (P = 0.036). Conclusion: Ascending aortic replacement for octogenarians with acute type A aortic dissection was successfully performed, resulting in satisfactory early and midterm survival. Aggressive surgical treatment is mandatory for improving the outcome in octogenarians with acute type A aortic dissection. abstract_id: PUBMED:19883511 Loeys-Dietz syndrome type I and type II: clinical findings and novel mutations in two Italian patients. Background: Loeys-Dietz syndrome (LDS) is a rare autosomal dominant disorder showing the involvement of cutaneous, cardiovascular, craniofacial, and skeletal systems. In particular, LDS patients show arterial tortuosity with widespread vascular aneurysm and dissection, and have a high risk of aortic dissection or rupture at an early age and at aortic diameters that ordinarily are not predictive of these events. Recently, LDS has been subdivided into LDS type I (LDSI) and type II (LDSII) on the basis of the presence or the absence of cranio-facial involvement, respectively. Furthermore, LDSII patients display at least two of the major signs of vascular Ehlers-Danlos syndrome. LDS is caused by mutations in the transforming growth factor (TGF) beta-receptor I (TGFBR1) and II (TGFBR2) genes. The aim of this study was the clinical and molecular characterization of two LDS patients. Methods: The exons and intronic flanking regions of TGFBR1 and TGFBR2 genes were amplified and sequence analysis was performed. Results: Patient 1 was a boy showing dysmorphic signs, blue sclerae, high-arched palate, and bifid uvula; skeletal system involvement, joint hypermobility, velvety and translucent skin, aortic root dilatation, and tortuosity and elongation of the carotid arteries. These signs are consistent with an LDSI phenotype. The sequencing analysis disclosed the novel TGFBR1 p.Asp351Gly de novo mutation falling in the kinase domain of the receptor. Patient 2 was an adult woman showing ascending aorta aneurysm, with vascular complications following surgical intervention. Velvety and translucent skin, venous varicosities and wrist dislocation were present. These signs are consistent with an LDSII phenotype. In this patient and in her daughter, TGFBR2 genotyping disclosed in the kinase domain of the protein the novel p.Ile510Ser missense mutation. Conclusion: We report two novel mutations in the TGFBR1 and TGFBR2 genes in two patients affected with LDS and showing marked phenotypic variability. Due to the difficulties in the clinical approach to a TGFBR-related disease, among patients with vascular involvement, with or without aortic root dilatation and LDS cardinal features, genotyping is mandatory to clarify the diagnosis, and to assess the management, prognosis, and counselling issues. abstract_id: PUBMED:37180800 Onset of pain to surgery time in acute aortic dissections type A: a mandatory factor for evaluating surgical results? Objective: An acute aortic dissection type A (AADA) is a rare but life-threatening event. The mortality rate ranges from 18% to 28%, and death often occurs within the first 24 h, with mortality rising by up to 1%-2% per hour.
Although the onset of pain to surgery time has not been a prominent factor in AADA research, we hypothesize that a patient's preoperative condition depends on the length of this time. Methods: Between January 2000 and January 2018, 430 patients received surgical treatment for acute aortic dissection DeBakey type I at our tertiary referral hospital. In 11 patients, the exact time point of the initial onset of pain could not be determined retrospectively. Accordingly, a total of 419 patients were included in the study. The cohort was categorized into two groups: Group A with an onset of pain to surgery time < 6 h (n = 211) and Group B > 6 h (n = 208), respectively. Results: Median age was 63.5 years (IQR: 53.3-71.4 years; 67.5% male). Preoperative conditions differed significantly between the cohorts. Differences were detected in terms of malperfusion (A: 39.3%; B: 23.6%; P: 0.001), neurological symptoms (A: 24.2%; B: 15.4%; P: 0.024), and the dissection of supra-aortic arteries (A: 25.1%; B: 16.8%; P: 0.037). In particular, cerebral malperfusion (A: 15.2%; B: 8.2%; P: 0.026) and limb malperfusion (A: 18%; B: 10.1%; P: 0.020) were significantly increased in Group A. Furthermore, Group A showed a decreased median survival time (A: 1,359.0 d; B: 2,247.5 d; P: 0.001), extended ventilation time (A: 53.0 h; B: 44.0 h; P: 0.249), and a higher 30-day mortality rate (A: 25.1%; B: 17.3%; P: 0.051). Conclusions: Patients with a short onset of pain to surgery time in cases of AADA present themselves not only with more severe preoperative symptoms but are also the more compromised cohort. Despite early presentation and emergency aortic repair, these patients show increased chances of early mortality. The "onset of pain to surgery time" should become a mandatory factor when making comparable surgical evaluations in the field of AADA. abstract_id: PUBMED:25117175 A case of painless acute Type-A thoracic aortic dissection. We describe the case of an 83-year-old woman with a known aneurysmal thoracic aorta, developing acute breathlessness and hypoxia, with no pain and unremarkable cardiovascular examination. As D-dimers were raised, she was treated with low-molecular-weight heparin (LMWH) for suspected pulmonary embolism. CT pulmonary angiography showed an acutely dissecting, Type-A, thoracic aortic aneurysm. The patient was treated medically with β-blockers. Despite a poor prognosis, she remains well 2 months later. Observational studies of patients over 70 with Type-A dissection show that only 75.3% experience pain, that they are offered surgery less often, and that they have higher mortality. D-dimers are almost always elevated in aortic dissection. No previous studies document breathlessness as the only presenting symptom. This case emphasises the need, in older populations, for a low suspicion threshold for aortic dissection. abstract_id: PUBMED:29445600 Chronic type B "residual" after type A: what I would do? "The major goal of surgery for acute type A aortic dissection is to have an alive patient." This motto still remains the most important directive. However, depending on the extent of the underlying pathology and consequently on the extent of primary surgery, there is and will be a need for additional classical surgical or interventional treatment sooner or later during follow-up in a substantial number of patients having had surgery for acute type A aortic dissection.
This article shall guide the interested reader through the underlying mechanisms as well as treatment options in patients with chronic type B "residual" after type A repair and shall finally suggest preventive strategies to reduce the occurrence of this pathology to a minimum. abstract_id: PUBMED:28661215 Anastomotic leak after surgical repair of type A aortic dissection - prevalence and consequences in midterm follow-up. Background: This study reports the mid-term prevalence and therapeutic consequences of anastomotic leaks after surgery for Stanford type A aortic dissections. Patients And Methods: From July 2007 to July 2013, 93 patients survived surgery for acute type A dissections at our center and underwent a standardized follow-up. The pre-, peri-, and postoperative as well as the midterm results were collected prospectively. Follow-up computed tomography (CT) imaging was performed 7 days, 3 months, and 12 months after surgery, and yearly thereafter, to assess the presence or progression of anastomotic leaks at the aorto-prosthesis anastomotic sites. Results: The mean follow-up was 4 years (1534 ± 724 days). Follow-up CT revealed anastomotic leaks in 4 patients (4.3%). All leaks developed during midterm follow-up and half of them did not increase with time. Two patients required redo surgery for an increase in periaortic extravasation and compression of neighboring structures. Further analysis was not able to reveal independent risk factors for development or deterioration of leaks. Conclusions: Anastomotic leaks after surgery for Stanford Type A aortic dissection can develop in midterm follow-up, even after initially excellent results. Meticulous follow-up is mandatory to detect possible deterioration and a need for redo surgery. abstract_id: PUBMED:31038681 Prevalence of type III arch configuration in patients with type B aortic dissection. Objectives: Type III aortic arch configuration consistently presents anatomical and biomechanical characteristics which have been associated with an increased risk of type B aortic dissection (TBD). Our aim was to investigate the prevalence of type III arch in patients with TBD and type B intramural haematoma (IMH-B). Methods: A multicentre retrospective analysis was performed on patients with TBD and IMH-B observed between 2002 and 2017. The computed tomographic images were reviewed to identify the type of aortic arch. Exclusion criteria included previous arch surgery, presence of aortic dissection or aneurysm proximal to the left subclavian artery and bovine arches. An ad hoc systematic literature review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines to assess the prevalence of type III arch in non-TBD and non-aneurysmal patients. Results: Two hundred and sixty-one patients with TBD/IMH-B were found to be suitable for the study and were stratified according to aortic arch classification. The ad hoc literature search provided 10 relevant articles, from which a total of 7983 control cases were retrieved. TBD/IMH-B patients were significantly younger than controls [64.3, standard error: 0.74 (62.84-65.76) vs mean pooled age 70.5, standard error: 0.40 (69.71-71.28)]. Patients with TBD/IMH-B presented with a significantly higher prevalence of type III arch [41.0% (107/261) (35.2-47.1)] than controls [16% (1241/7983) (10-22)]. Conclusions: Our data indicate an association between type III arch configuration and the occurrence of TBD/IMH-B.
These findings warrant further studies to disclose the potential role of type III arch configuration as an anatomical risk factor for TBD/IMH-B. abstract_id: PUBMED:38488985 Surgical Treatment for Type A Aortic Dissection after Endovascular Aortic Repair: A 12-year, Single-Center Study. Objective: This study aims to investigate the clinical manifestations, operative techniques, and outcomes of patients who undergo open repair after thoracic endovascular aortic repair (TEVAR). Methods: From January 2010 to June 2022, 113 consecutive type A aortic dissection (TAAD) patients underwent secondary open operation after TEVAR at our institution, and the median interval from primary intervention to open surgery was 12 (1.9-48.0) months. We divided the patients into two groups (RTAD (retrograde type A dissection) group, N = 56; PNAD (proximal new aortic dissection) group, N = 57) according to their anatomical features. Survival analysis during the follow-up was evaluated using a Kaplan-Meier survival curve and a log-rank test. Results: The 30-day mortality was 6.2% (7/113), the median follow-up period was 31.7 (IQR 14.7-65.6) months, and the overall survival at 1 year, 5 years, and 10 years was 88.5%, 88.5%, and 87.6%, respectively. Fourteen deaths occurred during the follow-up, but there were no late aorta-related deaths. Three patients underwent total thoracoabdominal aortic replacement 1 year after a second open operation. The RTAD group had a smaller ascending aorta size (42.5 ± 7.7 mm vs 48.4 ± 11.4 mm; P < .01) and a closer proximal landing zone (P < .01) compared to the PNAD group. However, there were no differences in survival between the two groups. Conclusions: TAAD can present as an early or a late complication after TEVAR due to stent-grafting-related issues or disease progression. Open operation can be performed to treat TAAD, and this has acceptable early and mid-term outcomes. Follow-up should become mandatory for patients after TEVAR because these patients are at increased risk for TAAD. Answer: Surgery is not always considered mandatory for type A aortic dissection. According to a study analyzing data from 1980 to 2004, when the expected mortality from surgery is 58% or less, surgery is always indicated. However, when the predicted mortality is greater than 58%, the possibility of survival is similar for both surgery and medical treatment according to the International Registry of Acute Aortic Dissection data. In such cases, surgery can no longer be considered mandatory, and a careful evaluation of the individual patient is recommended to choose the more suitable strategy (PUBMED:17062223). In contrast, other studies emphasize the urgency and necessity of surgical intervention in certain contexts. For instance, acute aortic dissection mimicking acute myocardial infarction requires immediate recognition and surgical treatment (PUBMED:24826248). Furthermore, aggressive surgical treatment is considered mandatory for improving outcomes in octogenarians with acute type A aortic dissection, as it results in satisfactory early and midterm survival (PUBMED:26670778). The management of acute type A aortic dissection in elderly patients is controversial, and observational studies show that older patients with Type-A dissection are offered surgery less and have higher mortality (PUBMED:25117175). Additionally, meticulous follow-up is mandatory to detect possible deterioration and the need for redo surgery after initial repair for Stanford Type A aortic dissection (PUBMED:28661215).
In summary, while surgery is often the treatment of choice for type A aortic dissection, especially in younger patients or those with certain complications, it is not universally mandatory. The decision to operate should be based on a careful assessment of the individual patient's risk factors and expected outcomes.
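Several of the studies summarized above compare survival with Kaplan-Meier estimates, log-rank tests, and Cox models (e.g., PUBMED:26670778, PUBMED:38488985). A minimal sketch of that analysis pattern in Python with the lifelines library; the cohort sizes echo the octogenarian series, but all values are invented purely for illustration:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Invented follow-up times (years) and event indicators (1 = death) for two cohorts.
t_octo, e_octo = rng.exponential(6.0, 25), rng.integers(0, 2, 25)
t_young, e_young = rng.exponential(12.0, 100), rng.integers(0, 2, 100)

kmf = KaplanMeierFitter()
kmf.fit(t_octo, event_observed=e_octo, label="octogenarians")
print(kmf.survival_function_.tail(3))  # estimated S(t) near the end of follow-up

res = logrank_test(t_octo, t_young, event_observed_A=e_octo, event_observed_B=e_young)
print(f"log-rank p-value: {res.p_value:.3f}")
```

The log-rank p-value plays the same role as the P = 0.036 survival comparison reported for octogenarians versus younger patients above.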
Instruction: Does the choice of nucleotide substitution models matter topologically? Abstracts: abstract_id: PUBMED:27009141 Does the choice of nucleotide substitution models matter topologically? Background: In the context of a master's-level programming practical at the computer science department of the Karlsruhe Institute of Technology, we developed and make available open-source code for testing all 203 possible nucleotide substitution models in the Maximum Likelihood (ML) setting under the common Akaike, corrected Akaike, and Bayesian information criteria. We address the question whether model selection matters topologically, that is, whether conducting ML inferences under the optimal, instead of a standard General Time Reversible model, yields different tree topologies. We also assess to what degree models selected and trees inferred under the three standard criteria (AIC, AICc, BIC) differ. Finally, we assess whether the definition of the sample size (#sites versus #sites × #taxa) yields different models and, as a consequence, different tree topologies. Results: We find that all three factors (in order of impact: nucleotide model selection, information criterion used, sample size definition) can yield substantially different final tree topologies (topological difference exceeding 10%) for approximately 5% of the tree inferences conducted on the 39 empirical datasets used in our study. Conclusions: We find that using the best-fit nucleotide substitution model may change the final ML tree topology compared to an inference under a default GTR model. The effect is less pronounced when comparing distinct information criteria. Nonetheless, in some cases we did obtain substantial topological differences. abstract_id: PUBMED:25981189 Variance estimation for nucleotide substitution models. The current variance estimators for most evolutionary models were derived when a nucleotide substitution number estimator was approximated with a simple first order Taylor expansion. In this study, we derive three variance estimators for the F81, F84, HKY85 and TN93 nucleotide substitution models, respectively. They are obtained using the second order Taylor expansion of the substitution number estimator, the first order Taylor expansion of a squared deviation and the second order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in terms of a simulation study. It shows that the variance estimator, which is derived using the second order Taylor expansion of a squared deviation, is more accurate than the other three estimators. In addition, we compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of this bootstrap estimator is similar to the estimator derived by the second order Taylor expansion of a squared deviation. Since the latter has an explicit form, it is more efficient than the bootstrap estimator. abstract_id: PUBMED:27435359 Memory-Based Simple Heuristics as Attribute Substitution: Competitive Tests of Binary Choice Inference Models. Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge.
To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results were consistent with the predictions of the attribute substitution framework. Issues concerning the use of simple heuristics and psychological processes are discussed. abstract_id: PUBMED:17049029 Pseudo-likelihood for non-reversible nucleotide substitution models with neighbour dependent rates. In the field of molecular evolution, genome substitution models with neighbour dependent substitution rates have recently received much attention. It is well-known that substitution of nucleotides does not occur independently of neighbouring nucleotides, but there has been less focus on the phenomenon that this substitution process is also not time-reversible. In this paper I construct a pseudo-likelihood type method for inference in non-reversible substitution models with neighbour dependent substitution rates. I also construct an EM-algorithm for maximising the pseudo-likelihood. For human-mouse aligned sequence data a number of different models are investigated, where I show that strand-symmetric models are appropriate, and that overlapping di-nucleotide models do not fit the data well. abstract_id: PUBMED:27372251 Accounting for substitution and spatial heterogeneity in a labelled choice experiment. Many environmental valuation studies using stated preferences techniques are single-site studies that ignore essential spatial aspects, including possible substitution effects. In this paper, substitution effects are captured explicitly in the design of a labelled choice experiment and the inclusion of different distance variables in the choice model specification. We test the effect of spatial heterogeneity on welfare estimates and transfer errors for minor and major river restoration works, and the transferability of river-specific utility functions, accounting for key variables such as site visitation, spatial clustering and income. River-specific utility functions appear to be transferable, resulting in low transfer errors. However, ignoring spatial heterogeneity increases transfer errors. abstract_id: PUBMED:21642006 Confidence intervals for the substitution number in the nucleotide substitution models. In the nucleotide substitution model for molecular evolution, a major task in the exploration of an evolutionary process is to estimate the substitution number per site of a protein or DNA sequence. The usual estimators are based on the observed proportion of differences between the two nucleotide sequences. However, a more objective approach is to report a confidence interval with precision rather than only providing point estimators. The conventional confidence intervals used in the literature for the substitution number are constructed by the normal approximation.
The performance and construction of confidence intervals for evolutionary models have not been much investigated in the literature. In this article, the performance of these conventional confidence intervals for one-parameter and two-parameter models are explored. Results show that the coverage probabilities of these intervals are unsatisfactory when the true substitution number is small. Since the substitution number may be small in many situations for an evolutionary process, the conventional confidence interval cannot provide accurate information for these cases. Improved confidence intervals for the one-parameter model with desirable coverage probability are proposed in this article. A numerical calculation shows the substantial improvement of the new confidence intervals over the conventional confidence intervals. abstract_id: PUBMED:28754661 Predicting Amino Acid Substitution Probabilities Using Single Nucleotide Polymorphisms. Fast genome sequencing offers invaluable opportunities for building updated and improved models of protein sequence evolution. We here show that Single Nucleotide Polymorphisms (SNPs) can be used to build a model capable of predicting the probability of substitution between amino acids in variants of the same protein in different species. The model is based on a substitution matrix inferred from the frequency of codon interchanges observed in a suitably selected subset of human SNPs, and predicts the substitution probabilities observed in alignments between Homo sapiens and related species at 85-100% of sequence identity better than any other approach we are aware of. The model gradually loses its predictive power at lower sequence identity. Our results suggest that SNPs can be employed, together with multiple sequence alignment data, to model protein sequence evolution. The SNP-based substitution matrix developed in this work can be exploited to better align protein sequences of related organisms, to refine the estimate of the evolutionary distance between protein variants from related species in phylogenetic trees and, in perspective, might become a useful tool for population analysis. abstract_id: PUBMED:23499773 Context-dependent substitution models for circular DNA. The most general context-dependent Markov substitution process, where each substitution event involves only one site and substitution rates depend on the whole sequence, is presented for the first time. The focus is on circular DNA sequences, where the problem of specifying the behaviour of the first and last sites in a linear sequence does not arise. Important special cases include (1) the established models where each site behaves independently, (2) models which are increasingly applied to non-coding DNA, where each site depends on only the immediate neighbouring sites, and (3) models where each site depends on two closest neighbours on both sides, such as the codon models. These special cases are classified and illustrated by published models. It is shown that the existing codon substitution models mix up the mutation and selection processes, rendering the substitution rates challenging to interpret. The classification suggests the study of a more interpretable codon model, where the mutation and selection processes are clearly delineated. Furthermore, this model allows a natural accommodation of possibly different selection pressures in overlapping reading frames, which may contribute to furthering the understanding of viral diseases. 
Also included are brief discussions on the stationary distribution of a context-dependent substitution process and a simple recipe for simulating it on a computer. abstract_id: PUBMED:9866196 Models of amino acid substitution and applications to mitochondrial protein evolution. Models of amino acid substitution were developed and compared using maximum likelihood. Two kinds of models are considered. "Empirical" models do not explicitly consider factors that shape protein evolution, but attempt to summarize the substitution pattern from large quantities of real data. "Mechanistic" models are formulated at the codon level and separate mutational biases at the nucleotide level from selective constraints at the amino acid level. They account for features of sequence evolution, such as transition-transversion bias and base or codon frequency biases, and make use of physicochemical distances between amino acids to specify nonsynonymous substitution rates. A general approach is presented that transforms a Markov model of codon substitution into a model of amino acid replacement. Protein sequences from the entire mitochondrial genomes of 20 mammalian species were analyzed using different models. The mechanistic models were found to fit the data better than empirical models derived from large databases. Both the mutational distance between amino acids (determined by the genetic code and mutational biases such as the transition-transversion bias) and the physicochemical distance are found to have strong effects on amino acid substitution rates. A significant proportion of amino acid substitutions appeared to have involved more than one codon position, indicating that nucleotide substitutions at neighboring sites may be correlated. Rates of amino acid substitution were found to be highly variable among sites. abstract_id: PUBMED:20543444 Grid selection of models of nucleotide substitution. jModelTest is a Java program for the statistical selection of models of nucleotide substitution with thousands of users around the world. For large data sets, the calculations carried out by this program can be too expensive for many users. Here we describe the port of the jModelTest code for Grid computing using DRMAA. This work should facilitate the use of jModelTest on a broad scale. Answer: Yes, the choice of nucleotide substitution models can matter topologically. A study found that the selection of nucleotide substitution models, the information criterion used, and the definition of sample size can yield substantially different final tree topologies for approximately 5% of the tree inferences conducted on empirical datasets. This indicates that using the best-fit nucleotide substitution model may change the final Maximum Likelihood (ML) tree topology compared to an inference under a default GTR model (PUBMED:27009141).
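The model-selection machinery in PUBMED:27009141 boils down to scoring each candidate model's maximized log-likelihood with an information criterion. A small illustrative sketch of AIC, AICc, and BIC, including the two sample-size definitions the study compares; all numeric values are invented:

```python
import math

def aic(loglik, k):
    # k = number of free model parameters
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    # small-sample correction to AIC
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(loglik, k, n):
    # BIC penalty grows with log(n), so the sample-size definition matters
    return k * math.log(n) - 2 * loglik

# Two hypothetical substitution models fit to the same alignment (values invented):
models = {"GTR+G": (-10234.7, 10), "HKY+G": (-10241.9, 5)}
n_sites, n_taxa = 1200, 40

for name, (ll, k) in models.items():
    print(name,
          f"AIC={aic(ll, k):.1f}",
          f"AICc={aicc(ll, k, n_sites):.1f}",
          f"BIC(n=sites)={bic(ll, k, n_sites):.1f}",
          f"BIC(n=sites*taxa)={bic(ll, k, n_sites * n_taxa):.1f}")
```

With these invented numbers, AIC favors the parameter-rich model while BIC favors the simpler one, and enlarging n from #sites to #sites × #taxa stiffens the BIC penalty further; this is exactly the mechanism by which the criterion and sample-size definition can change the selected model and, downstream, the inferred topology.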
Instruction: Do patients with a higher body mass index have a greater risk of advanced-stage renal cell carcinoma? Abstracts: abstract_id: PUBMED:12946742 Do patients with a higher body mass index have a greater risk of advanced-stage renal cell carcinoma? Objectives: To evaluate whether patients with a higher body mass index (BMI) are at elevated risk of an advanced tumor stage for renal cell carcinoma at diagnosis. A high BMI has recently been shown to be associated with advanced tumor stages for some malignant diseases. Methods: From January 1994 to December 2000, 693 operations for renal cell carcinoma were performed in 683 patients at our institution. Ten patients underwent surgery twice for bilateral tumors. Of the 683 patients, 417 were men and 266 women. The mean age at surgery was 62.2 years, and the mean tumor diameter was 5.2 cm. Seventy-eight percent of the patients were asymptomatic at tumor diagnosis. The following parameters were evaluated with regard to a possible correlation to tumor stage and/or tumor diameter: BMI, presence of symptoms, age, sex, hemoglobin, lactate dehydrogenase, erythrocyte sedimentation rate, serum cholesterol, and triglycerides. For statistical analysis, the Spearman rank correlation test was used. Results: The mean BMI was 26.8 +/- 4.4 (range 16.9 to 44.3). Statistical analysis showed a significant positive correlation between advanced T stage and the presence of symptoms (P < 0.0001), erythrocyte sedimentation rate (P < 0.0001), lactate dehydrogenase (P = 0.0015), and age (P = 0.046), and an inverse correlation with hemoglobin (P < 0.0001) and serum cholesterol (P < 0.0001). For all other investigated parameters, including BMI, no significant correlation could be proved. Conclusions: Our data indicate that obese patients are not at greater risk of advanced tumor stages of renal cell carcinoma at the time of diagnosis compared with a population of normal weight. abstract_id: PUBMED:28797586 Obesity as defined by waist circumference but not body mass index is associated with higher renal mass complexity. Objectives: Obesity, typically defined as a body mass index (BMI) ≥ 30 kg/m2, is an established risk factor for renal cell carcinoma (RCC) but is paradoxically linked to less advanced disease at diagnosis and improved outcomes. However, BMI has inherent flaws, and alternate obesity-defining metrics that emphasize abdominal fat are available. We investigated 3 obesity-defining metrics to better examine the associations of abdominal fat vs. generalized obesity with renal tumor stage, grade, or R.E.N.A.L. nephrometry score. Methods And Materials: In a prospective cohort of 99 subjects with renal masses undergoing resection and no evidence of metastatic disease, obesity was assessed using 3 metrics: body mass index (BMI), radiographic waist circumference (WC), and retrorenal fat (RRF) pad distance. R.E.N.A.L. nephrometry scores were calculated based on preoperative CT or MRI. Univariate and multivariate analyses were performed to identify associations between obesity metrics and nephrometry score, tumor grade, and tumor stage. Results: In the 99 subjects, surgery was partial nephrectomy in 51 and radical nephrectomy in 48. Pathology showed benign masses in 11 and RCC in 88 (of which 20 had stage T3 disease). WC was positively correlated with nephrometry score, even after controlling for age, sex, race, and diabetes status (P = 0.02), whereas BMI and RRF were not (P = 0.13 and P = 0.57, respectively).
WC in stage T2/T3 subjects was higher than in subjects with benign masses (P = 0.03). In contrast, subjects with Fuhrman grade 1 and 2 tumors had higher BMI (P < 0.01) and WC (P = 0.04) than subjects with grade 3 and 4 tumors. Conclusions: Our data suggest that obesity measured by WC, but not BMI or RRF, is associated with increased renal mass complexity. Tumor Fuhrman grade exhibited a different trend, with both high WC and BMI associated with lower-grade tumors. Our findings indicate that WC and BMI are not interchangeable obesity metrics. Further evaluation of RCC-specific outcomes using WC vs. BMI is warranted to better understand the complex relationship between general vs. abdominal obesity and RCC characteristics. abstract_id: PUBMED:10069258 Body mass index and risk of renal cell carcinoma. To examine the association between body mass index and renal cell carcinoma risk, we analyzed data from a case-control study of members of a health maintenance organization in western Washington State. We identified cases diagnosed between 1980 and 1995 through a population-based cancer registry. We selected controls from membership files. We collected adult weight and height from medical records. Increased body mass index was associated with increases in risk for both men and women (for the top quartile relative to the bottom quartile of maximum body mass index: in women, OR = 3.3, 95% CI = 1.2-8.7; in men, OR = 2.3, 95% CI = 1.2-4.5). abstract_id: PUBMED:22561514 The benefit of laparoscopic partial nephrectomy in high body mass index patients. Objective: The aims of the present study were to evaluate the effect of body mass index on the surgical outcomes of open partial nephrectomy and laparoscopic partial nephrectomy, and to analyze whether higher body mass index patients may derive greater benefit from laparoscopic partial nephrectomy. Methods: We reviewed 110 patients who underwent open partial nephrectomy and 47 patients who underwent laparoscopic partial nephrectomy at our institution. We analyzed the data to determine which factors were associated with prolonged operative time, increased estimated blood loss and prolonged ischemic time, and compared the results of open partial nephrectomy with those of laparoscopic partial nephrectomy. Results: A statistically significant correlation was observed between body mass index and operative time or estimated blood loss in open partial nephrectomy. Multivariate analysis also demonstrated that body mass index was an independent predictor for prolonged operative time and higher estimated blood loss in open partial nephrectomy, but not in laparoscopic partial nephrectomy. In the normal body mass index group (body mass index < 25.0 kg/m2), although mean operative time in the laparoscopic partial nephrectomy group was significantly longer than that in the open partial nephrectomy group, the difference was relatively small. In the high body mass index group (body mass index ≥ 25.0 kg/m2), the mean operative time of the two groups was not statistically different. The estimated blood loss of open partial nephrectomy was significantly higher than that of laparoscopic partial nephrectomy in both groups. In both operative procedures, tumor size was an independent predictor for prolonged ischemic time in multivariate analysis. Conclusions: Body mass index was an independent predictor for prolonged operative time and higher estimated blood loss in open partial nephrectomy but not in laparoscopic partial nephrectomy.
Laparoscopic partial nephrectomy was less influenced by body mass index and had a greater benefit, especially in high body mass index patients. abstract_id: PUBMED:23400428 Influence of body mass index, smoking, and blood pressure on survival of patients with surgically-treated, low stage renal cell carcinoma: a 14-year retrospective cohort study. The association between survival in patients with renal cell carcinoma and body mass index, smoking, and blood pressure, which relate to three well-established risk factors for the disease, has not been much studied. Our objective was to evaluate this association. A cohort of 1,036 patients with low stage (pT1 and pT2) renal cell carcinoma who underwent radical or partial nephrectomy was enrolled. We retrospectively reviewed medical records and collected survival data. The body mass index, smoking status, and blood pressure at the time of surgery were recorded. Patients were grouped according to their obesity grade, smoking status, and hypertension stage. Survival analysis showed a significant decrease in overall (P = 0.001) and cancer-specific survival (P < 0.001) with being underweight, with no differences by smoking status or perioperative blood pressure. On multivariate analysis, perioperative blood pressure ≥ 160/100 mmHg (HR, 2.642; 95% CI, 1.221-5.720) and being underweight (HR, 4.320; 95% CI, 1.557-11.984) were independent predictors of overall and cancer-specific mortality, respectively. Therefore, it is concluded that being underweight and perioperative blood pressure ≥ 160/100 mmHg negatively affect cancer-specific and overall survival, respectively, while smoking status does not influence survival in patients with renal cell carcinoma. abstract_id: PUBMED:33477676 Body Mass Index in Patients Treated with Cabozantinib for Advanced Renal Cell Carcinoma: A New Prognostic Factor? We analyzed the clinical and pathological features of renal cell carcinoma (RCC) patients treated with cabozantinib stratified by body mass index (BMI). We retrospectively collected data from 16 worldwide centers involved in the treatment of RCC. Overall survival (OS) and progression-free survival (PFS) were analyzed using Kaplan-Meier curves. Cox proportional models were used in univariate and multivariate analyses. We collected data from 224 patients with advanced RCC receiving cabozantinib as second- (113, 50.4%) or third-line (111, 49.6%) therapy. The median PFS was significantly higher in patients with BMI ≥ 25 (9.9 vs. 7.6 months, p < 0.001). The median OS was higher in the BMI ≥ 25 subgroup (30.7 vs. 11.0 months, p = 0.003). As third-line therapy, both median PFS (9.2 months vs. 3.9 months, p = 0.029) and OS (39.4 months vs. 11.5 months, p = 0.039) were longer in patients with BMI ≥ 25. BMI was a significant predictor for both PFS and OS in multivariate analysis. We showed that a BMI ≥ 25 correlates with longer survival in patients receiving cabozantinib. BMI can be easily assessed and should be included in current prognostic criteria for advanced RCC. abstract_id: PUBMED:30516928 Prognostic significance of body mass index in patients with localized renal cell carcinoma. Objective: To investigate the relationship between pretreatment body mass index (BMI) and clinical outcomes in surgically treated patients with localized stage I-III renal cell carcinoma (RCC).
Materials And Methods: From January 2000 to December 2012, 798 patients with stage I-III RCC were recruited from First Affiliated Hospital and Cancer Center of Sun Yat-Sen University. Patients were divided into two groups of BMI < 25 kg/m2 or BMI ≥ 25 kg/m2 according to the World Health Organization classifications for Asian populations. The differences in the long-term survival of these two BMI groups were analyzed. Results: The 5-year failure-free survival rates for the BMI < 25 kg/m2 and BMI ≥ 25 kg/m2 groups were 81.3% and 93.3%, respectively (P = 0.002), and the 5-year overall survival rates were 82.5% and 93.8%, respectively (P = 0.003). BMI was a favorable prognostic factor for overall survival and failure-free survival in a Cox regression model. Conclusions: Pretreatment body mass index was an independent prognostic factor for Chinese patients with surgically treated, localized stage I-III RCC. abstract_id: PUBMED:20006879 Prognostic value of body mass index in Korean patients with renal cell carcinoma. Purpose: Whether body mass index is a prognostic factor in patients with renal cell carcinoma continues to be debated. We investigated the association between body mass index, and clinical/pathological features and prognosis in a large cohort of Korean patients with renal cell carcinoma. Materials And Methods: The medical records of 1,017 patients with renal cell carcinoma who underwent curative surgery between 1988 and 2006 were reviewed. Mean follow-up was 76.9 months. We analyzed the association of body mass index at surgery with tumor pathological features, and its associations with cancer specific survival and overall survival were evaluated using the Kaplan-Meier method and Cox regression models. Additional survival analysis was performed in a subgroup of 897 patients with T1-4N0M0 disease. Results: Of the 1,017 patients, 363 (35.7%), 526 (51.7%) and 128 (12.6%) had a body mass index of less than 23 (normal), 23 to 27.5 (overweight) and 27.5 or greater (obese) kg/m2, respectively. Overweight and obese patients had less aggressive tumors, such as fewer lymph node and/or distant metastases (p = 0.001), lower pathological T stage (p = 0.047) and lower Fuhrman grade (p = 0.033) vs normal weight patients. In terms of cancer specific survival and overall survival, multivariate analysis showed that overweight (p = 0.040 and p = 0.047, respectively) and obese (p = 0.024 and p = 0.010, respectively) patients had favorable survival rates compared to those with a body mass index in the normal range in the overall cohort (T1-4, any N, any M). In addition, overweight (p = 0.022 and p = 0.029, respectively) and obese (p = 0.009 and p = 0.002, respectively) status was significantly associated with cancer specific and overall survival in the T1-4N0M0 subgroup. Conclusions: Our findings suggest that overweight and obese Korean patients with renal cell carcinoma have more favorable pathological features and a better prognosis than those with a normal body mass index. abstract_id: PUBMED:16184476 A prospective study of body mass index, hypertension, and smoking and the risk of renal cell carcinoma (United States). Objective: We prospectively investigated the independent association of hypertension, thiazide use, body mass index, weight change, and smoking with the risk of renal cell carcinoma among men and women using biennial mailed questionnaires.
Methods: The study population included 118,191 women participating in the Nurses' Health Study and 48,953 men participating in the Health Professionals Follow-up Study. Results: During 24 years of follow-up for women and 12 years for men, 155 and 110 incident cases of renal cell carcinoma were confirmed, respectively. In multivariate models including age, body mass index (BMI), smoking and hypertension, higher BMI was confirmed as a risk factor for women and smoking as a risk factor for men and women. After adjusting for age, updated BMI and smoking, an updated diagnosis of hypertension was associated with renal cell carcinoma (RCC); the relative risk (RR) was 1.9 (95% CI 1.4-2.7) for women and 1.8 (95% CI 1.2-2.7) for men. Based on limited data regarding the use of thiazide diuretics, we did not observe a risk associated with their use, independent of the diagnosis of hypertension. Conclusions: Diagnosis of hypertension, higher BMI, and increasing pack-years of smoking appear to independently increase the risk of renal cell carcinoma. abstract_id: PUBMED:17656207 Influence of body mass index on prognosis of Japanese patients with renal cell carcinoma. Objectives: Obesity, a significant risk factor for renal cell carcinoma (RCC), has recently become a public health problem in Japan. In addition, the incidence of RCC in Japan has steadily increased during the past few decades. The objective of the present study was to investigate the relationship of body mass index (BMI) to the prognosis of Japanese patients with RCC. Methods: From January 1991 to December 2002, 270 operations for RCC in 269 consecutive Japanese patients were performed at our institution. BMI data were available for 264 (98%) of the 269 patients. Overall, cancer-specific, and recurrence-free survival were investigated using the Kaplan-Meier method, and the Cox regression model was used to determine the significant prognostic factors on multivariate analysis. Results: Of the 264 patients, 140 (53.0%), 67 (25.4%), and 57 (21.6%) had a BMI of less than 23, 23 to 25, and greater than 25 kg/m2, respectively. These three BMI groups were not significantly different in age, sex, presenting symptoms, tumor stage, or grade. On univariate analysis, a significant advantage regarding overall and cancer-specific, but not recurrence-free, survival was found for patients with a BMI of 23 kg/m2 or greater compared with those with a BMI of less than 23 kg/m2. Also, on multivariate analysis, the BMI was significantly associated with overall and cancer-specific, but not recurrence-free, survival. Conclusions: Our findings suggest that a BMI of 23 kg/m2 or more favorably affects the prognosis of Japanese patients with RCC. However, additional studies are needed to verify these results. Answer: The relationship between body mass index (BMI) and the risk of advanced-stage renal cell carcinoma (RCC) is not straightforward, as evidenced by the findings from various studies. One study found no significant correlation between BMI and advanced tumor stages of RCC at the time of diagnosis (PUBMED:12946742). This suggests that obese patients are not at greater risk of advanced tumor stages compared with a population of normal weight. However, another study indicated that while BMI was not associated with renal mass complexity, waist circumference (WC), which is a measure of abdominal fat, was positively correlated with nephrometry score, even after controlling for other factors. 
This study found that WC, but not BMI or retrorenal fat (RRF), is associated with increased renal mass complexity (PUBMED:28797586). In contrast, a case-control study showed that increased BMI was associated with an increased risk for RCC in both men and women (PUBMED:10069258). This suggests that higher BMI may be a risk factor for RCC development, but it does not directly address the stage of cancer at diagnosis. Another study found that higher BMI patients may derive greater benefit from laparoscopic partial nephrectomy compared to open partial nephrectomy, indicating that BMI can influence surgical outcomes (PUBMED:22561514). Furthermore, a retrospective cohort study found that being underweight, rather than overweight, was associated with a decrease in overall and cancer-specific survival in patients with low-stage RCC (PUBMED:23400428). This suggests that higher BMI may not necessarily be associated with worse outcomes. Additionally, a study on patients treated with cabozantinib for advanced RCC found that a BMI ≥ 25 was associated with longer survival, indicating that higher BMI could be a favorable prognostic factor in this context (PUBMED:33477676). In summary, the evidence does not conclusively show that patients with a higher BMI have a greater risk of advanced-stage RCC. Some studies suggest that higher BMI may be associated with increased risk or complexity of RCC, while others indicate that higher BMI may be associated with better outcomes or not correlated with advanced stages at all.
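Many of the BMI studies above relate BMI categories to survival with Kaplan-Meier and Cox proportional hazards models (e.g., PUBMED:20006879, PUBMED:30516928). A minimal synthetic-data sketch of such a Cox fit using the lifelines library; the column names and all values are invented for illustration, not taken from any of the cohorts above:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300

# Synthetic cohort: follow-up time (months), death indicator, BMI group, age.
df = pd.DataFrame({
    "time": rng.exponential(60.0, n),
    "event": rng.integers(0, 2, n),
    "bmi_ge_25": rng.integers(0, 2, n),  # 1 if BMI >= 25 kg/m2
    "age": rng.normal(60.0, 10.0, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])  # exp(coef) is the hazard ratio
```

A hazard ratio below 1 for the BMI indicator would correspond to the "higher BMI, better survival" pattern reported in the Korean and cabozantinib cohorts, while a value above 1 would indicate the opposite.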
Instruction: Does the efficacy of BCG decline with time since vaccination? Abstracts: abstract_id: PUBMED:9526191 Does the efficacy of BCG decline with time since vaccination? Objective: To investigate whether the protective efficacy of bacille Calmette-Guérin (BCG) against tuberculosis decreases with time since vaccination. Design: A quantitative review of all 10 randomized trials of BCG against tuberculosis in purified protein derivative (PPD)-negative individuals that presented data for discrete periods. For each trial, we derived log rate ratios for the annual change in the efficacy of BCG. We also compared efficacy in the first two years, and the first 10 years, to that in the rest of the trial. Results: There was considerable heterogeneity between trials in the annual change in the efficacy of BCG. In seven, efficacy decreased over time, while in three it increased. Average annual change in efficacy was not related to overall efficacy. Efficacy also varied between trials in the first two years after vaccination, at more than two years after vaccination and in the first ten years after vaccination. However, the variation in efficacy between trials more than 10 years after vaccination was not statistically significant (P = 0.26). We therefore calculated that the average efficacy more than 10 years after vaccination was 14% (95% confidence interval -9% to 32%). Conclusion: BCG protection can wane with time since vaccination. There is no good evidence that BCG provides protection more than 10 years after vaccination. abstract_id: PUBMED:3589111 BCG vaccination in France. First used in 1921 and obligatory since 1950, BCG vaccination is a part of the classical arsenal in the struggle against tuberculosis in France. The progressive reduction in the incidence of tuberculosis leads one to wonder what to expect now and in the future, so much so that the degree of protection conferred by BCG is continually discussed. In animal experiments, BCG vaccination is efficacious but there is no absolute protection conferred. In man, the results of 9 prospective studies performed with control groups have thrice shown an 80% protection, thrice a 30% protection and thrice no protection (in the case of studies from Southern India). On the grounds that there were large differences in the methodology of the 9 studies and that the best methodology was found in the 3 studies which showed good protective efficacy of BCG, it is justifiable to consider that the protection conferred against tuberculosis by a correct BCG vaccination is of the order of 80% and lasts 15 years (direct effect of BCG). Equally, a similar protection has been observed in numerous retrospective studies. But it is not accompanied by a reduction in the transmission of tuberculous bacilli in the population vaccinated with BCG, since one does not observe any reduction in the incidence of tuberculosis in non-vaccinated subjects who live in contact with the vaccinated population (indirect effect of BCG). (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:9676117 Results of the BCG vaccination in Hungary since 1929: evaluation of preventive and immunotherapeutic effectiveness Background: The BCG (Bacille Calmette-Guérin), a living attenuated bacterial vaccine with a characteristic residual virulence, has been used to prevent tuberculosis since 1921 (in Hungary non-systematically since 1929) and applied for immunostimulation in neoplasia since the 1960s.
Measures: Considering the grave tuberculosis epidemiological situation in Hungary, BCG revaccination of tuberculin-negative individuals up to 20 years of age became compulsory in 1959. The Pasteur P1173P2 BCG strain has been used for vaccine manufacturing with improved quality control methods according to the requirements of the WHO. Within this systematic BCG primo- and revaccination policy, 8.1 million BCG vaccinations were performed from 1959 to 1983, and a further 3.1 million between 1984 and 1996. Results: Linear regression analysis demonstrates that the decrease of the TB incidence in children was 3-5 times more rapid (annual average decrease was 25.5%) than in adults since 1959. Multiple regression analysis indicates that BCG is the strongest explanatory variable for the decrease in childhood TB incidence among the antituberculosis measures. The BCG vaccination efficacy is demonstrated by 2 x 2 table analysis. Systematic BCG vaccination, through live BCG persisting in macrophages, confers acquired resistance against virulent TB infections. Immunostimulation in neoplasia has been performed with a concentrated BCG preparation developed in Hungary since 1979. Adverse reactions occur at an accepted frequency. The number of BCG-vaccinated subjects worldwide from 1948 to 1974 was estimated at 1.5 billion. The yearly number of BCG vaccinations in the WHO EPI system is estimated at 50-100 million. Conclusion: The efficacy of the BCG vaccination can only be ensured if the vaccine is manufactured and controlled with standardized methods, and applied in a systematic vaccination programme. The effectiveness has to be evaluated in statistically valid biostatistical models. abstract_id: PUBMED:26970464 Does effect of BCG vaccine decrease with time since vaccination and increase tuberculin skin test reaction? The protective efficacy of BCG was studied for over 15 years, from 1968, in South India. A secondary analysis of data was performed to investigate the relationship between Bacille Calmette-Guérin (BCG) and tuberculosis (TB) disease and between BCG and positive tuberculin skin test for different time periods among children aged less than 10 years. A randomized controlled trial was conducted, where 281,161 persons were allocated to receive BCG 0.1mg, BCG 0.01mg or placebo. Tuberculin skin test was performed at baseline and at 4 years after BCG vaccination. Surveys were conducted every 2.5 years to detect all new cases of culture-positive/smear-positive TB occurring in the community over a 15-year period. Relative risk (RR) was obtained from the ratio of incidence among the vaccinated and the placebo groups. Among those children vaccinated with 0.1mg of BCG, the RR for TB was 0.56 (95% CI: 0.32-0.87, P=0.01) at 12.5 years but increased to 0.73 later. A similar pattern was seen with 0.01mg. The increase in the number of skin test positives with 0.1mg of BCG was 57.8%, 49.4% and 34% for cut-off points at ≥10mm, ≥12mm and ≥15mm, respectively. The study suggests that the effect of BCG may decrease with time since vaccination and that tuberculin positivity at the post-vaccination test was higher due to BCG. abstract_id: PUBMED:28738015 Observational study to estimate the changes in the effectiveness of bacillus Calmette-Guérin (BCG) vaccination with time since vaccination for preventing tuberculosis in the UK. Background: Until recently, evidence that protection from the bacillus Calmette-Guérin (BCG) vaccination lasted beyond 10 years was limited.
In the past few years, studies in Brazil and the USA (in Native Americans) have suggested that protection from BCG vaccination against tuberculosis (TB) in childhood can last for several decades. The UK's universal school-age BCG vaccination programme was stopped in 2005 and the programme of selective vaccination of high-risk (usually ethnic minority) infants was enhanced. Objectives: To assess the duration of protection of infant and school-age BCG vaccination against TB in the UK. Methods: Two case-control studies of the duration of protection of BCG vaccination were conducted, the first on minority ethnic groups who were eligible for infant BCG vaccination 0-19 years earlier and the second on white subjects eligible for school-age BCG vaccination 10-29 years earlier. TB cases were selected from notifications to the UK national Enhanced Tuberculosis Surveillance system from 2003 to 2012. Population-based control subjects, frequency matched for age, were recruited. BCG vaccination status was established from BCG records, scar reading and BCG history. Information on potential confounders was collected using computer-assisted interviews. Vaccine effectiveness was estimated as a function of time since vaccination, using a case-cohort analysis based on Cox regression. Results: In the infant BCG study, vaccination status was determined using vaccination records as recall was poor and concordance between records and scar reading was limited. A protective effect was seen up to 10 years following infant vaccination [< 5 years since vaccination: vaccine effectiveness (VE) 66%, 95% confidence interval (CI) 17% to 86%; 5-10 years since vaccination: VE 75%, 95% CI 43% to 89%], but there was weak evidence of an effect 10-15 years after vaccination (VE 36%, 95% CI negative to 77%; p = 0.396). The analyses of the protective effect of infant BCG vaccination were adjusted for confounders, including birth cohort and ethnicity. For school-aged BCG vaccination, VE was 51% (95% CI 21% to 69%) 10-15 years after vaccination and 57% (95% CI 33% to 72%) 15-20 years after vaccination, beyond which time protection appeared to wane. Ascertainment of vaccination status was based on self-reported history and scar reading. Limitations: The difficulty in examining vaccination sites in older women in the high-risk minority ethnic study population and the sparsity of vaccine record data in the later time periods precluded robust assessment of protection from infant BCG vaccination > 10 years after vaccination. Conclusions: Infant BCG vaccination in a population at high risk for TB was shown to provide protection for at least 10 years, whereas in the white population school-age vaccination was shown to provide protection for at least 20 years. This evidence may inform TB vaccination programmes (e.g. the timing of administration of improved TB vaccines, if they become available) and cost-effectiveness studies. Methods to deal with missing record data in the infant study could be explored, including the use of scar reading. Funding: The National Institute for Health Research Health Technology Assessment programme. During the conduct of the study, Jonathan Sterne, Ibrahim Abubakar and Laura C Rodrigues received other funding from NIHR; Ibrahim Abubakar and Laura C Rodrigues have also received funding from the Medical Research Council. Punam Mangtani received funding from the Biotechnology and Biological Sciences Research Council.
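In case-cohort designs like the UK study above, vaccine effectiveness is derived from the estimated hazard ratio as VE = (1 - HR) x 100%. A small sketch of that conversion; the example hazard ratio and its CI are back-calculated from the reported VE of 75% (95% CI 43% to 89%) purely for illustration:

```python
def vaccine_effectiveness(hr, hr_lo, hr_hi):
    # VE = (1 - HR) * 100; the CI bounds swap because VE decreases as HR increases.
    return (1 - hr) * 100, (1 - hr_hi) * 100, (1 - hr_lo) * 100

ve, lo, hi = vaccine_effectiveness(hr=0.25, hr_lo=0.11, hr_hi=0.57)
print(f"VE = {ve:.0f}% (95% CI {lo:.0f}% to {hi:.0f}%)")  # VE = 75% (43% to 89%)
```

The "95% CI negative" phrasing in the 10-15 year estimate simply means the upper HR bound exceeded 1, so the lower VE bound fell below 0%.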
abstract_id: PUBMED:7899761 BCG vaccination. Given especially to children, BCG vaccine has a protective effect against severe forms of tuberculosis in 70 to 80% of the subjects. It has a modest epidemiological effect since socioeconomic and sanitary conditions as well as human immunodeficiency virus infection influence disease spread. The precise mechanisms and the duration of BCG protection remain poorly defined. Vaccine complications in HIV positive subjects appear minimal compared with expectations. The tuberculosis eradication policy is defined in each country depending on the annual risk of infection. The threshold of 0.01% (0.03% in France) would justify BCG vaccination of exposed persons alone. Today there is renewed interest in research concerning BCG, 70 years after its first introduction for clinical use. abstract_id: PUBMED:18093699 Abortive reaction and time of scar formation after BCG vaccination. We describe two practical issues regarding BCG vaccination that merit attention. These are time of scar formation after BCG vaccination and abortive reaction. Scar formation after vaccination may take 6 months or more. Babies showing abortive reaction after BCG vaccination should be considered different from non-reactors. All health care providers and vaccinologists should be sensitized that abortive reaction is one of the local reactions after BCG vaccination. abstract_id: PUBMED:28454649 Variable BCG efficacy in rhesus populations: Pulmonary BCG provides protection where standard intra-dermal vaccination fails. M. bovis BCG vaccination against tuberculosis (TB) notoriously displays variable protective efficacy in different human populations. In non-human primate studies using rhesus macaques, despite efforts to standardise the model, we have also observed variable efficacy of BCG upon subsequent experimental M. tuberculosis challenge. In the present head-to-head study, we establish that the protective efficacy of standard parenteral BCG immunisation varies among different rhesus cohorts. This provides different dynamic ranges for evaluation of investigational vaccines, opportunities for identifying possible correlates of protective immunity and for determining why parenteral BCG immunisation sometimes fails. We also show that pulmonary mucosal BCG vaccination confers reduced local pathology and improves haematological and immunological parameters post-infection in animals that are not responsive to induction of protection by standard intra-dermal BCG. These results have important implications for pulmonary TB vaccination strategies in the future. abstract_id: PUBMED:30552267 Is the decline in neonatal mortality in northern Ghana, 1996-2012, associated with the decline in the age of BCG vaccination? An ecological study. Objective: To examine the association between early Bacille Calmette-Guerin (BCG) vaccination and neonatal mortality in northern Ghana. Methods: This ecological study used vaccination and mortality data from the Navrongo Health and Demographic Surveillance System. First, we assessed and compared changes in neonatal mortality rates (NMRs) and median BCG vaccination age from 1996 to 2012. Second, we compared the changes in NMR and median BCG vaccination age from 2002 to 2012 by delivery place when data on delivery place were available. Results: Neonatal mortality rates declined from 46 to 12 per 1000 live births between 1996 and 2012 (trend test: p < 0.001). Within the same period, median BCG vaccination age declined from 46 to 4 days (trend test: p < 0.001).
Among home deliveries, BCG vaccination age declined from 39 days in 2002 to 7 days in 2012 (trend test: p < 0.001) and neonatal mortality declined by 24/1000 (trend test: p < 0.001). Among health facility deliveries, BCG vaccination age was stable around 3 days from 2002 to 2012 (trend test: p=0.49) and neonatal mortality declined by 9/1000 (trend test: p=0.04). In a small study of children whose vaccination cards were inspected within the first 28 days of life, the HR for BCG-vaccinated compared with BCG-unvaccinated children was 0.55 (95% CI 0.12 to 2.40). Conclusion: The data support the hypothesis that early BCG vaccination may be associated with a decrease in neonatal mortality. However, as suggested by WHO, randomised controlled trials are required to address the question of whether there is indeed a causal association between early BCG vaccination and neonatal mortality. abstract_id: PUBMED:11187378 BCG vaccination--controversy and compromise. In Sweden, BCG vaccination is recommended only for certain risk groups, as the incidence of TB is very low. Children from high-endemic areas, as well as health care personnel, especially those working in risk areas, are the most important target groups. The efficacy of BCG vaccination has varied in different investigations, but early Nordic studies have shown approximately 80 percent protection. Vaccination prevents disseminated but not localized pulmonary disease. There are no data supporting revaccination. Today some Swedish children are vaccinated without a clear indication, due to caretakers' fear of TB. The risk of new infection is very low in Sweden today, and is for all practical purposes limited to the closest family members of affected individuals. If large numbers of refugees from high-endemic countries arrive in Sweden, the epidemiological situation must be closely monitored. Answer: Yes, the efficacy of BCG (Bacille Calmette-Guérin) vaccination against tuberculosis (TB) can decline with time since vaccination. A quantitative review of 10 randomized trials showed that there was considerable heterogeneity between trials in the annual change in the efficacy of BCG. In seven trials, efficacy decreased over time, while in three it increased. The average efficacy more than 10 years after vaccination was calculated to be 14%, with a 95% confidence interval of -9% to 32%, indicating no good evidence that BCG provides protection more than 10 years after vaccination (PUBMED:9526191). Another study conducted in South India over 15 years found that the relative risk for TB among children vaccinated with BCG increased from 0.56 at 12.5 years to 0.73 later, suggesting that the effect of BCG may decrease with time since vaccination (PUBMED:26970464). An observational study in the UK found that infant BCG vaccination provided protection for at least 10 years, while school-age vaccination provided protection for at least 20 years, after which protection appeared to wane (PUBMED:28738015). In contrast, a study in France suggested that the protection conferred by BCG vaccination is around 80% and lasts for 15 years (PUBMED:3589111). However, the methodology of studies showing varying levels of protection has been questioned, and the best methodology was found in the studies which showed good protective efficacy of BCG. Overall, these studies indicate that while BCG vaccination can provide significant protection against TB, the level and duration of protection can vary, and in many cases, the efficacy of BCG does decline over time since vaccination.
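The South India trial (PUBMED:26970464) obtains the relative risk as the ratio of incidence between vaccinated and placebo groups, and the Hungarian report (PUBMED:9676117) cites 2 x 2 table analysis. A minimal sketch of that calculation with a Katz log-normal confidence interval; the counts below are invented and merely chosen to reproduce an RR near the reported 0.56:

```python
import math

def relative_risk(a, n1, c, n0, z=1.96):
    """RR for a events among n1 vaccinated vs c events among n0 controls,
    with a Katz log-normal 95% CI."""
    rr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Invented counts: TB cases among BCG-vaccinated vs placebo children.
rr, lo, hi = relative_risk(a=28, n1=5000, c=50, n0=5000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An RR below 1 indicates protection; tracking how the RR drifts toward 1 across successive survey periods is how waning efficacy was detected in the trial.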
Instruction: Are We Missing the Mark? Abstracts: abstract_id: PUBMED:23519918 Estimation of Stratified Mark-Specific Proportional Hazards Models with Missing Marks. An objective of randomized placebo-controlled preventive HIV vaccine efficacy trials is to assess the relationship between the vaccine effect to prevent infection and the genetic distance of the exposing HIV to the HIV strain represented in the vaccine construct. Motivated by this objective, recently a mark-specific proportional hazards model with a continuum of competing risks has been studied, where the genetic distance of the transmitting strain is the continuous 'mark' defined and observable only in failures. A high percentage of genetic marks of interest may be missing for a variety of reasons, predominantly due to rapid evolution of HIV sequences after transmission before a blood sample is drawn from which HIV sequences are measured. This research investigates the stratified mark-specific proportional hazards model with missing marks where the baseline functions may vary with strata. We develop two consistent estimation approaches, the first based on the inverse probability weighted complete-case (IPW) technique, and the second based on augmenting the IPW estimator by incorporating auxiliary information predictive of the mark. We investigate the asymptotic properties and finite-sample performance of the two estimators, and show that the augmented IPW estimator, which satisfies a double robustness property, is more efficient. abstract_id: PUBMED:26511033 Mark-specific hazard ratio model with missing multivariate marks. An objective of randomized placebo-controlled preventive HIV vaccine efficacy (VE) trials is to assess the relationship between vaccine effects to prevent HIV acquisition and continuous genetic distances of the exposing HIVs to multiple HIV strains represented in the vaccine. The set of genetic distances, only observed in failures, is collectively termed the 'mark.' The objective has motivated a recent study of a multivariate mark-specific hazard ratio model in the competing risks failure time analysis framework. Marks of interest, however, are commonly subject to substantial missingness, largely due to rapid post-acquisition viral evolution. In this article, we investigate the mark-specific hazard ratio model with missing multivariate marks and develop two inferential procedures based on (i) inverse probability weighting (IPW) of the complete cases, and (ii) augmentation of the IPW estimating functions by leveraging auxiliary data predictive of the mark. Asymptotic properties and finite-sample performance of the inferential procedures are presented. This research also provides general inferential methods for semiparametric density ratio/biased sampling models with missing data. We apply the developed procedures to data from the HVTN 502 'Step' HIV VE trial. abstract_id: PUBMED:33191955 A Hybrid Approach for the Stratified Mark-Specific Proportional Hazards Model with Missing Covariates and Missing Marks, with Application to Vaccine Efficacy Trials. Deployment of the recently licensed CYD-TDV dengue vaccine requires understanding of how the risk of dengue disease in vaccine recipients depends jointly on a host biomarker measured after vaccination (neutralization titer - NAb) and on a "mark" feature of the dengue disease failure event (the amino acid sequence distance of the dengue virus to the dengue sequence represented in the vaccine).
The CYD14 phase 3 trial of CYD-TDV measured NAb via case-cohort sampling and the mark in dengue disease failure events, with about a third missing marks. We addressed the question of interest by developing inferential procedures for the stratified mark-specific proportional hazards model with missing covariates and missing marks. Two hybrid approaches are investigated that leverage both augmented inverse probability weighting and nearest neighborhood hot deck multiple imputation. The two approaches differ in how the imputed marks are pooled in estimation. Our investigation shows that NNHD imputation can lead to biased estimation without a properly selected neighborhood. Simulations show that the developed hybrid methods perform well with unbiased NNHD imputations from proper neighborhood selection. The new methods applied to CYD14 show that NAb is strongly inversely associated with risk of dengue disease in vaccine recipients, more strongly against dengue viruses with shorter distances. abstract_id: PUBMED:25737891 Identification of a person with the help of bite mark analysis. Forensic dentistry is an essential part of forensic science and mainly involves the identification of an assailant by comparing a record of their dentition (set of teeth) with a record of a bite mark left on a victim. Other uses in law for dentists include the identification of human remains, medico-legal assessment of trauma to oral tissues, and testimony about dental malpractice. While the practice of human identification is well established, validated and proven to be accurate, the practice of bite mark analysis is less well accepted. The principle of identifying an injury as a bite mark is complex and, depending on severity and anatomical location, highly subjective. Following the identification of an injury as a bite mark, the comparison of the pattern produced to a suspect's dentition is even more contentious and an area of great debate within contemporary odontological practice. Like fingerprints and DNA, bite marks are unique to an individual - such as distance and angles between teeth, missing teeth, fillings and dental work. This type of impression evidence can be left in the skin of a victim and also can be in food, chewing gum and other miscellaneous items such as pens and pencils. The advent of DNA analysis and its recovery from bite marks has offered an objective method of bite mark analysis. abstract_id: PUBMED:26004801 MARK-AGE data management: Cleaning, exploration and visualization of data. Databases are an organized collection of data and are necessary to investigate a wide spectrum of research questions. For data evaluation, analysts should be aware of possible data quality problems that can compromise the validity of results. Therefore data cleaning is an essential part of the data management process, which deals with the identification and correction of errors in order to improve data quality. In our cross-sectional study, biomarkers of ageing, analytical, anthropometric and demographic data from about 3000 volunteers have been collected in the MARK-AGE database. Although several preventive strategies were applied before data entry, errors like miscoding, missing values, batch problems etc., could not be avoided completely. Such errors can result in misleading information and affect the validity of the performed data analysis. Here we present an overview of the methods we applied for dealing with errors in the MARK-AGE database.
We especially describe our strategies for the detection of missing values, outliers and batch effects and explain how they can be handled to improve data quality. Finally, we report on the tools used for data exploration and data sharing between MARK-AGE collaborators. abstract_id: PUBMED:26807411 Missing data exploration: highlighting graphical presentation of missing pattern. Functions shipped with R base can fulfill many tasks of missing data handling. However, because the data volume of electronic medical record (EMR) systems is always very large, more sophisticated methods may be helpful in data management. The article focuses on missing data handling by using advanced techniques. There are three types of missing data, that is, missing completely at random (MCAR), missing at random (MAR) and not missing at random (NMAR). This classification system depends on how missing values are generated. Two packages, Multivariate Imputation by Chained Equations (MICE) and Visualization and Imputation of Missing Values (VIM), provide sophisticated functions to explore missing data patterns. In particular, the VIM package is especially helpful in visual inspection of missing data. Finally, correlation analysis provides information on the dependence of missing data on other variables. Such information is useful in subsequent imputations. abstract_id: PUBMED:33135748 A Simple, Inexpensive Method for Mark-Recapture of Ixodid Ticks. Mark-recapture techniques have been widely used and specialized to study organisms throughout the field of biology. To mark-recapture ticks (Ixodida), we have created a simple method to mark ticks using nail polish applied with an insect pin secured in a pencil that allows for a variety of questions to be answered. For measuring tick control efficacy, estimating population size, or measuring movement of ticks, this inexpensive mark-recapture method has been easily applied in the field and in the lab to provide useful data to answer a variety of questions about ticks. abstract_id: PUBMED:26461462 Goodness-of-fit test of the stratified mark-specific proportional hazards model with continuous mark. Motivated by the need to assess HIV vaccine efficacy, previous studies proposed an extension of the discrete competing risks proportional hazards model, in which the cause of failure is replaced by a continuous mark only observed at the failure time. However, the model assumptions may fail in several ways, and no diagnostic testing procedure for this situation has been proposed. A goodness-of-fit test procedure for the stratified mark-specific proportional hazards model in which the regression parameters depend nonparametrically on the mark and the baseline hazards depend nonparametrically on both time and the mark is proposed. The test statistics are constructed based on the weighted cumulative mark-specific martingale residuals. The critical values of the proposed test statistics are approximated using the Gaussian multiplier method. The performance of the proposed tests is examined extensively in simulations for a variety of the models under the null hypothesis and under different types of alternative models. An analysis of the 'Step' HIV vaccine efficacy trial using the proposed method is presented. The analysis suggests that the HIV vaccine candidate may increase susceptibility to HIV acquisition.
In particular, the meaning of 'missing at random' is often misunderstood, leading researchers faced with missing data problems away from multiple imputation, a method with considerable advantages. The purpose of this article is to clarify how 'missing at random' differs from 'missing completely at random' via an imagined dialogue between a clinical researcher and statistician. abstract_id: PUBMED:18572728 A survey of the incidence of missing anterior teeth: potential value in bite mark analysis. Bite mark analysis involves comparison of individual dental characteristics between a dentition and the bite injury. A bite mark injury may result from sexual assault, physical assault, or defensive injuries, and as such can be used to link a suspect to a victim or vice versa. Missing teeth are one of the characteristics that could implicate or exclude a suspected biter. However, frequency data for use by forensic odontologists can only be collated from epidemiological studies. Therefore, an audit was undertaken of missing anterior teeth in adult patients (n = 1010) attending for treatment, gathering data that could be more relevant to odontology. One in five of the sample presented with missing teeth that were either replaced with a denture (11%), not replaced (6%) or missing with the gap closed (2%). Answer: The provided abstracts discuss various aspects of "missing marks" in different contexts, ranging from statistical modeling in vaccine efficacy trials to forensic dentistry. In vaccine efficacy trials, "marks" refer to specific genetic characteristics or biomarkers that are relevant to the study outcomes. For example, in HIV vaccine trials, the genetic distance of the exposing HIV to the vaccine strain is a critical "mark" that is often missing due to rapid post-acquisition viral evolution (PUBMED:23519918, PUBMED:26511033). Similarly, in dengue vaccine trials, the "mark" could be the amino acid sequence distance of the dengue virus to the vaccine strain (PUBMED:33191955). These missing marks pose significant challenges for statistical analysis and interpretation of vaccine efficacy. Researchers have developed various methods to handle missing marks, such as inverse probability weighting (IPW), augmentation of IPW with auxiliary information, and hybrid approaches combining augmented IPW with nearest neighborhood hot deck multiple imputation (PUBMED:23519918, PUBMED:26511033, PUBMED:33191955). These methods aim to provide consistent and efficient estimations despite the missing data. In forensic dentistry, "marks" refer to bite marks, which can be used to identify individuals based on unique dental characteristics (PUBMED:25737891). However, the identification process can be complex and subjective, and the presence of missing teeth can further complicate the analysis (PUBMED:18572728). The abstracts also discuss the importance of data management, including cleaning, exploration, and visualization of data, to handle issues like missing values, outliers, and batch effects (PUBMED:26004801). They highlight the use of graphical presentations and advanced techniques to explore missing data patterns (PUBMED:26807411). In summary, the abstracts suggest that while significant progress has been made in addressing the issue of missing marks in various fields, challenges remain. Whether in the context of vaccine efficacy trials or forensic analysis, the problem of missing data requires careful consideration and the application of specialized methods to ensure accurate and reliable results.
Thus, in a metaphorical sense, the question "Are We Missing the Mark?" could refer to whether current methodologies and practices adequately address the issue of missing data to achieve the desired outcomes in research and forensic investigations.
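Several of the cited approaches rest on inverse probability weighting (IPW) of complete cases. The Python sketch below illustrates only that core reweighting idea on a toy mean estimate, not the full mark-specific proportional hazards estimators of PUBMED:23519918 or PUBMED:26511033; the data, observation probabilities, and variable names are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy setup: a binary auxiliary covariate drives both the level of the mark
# and the probability that the mark is observed (missing at random given x).
x = rng.binomial(1, 0.5, size=n)
mark = 0.25 + 0.10 * x + rng.normal(0.0, 0.05, size=n)  # E[mark] = 0.30
p_obs = np.where(x == 1, 0.9, 0.5)                      # P(mark observed | x)
observed = rng.random(n) < p_obs

# The complete-case mean over-represents x = 1 and is biased upward; weighting
# each complete case by 1 / P(observed | x) removes that bias.
w = 1.0 / p_obs[observed]
print(f"complete-case mean: {mark[observed].mean():.3f}")                  # ~0.314
print(f"IPW mean:           {np.average(mark[observed], weights=w):.3f}")  # ~0.300
```

The augmented (doubly robust) estimators mentioned above add a second term built from a working model for the mark given auxiliary data, so they remain consistent if either the missingness model or the mark model is correctly specified.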
Instruction: Does drug treatment of patients with acute bronchitis reduce additional care seeking? Abstracts: abstract_id: PUBMED:11115198 Does drug treatment of patients with acute bronchitis reduce additional care seeking? Evidence from the Practice Partner Research Network. Background: Considerable discussion has focused on treatment methods for patients with acute bronchitis. Objective: To examine whether antibiotic or bronchodilator treatment is associated with differences in follow-up visit rates for patients with acute bronchitis. Methods: A retrospective medical chart review was conducted for patients with a new episode of acute bronchitis over a 3-year period in the Practice Partner Research Network (29,248 episodes in 24,753 patients). Primary outcomes of interest were another visit in the next 14 days (early follow-up) or 15 to 28 days after initial treatment (late follow-up). Results: Antibiotics were used more commonly in younger patients (<18 years), whereas older patients (>65 years) were more likely to receive no treatment. Younger patients treated with antibiotics were less likely to return for an early follow-up visit, but no differences were seen in adults and older patients. Late follow-up rates were not affected by the initial treatment strategy. When patients did return for a follow-up visit, no new medication was prescribed to most (66% of younger patients and 78% of older adults). However, compared with patients who did not receive an antibiotic at their first visit, patients initially treated with an antibiotic were about 50% more likely to receive a new antibiotic at their second visit. Conclusions: Initial prescribing of an antibiotic reduces early follow-up for acute bronchitis in younger patients but seems to have no effect in adults. However, reductions in future follow-up visits might be outweighed by increases in antibiotic consumption because patients who return for a follow-up visit seem to receive additional antibiotic prescriptions. Arch Fam Med. 2000;9:997-1001 abstract_id: PUBMED:17116614 Health care utilization of home care patients at an academic medical center in Taiwan. Background: Previous surveys of home care patients in Taiwan have primarily concentrated on patients' status and needs. The aim of this study was to review the actual health care utilization of home care patients during the course of 1 year. Methods: Home care patients at an academic medical center in Taiwan were selected and their insurance claims data at this hospital in 2001 were analyzed. Analyses included the patients' patterns and diagnoses of visits and admissions, and their drug utilization. For diagnoses made at outpatient departments, the grouping system from the National Hospital Ambulatory Medical Care Survey in the United States was used. The Anatomical Therapeutic Chemical Classification system was applied to drug grouping. Results: The home care agency of the hospital cared for 165 patients (66 women, 99 men) in 2001. In total, these 165 patients received 1,358 home visits, 2,751 outpatient visits, and 108 inpatient admissions. While the most frequent diagnoses for all visits were cerebrovascular disease, hypertension, diabetes mellitus, chronic and unspecified bronchitis, psychoses, and other disorders of the central nervous system, the most frequent diagnoses at discharge from the hospital were urinary tract infection and pneumonia. In all visits, 12,282 items of drugs were prescribed in 2,337 prescriptions.
On average, each prescription contained 5.3 ± 2.8 items of drugs. The most frequently prescribed drugs were antacids, expectorants, laxatives, selective calcium channel blockers, and antithrombotic agents. Conclusion: The home care agency of the hospital should pay more attention to provision of comprehensive care and review of drug prescribing. abstract_id: PUBMED:12549944 Pharmacy-based intervention to reduce antibiotic use for acute bronchitis. Background: Intervention programs can reduce inappropriate antibiotic use for the treatment of acute bronchitis in a closed health maintenance organization model. Objective: To evaluate the impact of a pharmacy-based intervention program intended to reduce antibiotic use in the treatment of acute bronchitis in a community-based physician group model. Subjects: Adult and pediatric patients with an office or urgent care visit for acute bronchitis during the baseline and study periods were included in the study. The clinicians were primary care physicians, nurse practitioners, and physician assistants in a suburban community-based physician group setting. Methods: All patients treated for acute bronchitis from January 1 through June 30, 1998, were evaluated for initial receipt of antibiotics and use of clinic resources (office visits, additional antibiotics). From September through December of 1998, physicians were provided literature from the Centers for Disease Control and Prevention (CDC), cough and cold package inserts, and newsletters intended to educate the providers regarding the inappropriateness of antibiotics in the treatment of acute bronchitis. Patient-directed literature from the CDC was placed in the examination rooms and clinic waiting areas beginning September 1998. From January 1 through June 30, 1999, all patients treated for acute bronchitis were assessed for receipt of antibiotics and use of clinic resources. A separate geographic clinic site served as a control during both study periods. Results: During 1998, 888 of 1840 patients (48.3%) received antibiotics for treatment of acute bronchitis; this total decreased to 924 of 2392 (38.6%; p ≤ 0.001) in 1999, a reduction of 20%. The rate of antibiotic prescribing in control patients was unchanged during the concomitant time periods (142/446, 31.8% vs. 102/321, 31.8%). The rate of subsequent physician visits was similar (8% vs. 9%) between patients receiving antibiotics and those who did not. However, significantly more patients initially receiving antibiotics required a subsequent antibiotic prescription (45/1812, 2.5% vs. 24/2420, 1.0%; p ≤ 0.001). Conclusions: A pharmacy-based intervention program reduces the incidence of inappropriate antibiotic use in the treatment of acute bronchitis. Reduced antibiotic prescribing does not increase consumption of healthcare resources; patients who receive antibiotics for acute bronchitis are more likely to subsequently require additional antibiotic prescriptions. While a significant decrease in antibiotic use was realized, other interventions are required to further reduce the prevalence of antibiotic use in acute bronchitis. abstract_id: PUBMED:20601446 U.S. Food and Drug Administration approval: ofatumumab for the treatment of patients with chronic lymphocytic leukemia refractory to fludarabine and alemtuzumab. Purpose: To describe the data and analyses that led to the U.S.
Food and Drug Administration (FDA) approval of ofatumumab (Arzerra, GlaxoSmithKline) for the treatment of patients with chronic lymphocytic leukemia (CLL) refractory to fludarabine and alemtuzumab. Experimental Design: The FDA reviewed the results of a planned interim analysis of a single-arm trial, enrolling 154 patients with CLL refractory to fludarabine, and a supportive dose-finding, activity-estimating trial in 33 patients with CLL. Patients in the primary efficacy study received ofatumumab weekly for eight doses, then every 4 weeks for an additional four doses; patients in the supportive trial received four weekly doses. In the primary efficacy study, endpoints were objective response rate and response duration. Results: For regulatory purposes, the primary efficacy population consisted of 59 patients with CLL refractory to fludarabine and alemtuzumab. In this subgroup, the investigator-determined objective response rate was 42% [99% confidence interval (CI), 26-60], with a median duration of response of 6.5 months (95% CI, 5.8-8.3); all were partial responses. The most common adverse reactions in the primary efficacy study were neutropenia, pneumonia, pyrexia, cough, diarrhea, anemia, fatigue, dyspnea, rash, nausea, bronchitis, and upper respiratory tract infections. Infusion reactions occurred in 44% of patients with the first infusion (300 mg) and 29% with the second infusion (2,000 mg). The most common serious adverse reactions were infections, neutropenia, and pyrexia. Conclusions: On October 26, 2009, the FDA granted accelerated approval to ofatumumab for the treatment of patients with CLL refractory to fludarabine and alemtuzumab, on the basis of demonstration of durable tumor shrinkage. abstract_id: PUBMED:28158685 Low Efficacy of Antibiotics Against Staphylococcus aureus Airway Colonization in Ventilated Patients. Background: Airway colonization by Staphylococcus aureus predisposes to the development of ventilator-associated tracheobronchitis (VAT) and ventilator-associated pneumonia (VAP). Despite extensive antibiotic treatment of intensive care unit patients, limited data are available on the efficacy of antibiotics on bacterial airway colonization and/or prevention of infections. Therefore, microbiologic responses to antibiotic treatment were evaluated in ventilated patients. Methods: Results of semiquantitative analyses of S. aureus burden in serial endotracheal-aspirate (ETA) samples and VAT/VAP diagnosis were correlated to antibiotic treatment. Minimum inhibitory concentrations of relevant antibiotics using serially collected isolates were evaluated. Results: Forty-eight mechanically ventilated patients who were S. aureus positive by ETA samples and treated with relevant antibiotics for at least 2 consecutive days were included in the study. Vancomycin failed to reduce methicillin-resistant S. aureus (MRSA) or methicillin-susceptible S. aureus (MSSA) burden in the airways. Oxacillin was ineffective for MSSA colonization in approximately 30% of the patients, and responders were typically coadministered additional antibiotics. Despite antibiotic exposure, 15 of the 39 patients (approximately 38%) colonized only by S. aureus and treated with appropriate antibiotic for at least 2 days still progressed to VAP. Importantly, no change in antibiotic susceptibility of S. aureus isolates was observed during treatment. Staphylococcus aureus colonization levels inversely correlated with the presence of normal respiratory flora.
Conclusions: Antibiotic treatment is ineffective in reducing S. aureus colonization in the lower airways and preventing VAT or VAP. Staphylococcus aureus is in competition for colonization with the normal respiratory flora. To improve patient outcomes, alternatives to antibiotics are urgently needed. abstract_id: PUBMED:21867823 Aerosolized antibiotics in the intensive care unit. This review summarizes recent clinical data examining the use of aerosolized antimicrobial therapy for the treatment of respiratory tract infections in mechanically ventilated patients in the intensive care unit. Aerosolized antibiotics provide high concentrations of drug in the lung without the systemic toxicity associated with intravenous antibiotics. First introduced in the 1960s as a treatment of tracheobronchitis and bronchopneumonia caused by Pseudomonas aeruginosa, now, more than 40 years later, there is a resurgence of interest in using this mode of delivery as a primary therapy for ventilator-associated tracheobronchitis and an adjunctive therapy for ventilator-associated pneumonia. abstract_id: PUBMED:38456194 Diagnosis and Treatment of Pneumonia in Urgent Care Clinics: Opportunities for Improving Care. Background: Community-acquired pneumonia is a well-studied condition; yet, in the urgent care setting, patient characteristics and adherence to guideline-recommended care are poorly described. Within Intermountain Health, a nonprofit integrated US health care system based in Utah, more patients present to urgent care clinics (UCCs) than emergency departments (EDs) for pneumonia care. Methods: We performed a retrospective cohort study from 1 January 2019 through 31 December 2020 in 28 UCCs within Utah. We extracted electronic health record data for patients aged ≥12 years with ICD-10 pneumonia diagnoses entered by the bedside clinician, excluding patients with preceding pneumonia within 30 days or missing vital signs. We compared UCC patients with radiographic pneumonia (n = 4689), without radiographic pneumonia (n = 1053), without chest imaging (n = 1472), and matched controls with acute cough/bronchitis (n = 15,972). Additional outcomes were 30-day mortality and the proportion of patients with ED visits or hospital admission within 7 days after the index encounter. Results: UCC patients diagnosed with pneumonia and possible/likely radiographic pneumonia by radiologist report had a mean age of 40 years and 52% were female. Almost all patients with pneumonia (93%) were treated with antibiotics, including those without radiographic confirmation. Hospital admissions and ED visits within 7 days were more common in patients with radiographic pneumonia vs patients with "unlikely" radiographs (6% vs 2% and 10% vs 6%, respectively). Observed 30-day all-cause mortality was low (0.26%). Patients diagnosed without chest imaging presented similarly to matched patients with cough/acute bronchitis. Most patients admitted to the hospital the same day after the UCC visit (84%) had an interim ED encounter. Pneumonia severity scores (pneumonia severity index, electronic CURB-65, and shock index) overestimated patient need for hospitalization. Conclusions: Most UCC patients with pneumonia were successfully treated as outpatients. Opportunities to improve care include clinical decision support for diagnosing pneumonia with radiographic confirmation and development of pneumonia severity scores tailored to the UCC.
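As an illustrative aside on the severity scores named in the preceding abstract: CURB-65 is a simple point-count rule. The sketch below implements the commonly published criteria, not the study's "electronic CURB-65" variant, and the worked example values are hypothetical:

```python
def curb65(confusion: bool, urea_mmol_l: float, resp_rate: int,
           sbp: int, dbp: int, age: int) -> int:
    """CURB-65 as commonly published: one point each for Confusion,
    Urea > 7 mmol/L, Respiratory rate >= 30/min, low Blood pressure
    (systolic < 90 or diastolic <= 60 mmHg), and age >= 65."""
    return (int(confusion)
            + int(urea_mmol_l > 7.0)
            + int(resp_rate >= 30)
            + int(sbp < 90 or dbp <= 60)
            + int(age >= 65))

# A 40-year-old with normal vitals (typical of the UCC cohort above) scores 0,
# consistent with outpatient care; scores of 2 or more usually prompt
# consideration of admission.
print(curb65(confusion=False, urea_mmol_l=5.0, resp_rate=18,
             sbp=120, dbp=80, age=40))  # 0
```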
abstract_id: PUBMED:37167145 Evaluation of the efficacy and safety of a combination drug containing ambroxol, guaifenesin, and levosalbutamol versus a fixed-dose combination of bromhexine/guaifenesin/salbutamol in the treatment of productive cough in adult patients with acute bronchitis Aim: To evaluate the efficacy and safety of a combination drug containing ambroxol, guaifenesin, and levosalbutamol, oral solution, versus Ascoril Expectorant, syrup (combination of bromhexine, guaifenesin, and salbutamol) in the treatment of productive cough in adult patients with acute bronchitis. Materials And Methods: This open-label, randomized, phase III study included patients with acute bronchitis who had a productive cough with difficulty in sputum expectoration. 244 patients were randomized in a 1:1 ratio and received 10 mL of the study drug or reference drug 3 times daily for 2 weeks. After 7 and 14 days of treatment, the physician evaluated patients' subjective complaints and the efficacy of therapy. The primary endpoint was the proportion of patients with high and very high efficacy. Results: The primary endpoint was reached by 70 (0.5738) patients in the study drug group and 54 (0.4426) in the reference drug group (p=0.04). The intergroup difference was 0.1311 [95% confidence interval: 0.0057; 0.2566]. The lower limit of the 95% confidence interval was above zero, which confirms the superiority of therapy with the study drug over therapy with Ascoril Expectorant. The proportion of patients with a 1-point total score reduction and with complete resolution of all symptoms according to the Modified Cough Relief and Sputum Expectoration Questionnaire after 7 and 14 days was numerically higher in the study drug group versus the reference drug group. There were no statistically significant differences between the groups in the incidence of adverse events. Conclusion: The efficacy of a new combination drug containing ambroxol, guaifenesin, and levosalbutamol in the treatment of productive cough in adult patients with acute bronchitis is superior to the efficacy of Ascoril Expectorant. The safety profiles of the study drug and the reference drug were comparable. abstract_id: PUBMED:35152033 COVID-related hospitalization, intensive care treatment, and all-cause mortality in patients with psychosis and treated with clozapine. Clozapine, an antipsychotic, is associated with increased susceptibility to infection with COVID-19, compared to other antipsychotics. Here, we investigate associations between clozapine treatment and increased risk of adverse outcomes of COVID-19, namely COVID-related hospitalisation, intensive care treatment, and death, amongst patients taking antipsychotics with schizophrenia-spectrum disorders. Using the clinical records of South London and Maudsley NHS Foundation Trust, we identified 157 individuals who had an ICD-10 diagnosis of schizophrenia-spectrum disorders, were taking antipsychotics (clozapine or other antipsychotics) at the time of the COVID-19 pandemic in the UK and had a laboratory-confirmed COVID-19 infection. The following health outcomes were measured: COVID-related hospitalisation, COVID-related intensive care treatment and death. We tested associations between clozapine treatment and each outcome using logistic regression models, adjusting for gender, age, ethnicity, neighbourhood deprivation, obesity, smoking status, diabetes, asthma, bronchitis and hypertension using propensity scores.
Of the 157 individuals who developed COVID-19 while on antipsychotics (clozapine or other antipsychotics), there were 28% COVID-related hospitalisations, 8% COVID-related intensive care treatments and 8% deaths of any cause during the 28-day follow-up period. Amongst those taking clozapine, there were 25% COVID-related hospitalisations, 7% COVID-related intensive care treatments and 7% deaths. In both unadjusted and adjusted analyses, we found no significant association between clozapine and any of the outcomes. Thus, we found no evidence that patients with clozapine treatment at the time of COVID-19 infection had increased risk of hospitalisation, intensive care treatment or death, compared to non-clozapine antipsychotic-treated patients. However, further research should be considered in larger samples to confirm this. abstract_id: PUBMED:30372367 Safety of ambroxol in the treatment of airway diseases in adult patients. Introduction: Ambroxol is a widely used secretolytic and mucoactive over-the-counter agent primarily used to treat respiratory diseases associated with viscid mucus. Following post-marketing reports of hypersensitivity reactions and severe cutaneous adverse reactions (SCARs) possibly linked to ambroxol, the European Union's Pharmacovigilance Risk Assessment Committee (PRAC) initiated in April 2014 a review of the safety of ambroxol in all its registered indications, which was finalized in 2016. Areas covered: Here, we evaluate the clinical safety of ambroxol and provide an expert opinion on the benefit-risk balance of ambroxol in the treatment of adult patients with bronchopulmonary diseases. The evidence for this review is derived from clinical trials of ambroxol that were provided to the PRAC by the marketing authorization holders of ambroxol-containing medicines. Expert opinion: Clinical experience accumulated from randomized clinical trials and observational studies suggests that ambroxol is a safe and well-tolerated treatment of bronchopulmonary diseases, with a well-balanced and favorable benefit-risk profile. All reported adverse events were mild and self-limiting, and the risk of SCARs with ambroxol is low. Further investigations could address the safety and efficacy of ambroxol in pediatric lung diseases and in additional therapeutic indications, such as biofilm-dependent airway disease and lysosomal storage disorders. Answer: The evidence from the provided abstracts suggests that drug treatment of patients with acute bronchitis may have varying effects on additional care seeking. A retrospective medical chart review conducted over a 3-year period in the Practice Partner Research Network found that younger patients treated with antibiotics for acute bronchitis were less likely to return for an early follow-up visit, but no differences were seen in adults and older patients. Late follow-up rates were not affected by the initial treatment strategy. However, patients initially treated with an antibiotic were about 50% more likely to receive a new antibiotic at their second visit if they did return for a follow-up visit (PUBMED:11115198). In contrast, a pharmacy-based intervention program aimed at reducing antibiotic use for the treatment of acute bronchitis in a community-based physician group model showed that reduced antibiotic prescribing did not increase consumption of healthcare resources. The study found that patients who received antibiotics for acute bronchitis were more likely to subsequently require additional antibiotic prescriptions (PUBMED:12549944).
These findings suggest that while antibiotic treatment may reduce early follow-up visits for younger patients, it does not necessarily reduce overall healthcare resource consumption, as patients may still require additional antibiotic prescriptions upon return. Additionally, the impact of drug treatment on care seeking may differ based on patient age and the specific treatment strategy employed.
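The roughly 2.5-fold difference behind the answer's final point can be recovered directly from the counts reported in PUBMED:12549944 (45/1812 repeat prescriptions among patients initially given antibiotics vs. 24/2420 among those who were not). A minimal sketch of the arithmetic:

```python
def risk_ratio(events_a: int, n_a: int, events_b: int, n_b: int) -> float:
    """Risk ratio: incidence in group A divided by incidence in group B."""
    return (events_a / n_a) / (events_b / n_b)

# Subsequent antibiotic prescriptions, initial-antibiotic vs. no-antibiotic
# groups, as reported in PUBMED:12549944.
rr = risk_ratio(45, 1812, 24, 2420)
print(f"risk of a repeat antibiotic prescription: RR = {rr:.2f}")  # ~2.50
```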
Instruction: Education of patients after whiplash injury: is oral advice any better than a pamphlet? Abstracts: abstract_id: PUBMED:18824949 Education of patients after whiplash injury: is oral advice any better than a pamphlet? Study Design: Randomized parallel-group trial with 1-year follow-up. Objective: To evaluate whether education of patients communicated orally by a specially trained nurse is superior to giving patients a pamphlet after a whiplash injury. Summary Of Background Data: Long-lasting pain and physical disability after whiplash injuries are related to both serious personal suffering and huge socio-economic costs. Pure educational interventions after such injuries seem generally as effective as more costly interventions, but it is unknown if the way advice is communicated is of any importance. Methods: Participants with relatively mild complaints after car collisions were recruited from emergency departments and GPs. A total of 182 participants were randomized to either: (1) a 1-hour educational session with a specially trained nurse, or (2) an educational pamphlet. Outcome parameters were neck pain, headache, disability, and return to work. Recovery was defined as scoring pain 0 or 1 (0-10 point scale) and not being off sick at the time of the follow-ups. Results: After 3, 6, and 12 months, 60%, 58%, and 66% of the participants, respectively, had recovered. Group differences were nonsignificant on all outcome parameters, even though the outcome tended to be better for the group receiving personal advice. Conclusion: Prognosis did not differ between patients who received personal education and those who got a pamphlet. However, a systematic tendency toward better outcome with personally communicated information was observed, and the question of how patients should be educated to reduce the risk of chronicity after whiplash is worth further investigation, since no treatment has been proven to prevent long-lasting symptoms, and all forms of advice or educational therapy are so cheap that even a modest effect justifies its use. abstract_id: PUBMED:22419306 Patient education for neck pain. Background: Neck disorders are common, disabling, and costly. The effectiveness of patient education strategies is unclear. Objectives: To assess the short- to long-term effects of therapeutic patient education (TPE) strategies on pain, function, disability, quality of life, global perceived effect, patient satisfaction, knowledge transfer, or behaviour change in adults with neck pain associated with whiplash or non-specific and specific mechanical neck pain with or without radiculopathy or cervicogenic headache. Search Methods: We searched computerised bibliographic databases (inception to 11 July 2010). Selection Criteria: Eligible studies were randomised controlled trials (RCT) investigating the effectiveness of TPE for acute to chronic neck pain. Data Collection And Analysis: Paired independent review authors conducted selection, data abstraction, and 'Risk of bias' assessment. We calculated risk ratio (RR) and standardised mean differences (SMD). Heterogeneity was assessed; no studies were pooled. Main Results: Of the 15 selected trials, three were rated low risk of bias.
Three TPE themes emerged. Advice focusing on activation: There is moderate quality evidence (one trial, 348 participants) that an educational video of advice focusing on activation was more beneficial for acute whiplash-related pain when compared with no treatment at intermediate-term [RR 0.79 (95% confidence interval (CI) 0.59 to 1.06)] but not long-term follow-up [0.89 (95% CI, 0.65 to 1.21)]. There is low quality evidence (one trial, 102 participants) that a whiplash pamphlet on advice focusing on activation is less beneficial for pain reduction, or no different in improving function and global perceived improvement from generic information given out in emergency care (control) for acute whiplash at short- or intermediate-term follow-up. Low to very low quality evidence (nine trials using diverse educational approaches) showed either no evidence of benefit or difference for varied outcomes. Advice focusing on pain & stress coping skills and workplace ergonomics: Very low quality evidence (three trials, 243 participants) favoured other treatment or showed no difference spanning numerous follow-up periods and disorder subtypes. Low quality evidence (one trial, 192 participants) favoured specific exercise training for chronic neck pain at short-term follow-up. Self-care strategies: Very low quality evidence (one trial, 58 participants) indicated that self-care strategies did not relieve pain for acute to chronic neck pain at short-term follow-up. Authors' Conclusions: With the exception of one trial, this review has not shown effectiveness for educational interventions, including advice to activate, advice on stress-coping skills, workplace ergonomics and self-care strategies. Future research should be founded on sound adult learning theory and learning skill acquisition. abstract_id: PUBMED:19160247 Patient education for neck pain with or without radiculopathy. Background: Neck disorders are common, disabling, and costly. The effectiveness of patient education strategies is unclear. Objectives: To assess whether patient education strategies, either alone or in combination with other treatments, are of benefit for pain, function, global perceived effect, quality of life, or patient satisfaction, in adults with neck pain with and without radiculopathy. Search Strategy: Computerized bibliographic databases were searched from their start up to May 31, 2008. Selection Criteria: Eligible studies were quasi or randomized trials (RCT) investigating the effectiveness of patient education strategies for neck disorder. Data Collection And Analysis: Paired independent review authors carried out study selection, data abstraction, and methodological quality assessment. Relative risk and standardized mean differences (SMD) were calculated. The appropriateness of combining studies was assessed on clinical and statistical grounds. Because of differences in intervention type or disorder, no studies were considered appropriate to pool. Main Results: Of the 10 selected trials, two (20%) were rated high quality. Advice was assessed as follows: Eight trials of advice focusing on activation compared to no treatment or to various active treatments, including therapeutic exercise, manual therapy and cognitive behavioural therapy, showed either inferiority or no difference for pain, spanning a full range of follow-up periods and disorder types.
When compared to rest, two trials that assessed acute whiplash-associated disorders (WAD) showed moderate evidence of no difference for various forms of advice focusing on activation. Two trials studying advice focusing on pain & stress coping skills found moderate evidence of no benefit for pain in chronic mechanical neck disorder (MND) at intermediate/long-term follow-up. One trial compared the effects of 'traditional neck school' to no treatment, yielding limited evidence of no benefit for pain at intermediate-term follow-up in mixed acute/subacute/chronic neck pain. Authors' Conclusions: This review has not shown effectiveness for educational interventions in various disorder types and follow-up periods, including advice to activate, advice on stress coping skills, and 'neck school'. In future research, further attention to methodological quality is necessary. Studies of multimodal interventions should consider study designs, such as factorial designs, that permit discrimination of the specific educational components. abstract_id: PUBMED:18843681 Patient education for neck pain with or without radiculopathy. Background: Neck disorders are common, disabling, and costly. The effectiveness of patient education strategies is unclear. Objectives: To assess whether patient education strategies, either alone or in combination with other treatments, are of benefit for pain, function, global perceived effect, quality of life, or patient satisfaction, in adults with neck pain with and without radiculopathy. Search Strategy: Computerized bibliographic databases were searched from their start up to May 31, 2008. Selection Criteria: Eligible studies were quasi or randomized trials (RCT) investigating the effectiveness of patient education strategies for neck disorder. Data Collection And Analysis: Paired independent review authors carried out study selection, data abstraction, and methodological quality assessment. Relative risk and standardized mean differences (SMD) were calculated. The appropriateness of combining studies was assessed on clinical and statistical grounds. Because of differences in intervention type or disorder, no studies were considered appropriate to pool. Main Results: Of the 10 selected trials, two (20%) were rated high quality. Advice was assessed as follows: Eight trials of advice focusing on activation compared to no treatment or to various active treatments, including therapeutic exercise, manual therapy and cognitive behavioural therapy, showed either inferiority or no difference for pain, spanning a full range of follow-up periods and disorder types. When compared to rest, two trials that assessed acute whiplash-associated disorders (WAD) showed moderate evidence of no difference for various forms of advice focusing on activation. Two trials studying advice focusing on pain & stress coping skills found moderate evidence of no benefit for pain in chronic mechanical neck disorder (MND) at intermediate/long-term follow-up. One trial compared the effects of 'traditional neck school' to no treatment, yielding limited evidence of no benefit for pain at intermediate-term follow-up in mixed acute/subacute/chronic neck pain. Authors' Conclusions: This review has not shown effectiveness for educational interventions in various disorder types and follow-up periods, including advice to activate, advice on stress coping skills, and 'neck school'. In future research, further attention to methodological quality is necessary.
Studies of multimodal interventions should consider study designs, such as factorial designs, that permit discrimination of the specific educational components. abstract_id: PUBMED:24703832 Comprehensive physiotherapy exercise programme or advice for chronic whiplash (PROMISE): a pragmatic randomised controlled trial. Background: Evidence suggests that brief physiotherapy programmes are as effective for acute whiplash-associated disorders as more comprehensive programmes; however, whether this also holds true for chronic whiplash-associated disorders is unknown. We aimed to estimate the effectiveness of a comprehensive exercise programme delivered by physiotherapists compared with advice in people with a chronic whiplash-associated disorder. Methods: PROMISE is a two-group, pragmatic randomised controlled trial in patients with chronic (>3 months and <5 years) grade 1 or 2 whiplash-associated disorder. Participants were randomly assigned by a computer-generated randomisation schedule to receive either the comprehensive exercise programme (20 sessions) or advice (one session and telephone support). Sealed opaque envelopes were used to conceal allocation. The primary outcome was pain intensity measured on a 0-10 scale. Outcomes were measured at baseline, 14 weeks, 6 months, and 12 months by a masked assessor. Analysis was by intention to treat, and treatment effects were calculated with linear mixed models. The trial is registered with the Australian New Zealand Clinical Trials Registry, number ACTRN12609000825257. Findings: 172 participants were allocated to either the comprehensive exercise programme (n=86) or advice group (n=86); 157 (91%) were followed up at 14 weeks, 145 (84%) at 6 months, and 150 (87%) at 12 months. A comprehensive exercise programme was not more effective than advice alone for pain reduction in the participants. At 14 weeks the treatment effect on a 0-10 pain scale was 0.0 (95% CI -0.7 to 0.7), at 6 months 0.2 (-0.5 to 1.0), and at 12 months -0.1 (-0.8 to 0.6). CNS hyperexcitability and symptoms of post-traumatic stress did not modify the effect of treatment. We recorded no serious adverse events. Interpretation: We have shown that simple advice is equally as effective as a more intense and comprehensive physiotherapy exercise programme. The need to identify effective and affordable strategies to prevent and treat acute through to chronic whiplash associated disorders is an important health priority. Future avenues of research might include improving understanding of the mechanisms responsible for persistent pain and disability, investigating the effectiveness and timing of drugs, and study of content and delivery of education and advice. Funding: The National Health and Medical Research Council of Australia, Motor Accidents Authority of New South Wales, and Motor Accident Insurance Commission of Queensland. abstract_id: PUBMED:22996847 The efficacy of patient education in whiplash associated disorders: a systematic review. Background: Until now, there is no firm evidence for conservative therapy in patients with chronic Whiplash Associated Disorders (WAD). While chronic WAD is a biopsychosocial problem, education may be an essential part of the treatment and the prevention of chronic WAD. However, it is still unclear which type of educative intervention has already been used in WAD patients and how effective such interventions are.
Objective: This systematic literature study aimed at providing an overview of the literature regarding the currently existing educative treatments for patients with whiplash or WAD and their evidence. Study Design: Systematic review of the literature. Methods: A systematic literature search was conducted in the following databases: PubMed, SpringerLink, and Web of Science using different keyword combinations. We included randomized controlled clinical trials (RCT) that examined the effectiveness of education for patients with WAD. The included articles were evaluated on their methodological quality. Results: Ten RCTs of moderate to good quality remained after screening. Both oral and written advice, education integrated in exercise programs and behavioral programs appear to be effective interventions for reducing pain and disability and enhancing recovery and mobility in patients with WAD. In acute WAD, a simple oral education session will suffice. In subacute or chronic patients, broader (multidisciplinary) programs including education, which tend to modulate pain behavior and activate patients, seem necessary. Limitations: Because of limited studies and the broad range of different formats and contents of education and different outcome measures, further research is needed before solid conclusions can be drawn regarding the use and the modalities of these educational interventions in clinical practice. Conclusion: Based on this systematic literature study, it seems appropriate for the pain physician to provide education as part of a biopsychosocial approach to patients with whiplash. Such education should target removing therapy barriers, enhancing therapy compliance and preventing and treating chronicity. Still, more studies are required to provide firm evidence for the type, duration, format, and efficacy of education in the different types of whiplash patients. abstract_id: PUBMED:16582844 Education by general practitioners or education and exercises by physiotherapists for patients with whiplash-associated disorders? A randomized clinical trial. Study Design: Randomized clinical trial. Objective: To compare the effectiveness of education and advice given by general practitioners (GPs) with education, advice, and active exercise therapy given by physiotherapists (PTs) for patients with whiplash-associated disorders. Summary Of Background Data: Available evidence from systematic reviews has indicated beneficial effects for active interventions in patients with whiplash-associated disorders. However, it remained unclear which kind of active treatment was most effective. Methods: Whiplash patients with symptoms or disabilities at 2 weeks after accident were recruited in primary care. Eligible patients still having symptoms or disabilities at 4 weeks were randomly allocated to GP care or physiotherapy. GPs and PTs treated patients according to a dynamic multimodal treatment protocol primarily aimed to increase activities and influence unfavorable psychosocial factors for recovery. We trained all health care providers about the characteristics of the whiplash problem, available evidence regarding prognosis and treatment, and protocol of the interventions. The content of the information provided to patients during treatment depended on the treatment goals set by the GPs or PTs. Also, the type of exercises chosen by the PTs depended on the treatment goals, and it was not explicitly necessary that exercise therapy was provided in all patients.
Primary outcome measures included neck pain intensity, headache intensity, and work activities. Furthermore, an independent blinded assessor measured functional recovery, cervical range of motion, disability, housekeeping and social activities, fear of movement, coping, and general health status. We assessed outcomes at 8, 12, 26, and 52 weeks after the accident. Results: A total of 80 patients were randomized to either GP care (n = 42) or physiotherapy (n = 38). At 12 and 52 weeks, no significant differences were found concerning the primary outcome measures. At 12 weeks, physiotherapy was significantly more effective than GP care for improving 1 of the measures of cervical range of motion (adjusted mean difference 12.3 degrees; 95% confidence interval [CI] 2.7-21.9). Long-term differences between the groups favored GP care but were statistically significant only for some secondary outcome measures, including functional recovery (adjusted relative risk 2.3; 95% CI 1.0-5.0), coping (adjusted mean difference 1.7 points; 95% CI 0.2-3.3), and physical functioning (adjusted mean difference 8.9 points; 95% CI 0.6-17.2). Conclusions: We found no significant differences for the primary outcome measures. Treatment by GPs and PTs was of similar effectiveness. The long-term effects of GP care seem to be better compared to physiotherapy for functional recovery, coping, and physical functioning. Physiotherapy seems to be more effective than GP care on cervical range of motion at short-term follow-up. abstract_id: PUBMED:12421771 Whiplash associated disorders: a review of the literature to guide patient information and advice. Objectives: To review the literature and provide an evidence based framework for patient centred information and advice on whiplash associated disorders. Methods: A systematic literature search was conducted, which included both clinical and non-clinical articles to encompass the wide range of patients' informational needs. From the studies and previous reviews retrieved, 163 were selected for detailed review. The review process considered the quantity, consistency, and relevance of all selected articles. These were categorised under a grading system to reflect the quality of the evidence, and then linked to derived evidence statements. Results: The main messages that emerged were: serious physical injury is rare; reassurance about good prognosis is important; over-medicalisation is detrimental; recovery is improved by early return to normal pre-accident activities, self exercise, and manual therapy; positive attitudes and beliefs are helpful in regaining activity levels; collars, rest, and negative attitudes and beliefs delay recovery and contribute to chronicity. These findings were synthesised into patient centred messages with the potential to reduce the risk of chronicity. Conclusions: The scientific evidence on whiplash associated disorders is of variable quality, but sufficiently robust and consistent for the purpose of guiding patient information and advice. While the delivery of appropriate messages can be both oral and written, consistency is imperative, so an innovative patient educational booklet, The Whiplash Book, has been developed and published. abstract_id: PUBMED:24704678 Does structured patient education improve the recovery and clinical outcomes of patients with neck pain? A systematic review from the Ontario Protocol for Traffic Injury Management (OPTIMa) Collaboration.
Background Context: In 2008, the Bone and Joint Decade 2000 to 2010 Task Force on Neck Pain and Its Associated Disorders recommended patient education for the management of neck pain. However, the effectiveness of education interventions has recently been challenged. Purpose: To update the findings of the Bone and Joint Decade 2000 to 2010 Task Force on Neck Pain and Its Associated Disorders and evaluate the effectiveness of structured patient education for the management of patients with whiplash-associated disorders (WAD) or neck pain and associated disorders (NAD). Study Design/setting: Systematic review of the literature and best-evidence synthesis. Patient Sample: Randomized controlled trials that compared structured patient education with other conservative interventions. Outcome Measures: Self-rated recovery, functional recovery (eg, disability, return to activities, work, or school), pain intensity, health-related quality of life, psychological outcomes such as depression or fear, or adverse effects. Methods: We systematically searched eight electronic databases (MEDLINE, EMBASE, CINAHL, PsycINFO, the Cochrane Central Register of Controlled Trials, DARE, PubMed, and ICL) from 2000 to 2012. Randomized controlled trials, cohort studies, and case-control studies meeting our selection criteria were eligible for critical appraisal. Random pairs of independent reviewers critically appraised eligible studies using the Scottish Intercollegiate Guidelines Network criteria. Scientifically admissible studies were summarized in evidence tables and synthesized following best-evidence synthesis principles. Results: We retrieved 4,477 articles. Of those, nine were eligible for critical appraisal and six were scientifically admissible. Four admissible articles investigated patients with WAD and two targeted patients with NAD. All structured patient education interventions included advice on activation or exercises delivered orally combined with written information or as written information alone. Overall, as a therapeutic intervention, structured patient education was equal to or less effective than other conservative treatments including massage, supervised exercise, and physiotherapy. However, structured patient education may provide small benefits when combined with physiotherapy. Either mode of delivery (ie, oral or written education) provides similar results in patients with recent WAD. Conclusions: This review adds to the Bone and Joint Decade 2000 to 2010 Task Force on Neck Pain and Its Associated Disorders by defining more specifically the role of structured patient education in the management of WAD and NAD. Results suggest that structured patient education alone cannot be expected to yield large benefits in clinical effectiveness compared with other conservative interventions for patients with WAD or NAD. Moreover, structured patient education may be of benefit during the recovery of patients with WAD when used as an adjunct therapy to physiotherapy or emergency room care. These benefits are small and short-lived. abstract_id: PUBMED:28622489 The Role of Exercise and Patient Education in the Noninvasive Management of Whiplash. Synopsis The majority of people with whiplash-associated disorder do not have neurological deficit or fracture and are therefore largely managed with nonsurgical interventions such as exercise, patient education, and behavioral-based interventions.
To date, clinical guidelines, systematic reviews, and the results of high-quality randomized controlled trials recommend exercise and patient education as the primary interventions for people in both acute and chronic stages after injury. However, the relatively weak evidence and small effect sizes in individual trials have led authors of some systematic reviews to reach equivocal recommendations for either exercise or patient education, and led policy makers and funders to question whether the more expensive intervention (exercise) should be funded at all. Physical therapists, one of the most commonly consulted professionals treating individuals with whiplash-associated disorder, need to look beyond the evidence for insights as to what role patient education and exercise should play in the future management of whiplash. This clinical commentary therefore will review the evidence for exercise, patient education, and behavioral-based interventions for whiplash and provide clinical insight as to the future role that exercise and patient education should play in the management of this complex condition. Possible subgroups of patients who may best respond to exercise will be explored using stratification based on impairments, treatment response, and risk/prognostic factors. J Orthop Sports Phys Ther 2017;47(7):481-491. Epub 16 Jun 2017. doi:10.2519/jospt.2017.7138. Answer: The question of whether oral advice is superior to a pamphlet for educating patients after a whiplash injury has been investigated, but the evidence does not conclusively favor one method over the other. A randomized parallel-group trial with a 1-year follow-up compared a 1-hour educational session with a specially trained nurse to an educational pamphlet for patients with mild complaints after car collisions. The study found no significant differences in prognosis between the two groups on various outcome parameters, including neck pain, headache, disability, and return to work. However, there was a systematic tendency toward better outcomes for the group receiving personal advice, suggesting that the method of communication might be worth further investigation, especially since educational therapy is relatively inexpensive and even a modest effect could justify its use (PUBMED:18824949). Other systematic reviews and randomized controlled trials have also explored the efficacy of patient education in whiplash-associated disorders (WAD). One systematic review found that, with the exception of one trial, educational interventions, including advice to activate, advice on stress-coping skills, workplace ergonomics, and self-care strategies, did not show effectiveness for varied outcomes (PUBMED:22419306). Another review concluded that educational interventions in various disorder types and follow-up periods, including advice to activate and 'neck school', have not shown effectiveness (PUBMED:19160247, PUBMED:18843681). A pragmatic randomized controlled trial comparing a comprehensive exercise program delivered by physiotherapists with advice found that simple advice was equally as effective as a more intense and comprehensive physiotherapy exercise program for pain reduction in participants with chronic whiplash-associated disorder (PUBMED:24703832). 
A systematic review aimed at providing an overview of the literature on existing educational interventions for patients with whiplash or WAD suggested that both oral and written advice, education integrated into exercise programs, and behavioral programs appear effective for reducing pain and disability and enhancing recovery and mobility in patients with WAD (PUBMED:22996847). In summary, the current evidence does not clearly indicate that oral advice is any better than a pamphlet for patient education after a whiplash injury. The effectiveness of educational interventions seems to be similar across different modes of delivery, and further research is needed to determine the most effective educational strategies for patients with whiplash injuries.
Instruction: Does Educator Training or Experience Affect the Quality of Multiple-Choice Questions? Abstracts: abstract_id: PUBMED:26277486 Does Educator Training or Experience Affect the Quality of Multiple-Choice Questions? Rationale And Objectives: Physicians receive little training on proper multiple-choice question (MCQ) writing methods. Well-constructed MCQs follow rules that ensure a question tests what it is intended to test; questions that break these rules are described as "flawed." We examined whether the prevalence of flawed questions differed significantly between those with or without prior training in question writing and between those with different levels of educator experience. Materials And Methods: We assessed 200 unedited MCQs from a question bank for our senior medical student radiology elective: an equal number of questions (50) were written by faculty with previous training in MCQ writing, other faculty, residents, and medical students. Questions were scored independently by two readers for the presence of 11 distinct flaws described in the literature. Results: Questions written by faculty with MCQ writing training had significantly fewer errors: mean 0.4 errors per question compared to a mean of 1.5-1.7 errors per question for the other groups (P < .001). There were no significant differences in the total number of errors between the untrained faculty, residents, and students (P values .35-.91). Among trained faculty, 17/50 questions (34%) were flawed, whereas other faculty wrote 38/50 (76%) flawed questions, residents 37/50 (74%), and students 44/50 (88%). Trained question writers' higher performance was mainly manifest in the reduced frequency of five specific errors. Conclusions: Faculty with training in effective MCQ writing made fewer errors in MCQ construction. Educator experience alone had no effect on the frequency of flaws; faculty without dedicated training, residents, and students performed similarly. abstract_id: PUBMED:33088746 Effect of Faculty Training on Quality of Multiple-Choice Questions. Background: The multiple-choice question (MCQ) is a frequently used assessment tool in medical education, both for certification and competitive examinations. Ill-constructed MCQs impact the utility of the assessment and thus the fate of the examinee. We conducted this study to ascertain whether a short training session for faculty on MCQ writing results in the desired improvement in their item-writing skills. Methods: A 1-day workshop on constructing high-quality MCQs, built around a 3-hour training session, was conducted for the faculty using a before-after design. Twenty-eight participants wrote preworkshop (n = 133) and postworkshop (n = 137) MCQs, which were analyzed and compared for 17 item-writing flaws. A mock test of 100 MCQs (selected by stratified random sampling from all the MCQs generated during the workshop) was administered to MBBS-passed students for item analysis. Results: Item-writing flaws were reduced following the training (15% vs. 27.7%, P < 0.05). Improvement mainly occurred in the quality of options; heterogeneity dropped from 27.1% prior to the workshop to 5.8% postworkshop. The proportion of MCQs failing the cover test remained similarly high (68.4% vs. 60.6%), and there was no improvement in the writing of the stem before and after the workshop. The item analysis did not reveal any significant improvement in facility value, discrimination index, or proportion of nonfunctioning distractors.
Conclusion: A single, short-duration faculty training session is not sufficient to correct flaws in MCQ writing. There is a need for focused faculty training in MCQ writing; courses of longer duration, supplemented by repeated or continuous faculty development programs, need to be explored. abstract_id: PUBMED:35288095 Great Question! The Art and Science of Crafting High-Quality Multiple-Choice Questions. Assessment of medical knowledge is essential to determine the progress of an adult learner. Well-crafted multiple-choice questions are one proven method of testing a learner's understanding of a specific topic. The authors provide readers with rules that must be followed to create high-quality multiple-choice questions. Common question-writing mistakes are also addressed to assist readers in improving their item-writing skills. abstract_id: PUBMED:23012132 Guidelines for the construction of multiple choice questions tests. Multiple Choice Questions (MCQs) are generally recognized as the most widely applicable and useful type of objective test item. They can be used to measure the most important educational outcomes: knowledge, understanding, judgment, and problem solving. The objective of this paper is to give guidelines for the construction of MCQ tests, including the construction of both the "single best option" type and the "extended matching item" type. Some templates for use in the "single best option" type of question are recommended. abstract_id: PUBMED:34049647 Multiple-Choice Tests: A-Z in Best Writing Practices. Multiple-choice tests are the most used method of assessment in medical education. However, there is limited literature in medical education and psychiatry to inform best practices in writing good-quality multiple-choice questions. Moreover, few physicians and psychiatrists have received training or have experience in writing them. This article highlights strategies for writing high-quality multiple-choice items and discusses some common flaws that can impact the validity and reliability of assessment examinations. abstract_id: PUBMED:36092385 Construction and Writing Flaws of the Multiple-Choice Questions in the Published Test Banks of Obstetrics and Gynecology: Adoption, Caution, or Mitigation? Background: Item-writing flaws (IWFs) in multiple-choice questions (MCQs) can affect test validity. The purpose of this study was to explore IWFs in published resources, estimate their frequency and pattern, rank and compare the study resources, and propose possible implications for teachers and test writers. Methods: This cross-sectional study was conducted from September 2017 to December 2020. MCQs from published MCQ books in Obstetrics and Gynecology were the target resources. They were stratified into four clusters (study-book related, review books, self-assessment books, and online-shared test banks). The sample size was estimated, and 2,300 out of 11,195 eligible MCQs were randomly selected. The MCQs (items) were judged on a 20-element compiled checklist organized under three sections: (1) structural flaws (seven elements), (2) test-wiseness flaws (five elements), and (3) irrelevant difficulty flaws (eight elements). Rating was done dichotomously, 0 = violating and 1 = not violating. Item flaws were recorded and analyzed using Excel spreadsheets and IBM SPSS.
Results: Twenty-three percent of the items (n = 537) were free from any violations, whereas 30% (n = 690) contained one violation, and 47% (n = 1,073) contained more than one violation. The most commonly reported IWF was "Options are Not in Order" (61%). The best questions with the fewest flaws (75th percentiles) were obtained from the self-assessment books, followed by the study-related MCQ books. The average scores of good-quality items in the self-assessment book cluster were significantly higher than in the other book clusters. Conclusion: There were variable presentations and percentages of item violations. Lower-quality questions were observed in review-related MCQ books and the online-shared test banks; using questions from these resources needs a caution or avoidance strategy. Relatively higher-quality questions were reported for the self-assessment books, followed by the study-related MCQ books; an adoption strategy may be applied, with mitigation if needed. abstract_id: PUBMED:38074575 Assessment of Higher Ordered Thinking in Medical Education: Multiple Choice Questions and Modified Essay Questions. Background: Multiple choice questions and Modified Essay Questions are two widely used methods of assessment in medical education. There is a lack of substantial evidence on whether both forms of questions can assess higher-order thinking. Objective: The objective of this paper is to assess the ability of a well-constructed Multiple-Choice Question (MCQ) to assess higher-order thinking skills compared with a Modified Essay Question (MEQ) in medical education. Methods: The medical education literature was searched for articles comparing multiple choice questions and modified essay questions, looking for credible evidence for using multiple choice questions to assess higher-order thinking. Results and Conclusion: A well-structured MCQ has the capacity to assess higher-order thinking, and the format offers many other advantages. MCQs should therefore be considered a preferable choice in undergraduate medical education: the literature shows that different levels of Bloom's taxonomy can be assessed with this format, and the argument that it is suitable only for assessing lower-order thinking (i.e., recall of knowledge) is not very convincing. abstract_id: PUBMED:38282273 Ten tips for effective use and quality assurance of multiple-choice questions in knowledge-based assessments. Multiple-choice questions (MCQs) are the most popular type of items used in knowledge-based assessments in undergraduate and postgraduate healthcare education. MCQs allow assessment of candidates' knowledge on a broad range of knowledge-based learning outcomes in a single assessment. Single-best-answer (SBA) MCQs are the most versatile and commonly used format. Although writing MCQs may seem straightforward, producing decent-quality MCQs is challenging and warrants a range of quality checks before an item is deemed suitable for inclusion in an assessment. Like all assessments, MCQ-based examinations must be aligned with the learning outcomes and learning opportunities provided to the students. This paper provides evidence-based guidance on the effective use of MCQs in student assessments, not only to make decisions regarding student progression but also to build an academic environment that promotes assessment as a driver for learning.
Practical tips are provided to help readers produce authentic MCQ items, along with appropriate pre- and post-assessment reviews, standard setting, and psychometric evaluation of MCQ-based assessments. Institutions need to develop an academic culture that fosters transparency, openness, equality and inclusivity. In line with contemporary educational principles, teamwork amongst teaching faculty, administrators and students is essential to establish effective learning and assessment practices. abstract_id: PUBMED:33305012 Problem-solving strategies used in anatomical multiple-choice questions. Background And Aims: Multiple-choice questions (MCQs) in the anatomical sciences are often perceived to be targeting recall of facts and regurgitation of trivial details. Moving away from this assumption requires the design of purposeful multiple-choice questions that focus on higher-order cognitive functions as opposed to rote memorization. In order to develop such questions, it was important to first understand the strategies that students use in solving multiple-choice questions. Using the think-aloud protocol, this study seeks to understand the strategies students use in solving multiple-choice questions. Specifically, it seeks to uncover patterns in the reasoning process and tactics used when solving higher- and lower-order MCQs in anatomy. The research also provides insights into how these strategies influence students' probability of answering questions correctly. Methods: Multiple-choice questions were created at three levels of cognitive functioning based on the ideas, connections, extensions (ICE) learning framework. The think-aloud protocol was used to unravel the problem-solving strategies used by 92 undergraduate anatomy students as they solved multiple-choice questions.
Six previously written MCQs were randomly selected from each of the faculty members and given an item quality score. The intervention group participated in a one-hour training session that focused on reviewing standard item-writing guidelines to improve in-house MCQs. The no-intervention group did not receive any training but did receive encouragement and an explanation of why good MCQ writing was important. The faculty members were then asked to revise their previously written questions, and these were given an item quality score. The item quality scores for each faculty member were averaged, and the difference from pre-training to post-training scores was evaluated. The results showed a significant difference between pre-training and post-training MCQ quality scores for the intervention group (p = 0.04). This pilot study provides evidence that the short training session was effective in improving the quality of in-house MCQs. Answer: Educator training has a significant impact on the quality of multiple-choice questions (MCQs). Studies have shown that faculty with training in effective MCQ writing made fewer errors in MCQ construction compared to those without such training (PUBMED:26277486). Specifically, faculty with MCQ writing training had significantly fewer errors, with a mean of 0.4 errors per question, as opposed to a mean of 1.5-1.7 errors per question for untrained faculty, residents, and students. Trained question writers also showed higher performance mainly in the reduced frequency of five specific errors (PUBMED:26277486). However, educator experience alone, without dedicated training, does not seem to affect the frequency of flaws in MCQs. Faculty without dedicated training, residents, and students performed similarly, indicating that experience in education does not necessarily translate to better MCQ writing skills (PUBMED:26277486). Moreover, a short training session for faculty on MCQ writing has been found to result in some improvement in their item-writing skills, particularly in the quality of options. However, such a brief intervention may not be sufficient to correct all flaws in writing MCQs, suggesting the need for more focused and possibly longer-duration training (PUBMED:33088746). In summary, educator training in MCQ writing is crucial for improving the quality of MCQs, while mere experience in education without specific training does not have the same effect. Continuous faculty development programs and more comprehensive training may be necessary to achieve significant improvements in MCQ writing quality (PUBMED:26277486, PUBMED:33088746).
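As an aside for readers who want to compute the item-analysis statistics referred to above (facility value, discrimination index, and nonfunctioning distractors; PUBMED:33088746), the sketch below shows the standard definitions in Python. It is not code from any cited study; the 27% upper/lower group convention and the 5% distractor threshold are common psychometric defaults, not values reported in the abstracts.

    import numpy as np

    def item_analysis(scores, group_frac=0.27):
        # scores: (n_examinees, n_items) array of 0/1 marks.
        scores = np.asarray(scores)
        n = scores.shape[0]
        facility = scores.mean(axis=0)          # proportion answering each item correctly
        order = np.argsort(scores.sum(axis=1))  # examinees ranked by total score
        k = max(1, int(round(group_frac * n)))  # size of the upper and lower groups
        lower, upper = scores[order[:k]], scores[order[-k:]]
        discrimination = upper.mean(axis=0) - lower.mean(axis=0)
        return facility, discrimination

    def nonfunctioning_distractors(choices, options, key, threshold=0.05):
        # choices: list of the option each examinee picked for one item;
        # options: all options of that item; key: the correct option.
        n = len(choices)
        return [opt for opt in options
                if opt != key and choices.count(opt) / n < threshold]

Typical screening keeps items with mid-range facility and clearly positive discrimination; the exact cut-offs used in the cited study are not reported in its abstract.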
Instruction: Cognitive impairment in older patients with breast cancer before systemic therapy: is there an interaction between cancer and comorbidity? Abstracts: abstract_id: PUBMED:24841981 Cognitive impairment in older patients with breast cancer before systemic therapy: is there an interaction between cancer and comorbidity? Purpose: To determine if older patients with breast cancer have cognitive impairment before systemic therapy. Patients And Methods: Participants were patients with newly diagnosed nonmetastatic breast cancer and matched friend or community controls age > 60 years without prior systemic treatment, dementia, or neurologic disease. Participants completed surveys and a 55-minute battery of 17 neuropsychological tests. Biospecimens were obtained for APOE genotyping, and clinical data were abstracted. Neuropsychological test scores were standardized using control means and standard deviations (SDs) and grouped into five domain z scores. Cognitive impairment was defined as any domain z score two SDs below the control mean, or two or more domain z scores 1.5 SDs below it. Multivariable analyses evaluated pretreatment differences considering age, race, education, and site; comparisons between patient cases also controlled for surgery. Results: The 164 patient cases and 182 controls had similar neuropsychological domain scores. However, among patient cases, those with stage II to III cancers had lower executive function compared with those with stage 0 to I disease, after adjustment (P = .05). The odds of impairment were significantly higher among older, nonwhite, less educated women and those with greater comorbidity, after adjustment. Patient case or control status, anxiety, depression, fatigue, and surgery were not associated with impairment. However, there was an interaction between comorbidity and patient case or control status; comorbidity was strongly associated with impairment among patient cases (adjusted odds ratio, 8.77; 95% CI, 2.06 to 37.4; P = .003) but not among controls (P = .97). Only diabetes and cardiovascular disease were associated with impairment among patient cases. Conclusion: There were no overall differences between patients with breast cancer and controls before systemic treatment, but there may be pretreatment cognitive impairment within subgroups of patient cases with greater tumor or comorbidity burden. abstract_id: PUBMED:34287690 Comorbidity, cognitive dysfunction, physical functioning, and quality of life in older breast cancer survivors. Purpose: Older breast cancer survivors (BCS) may be at greater risk for cognitive dysfunction and other comorbidities, both of which may be associated with physical and emotional well-being. This study sought to understand these relationships by examining the association of objective and subjective cognitive dysfunction with physical functioning and quality of life (QoL), moderated by comorbidities, in older BCS. Methods: A secondary data analysis was conducted on data from 335 BCS (stages I-IIIA) who were ≥ 60 years of age, received chemotherapy, and were 3-8 years post-diagnosis. BCS completed a one-time questionnaire and neuropsychological tests of learning, delayed recall, attention, working memory, and verbal fluency. Descriptive statistics and separate linear regression analyses testing the relationship of each cognitive assessment with physical functioning and QoL, controlling for comorbidities, were conducted. Results: BCS were on average 69.79 (SD = 3.34) years old and 5.95 (SD = 1.48) years post-diagnosis.
Most were stage II (67.7%) at diagnosis, White (93.4%), had at least some college education (51.6%), and reported on average 3 (SD = 1.81) comorbidities. All 6 physical functioning models were significant (p < .001), with more comorbidities and worse subjective attention identified as significantly related to decreased physical functioning. One model found that worse subjective attention was related to poorer QoL (p < .001). Objective cognitive function measures were not significantly related to physical functioning or QoL. Conclusions: A greater number of comorbidities and poorer subjective attention were related to poorer outcomes and should be integrated into research seeking to determine predictors of physical functioning and QoL in breast cancer survivors. abstract_id: PUBMED:28237423 What influences healthcare professionals' treatment preferences for older women with operable breast cancer? An application of the discrete choice experiment. Introduction: Primary endocrine therapy (PET) is used variably in the UK as an alternative to surgery for older women with operable breast cancer. Guidelines state that only patients with "significant comorbidity" or "reduced life expectancy" should be treated this way, and that age should not be a factor. Methods: A Discrete Choice Experiment (DCE) was used to determine the impact of key variables (patient age, comorbidity, cognition, functional status, cancer stage, cancer biology) on healthcare professionals' (HCP) treatment preferences for operable breast cancer among older women. Multinomial logistic regression was used to identify associations. Results: 40% (258/641) of questionnaires were returned. Five variables (age, co-morbidity, cognition, functional status and cancer size) independently demonstrated a significant association with treatment preference (p < 0.05). Functional status was omitted from the multivariable model due to collinearity, with all other variables correlating with a preference for operative treatment over no preference (p < 0.05). Only co-morbidity, cognition and cancer size correlated with a preference for PET over no preference (p < 0.05). Conclusion: The majority of respondents selected treatment in accordance with current guidelines; however, in some scenarios opinion was divided, and age did appear to be an independent factor that HCPs considered when making a treatment decision in this population.
Ongoing research is seeking to identify which assessment tools can best predict outcomes in this population, and thus guide experts in tailoring treatments to maximize benefits in older adults with breast cancer. abstract_id: PUBMED:31553501 Sleep disturbance and neurocognitive outcomes in older patients with breast cancer: Interaction with genotype. Background: Sleep disturbance and genetic profile are risks for cognitive decline in noncancer populations, yet their role in cancer-related cognitive problems remains understudied. This study examined whether sleep disturbance was associated with worse neurocognitive outcomes in breast cancer survivors and whether sleep effects on cognition varied by genotype. Methods: Newly diagnosed female patients (n = 319) who were 60 years old or older and had stage 0 to III breast cancer were recruited from August 2010 to December 2015. Assessments were performed before systemic therapy and 12 and 24 months later. Neuropsychological testing measured attention, processing speed, executive function, learning, and memory; self-perceived cognitive functioning was also assessed. Sleep disturbance was defined by self-report of routine poor or restless sleep. Genotyping included APOE, BDNF, and COMT polymorphisms. Random effects fluctuation models tested associations of between-person and within-person differences in sleep, genotype, and sleep-genotype interactions with cognition, controlling for age, reading level, race, site, and treatment. Results: One-third of the patients reported sleep disturbances at each time point. There was a sleep-APOE ε4 interaction (P = .001) in which patients with the APOE ε4 allele and sleep disturbances had significantly lower learning and memory scores than those who were APOE ε4-negative and without sleep disturbances. There was also a sleep disturbance-COMT genotype interaction (P = .02) in which COMT Val carriers with sleep disturbances had lower perceived cognition than noncarriers. Conclusions: Sleep disturbance was common and was associated with worse cognitive performance in older breast cancer survivors, especially those with a genetic risk for cognitive decline. Survivorship care should include sleep assessments and interventions to address sleep problems. abstract_id: PUBMED:10876704 Breast cancer in patients, 70 years or older. Over 30% of breast cancers are diagnosed after age 70. The incidence of breast cancer in the elderly has increased since 1960. Risk factors for breast cancer are no history of pregnancy, a first pregnancy after age 30, and the use of hormonal replacement therapy. The biology of breast cancer at advanced age indicates relatively slow, less aggressive, hormone-dependent tumour growth. In spite of these favourable characteristics, the prognosis is not better than at middle age. Over 20% of older patients die from co-existing diseases within 5 years after the diagnosis of breast cancer. This comorbidity, mostly cardiovascular or pulmonary, affects the possibilities and the outcome of treatment. Treatment of the primary tumour is performed according to the same guidelines as in younger patients. An indication exists for adjuvant hormonal treatment with tamoxifen in patients with oestrogen receptor-positive tumours. Hormonal treatment is the treatment of choice in metastatic disease. Chemotherapy is given in patients with oestrogen receptor-negative tumours and in patients with progressive hepatic or pulmonary metastases.
abstract_id: PUBMED:28159513 Functional status decline in older patients with breast and colorectal cancer after cancer treatment: A prospective cohort study. Objectives: The aim of the present study was to disentangle the impact of age and that of cancer diagnosis and treatment on functional status (FS) decline in older patients with cancer. Materials And Methods: Patients with breast and colorectal cancer aged 50-69 years and aged ≥70 years who had undergone surgery, and older patients without cancer aged ≥70 years, were included. FS was assessed at baseline and after 12 months of follow-up, using the Katz index for activities of daily living (ADL) and the Lawton scale for instrumental activities of daily living (IADL). FS decline was defined as a ≥1 point decrease on the ADL or IADL scale from baseline to 12 months of follow-up. Results: In total, 179 older patients with cancer (≥70 years), 341 younger patients with cancer (50-69 years) and 317 older patients without cancer (≥70 years) were included. FS decline was found in 43.6%, 24.6% and 28.1% of the groups, respectively. FS decline was significantly worse in older compared to younger patients with cancer receiving no chemotherapy (44.5% versus 17.6%, p < 0.001), but not for those who did receive chemotherapy (39.4% versus 30.8%, p = 0.33). Among the patients with cancer, FS decline was significantly associated with older age (OR 2.63), female sex (OR 3.72), colorectal cancer (OR 2.81), polypharmacy (OR 2.10) and, inversely, with baseline ADL dependency (OR 0.44). Conclusion: Cancer treatment and older age are important predictors of FS decline. The relation of baseline ADL dependency and chemotherapy with FS decline suggests that the fittest of the older patients with cancer were selected for chemotherapy. abstract_id: PUBMED:30945393 Association between cognitive impairment and guideline adherence for application of chemotherapy in older patients with breast cancer: Results from the prospective multicenter BRENDA II study. Background: This study examined the association between cognitive impairment and guideline adherence for application of chemotherapy in older patients with breast cancer. Patients And Methods: In the prospective multicenter cohort study BRENDA II, patients aged ≥65 years with primary breast cancer were sampled over a period of 4 years (2009-2012). A multiprofessional team (tumor board) discussed recommendations for adjuvant chemotherapy according to the German S3 guideline. Cognitive impairment was screened for with the clock-drawing test (CDT) prior to adjuvant treatment. Results: Two hundred and sixty-three patients were included in the study, and CDT data were available for 193 patients. Thirty-one percent of the patients had cognitive impairment of varying degrees of severity. In high-risk patients (n = 61), the tumor board recommended chemotherapy in 90% of cases, and in intermediate-risk patients (n = 170) in 27%. Not receiving a recommendation for chemotherapy in spite of a guideline recommendation was more frequent in patients with cognitive impairment (67%) than in patients without cognitive impairment (46%), with P = 0.02 (OR 2.4, 95% confidence interval [CI] 1.2-4.9). Age, education, migration background and comorbidities were not associated with chemotherapy recommendation by the tumor board among cognitively impaired patients. Once the tumor board had recommended chemotherapy, application of chemotherapy was similar in both groups of patients with or without cognitive impairment.
Conclusion: Almost one-third of older patients with breast cancer are affected by cognitive impairment prior to adjuvant treatment. In these patients, cognitive impairment was associated with a tumor board decision against chemotherapy in spite of a positive guideline recommendation. abstract_id: PUBMED:34080094 Impact of chemotherapy on cognitive functioning in older patients with HER2-positive breast cancer: a sub-study in the RESPECT trial. Purpose: To investigate whether postoperative adjuvant trastuzumab plus chemotherapy negatively affected cognitive functioning during the post-chemotherapy period compared with trastuzumab monotherapy in older patients with HER2-positive breast cancer. Methods: In the randomized RESPECT trial, women aged between 70 and 80 years with HER2-positive, stage I to IIIA invasive breast cancer who underwent curative operation were randomly assigned to receive either 1-year trastuzumab monotherapy or 1-year trastuzumab plus chemotherapy. Cognitive functioning was assessed using the Mini-Mental State Examination (MMSE) test at enrollment and 1 and 3 years after initiation of the protocol treatment. The primary outcome was change in the MMSE total score from baseline. Secondary outcomes included the prevalence of suspected mild cognitive impairment (MMSE total score < 28) and suspected dementia (MMSE total score < 24). Results: The analytical population consisted of 29 and 26 patients in the trastuzumab monotherapy and trastuzumab plus chemotherapy groups, respectively. The group differences in mean changes of the MMSE total score were 0.6 (95% confidence interval [CI] -0.3 to 1.6) at 1 year and 0.9 (95% CI -1.0 to 2.8) at 3 years (P = 0.136 for the group difference pooling the two visits). The prevalence of suspected mild cognitive impairment at 3 years was 41.7% in the trastuzumab monotherapy group and 28.6% in the trastuzumab plus chemotherapy group (P = 0.548). Conclusion: This randomized sub-study did not show worse cognitive functioning during the post-chemotherapy period with trastuzumab plus chemotherapy than with trastuzumab monotherapy in older patients with HER2-positive breast cancer. Trial Registration Number: NCT01104935 (first posted April 16, 2010).
Identifying and optimising pre-existing factors which may predict post-operative decline in functional status, such as cognitive impairment and functional decline over the preceding year, may help in reducing deterioration in functional status after breast cancer surgery. Methods which may be employed to detect and optimise these factors include geriatric assessment and exercise intervention. Answer: Yes, there appears to be an interaction between cancer and comorbidity in relation to cognitive impairment in older patients with breast cancer before systemic therapy. A study found that although there were no overall differences between patients with breast cancer and controls before systemic treatment, there may be pretreatment cognitive impairment within subgroups of patient cases with greater tumor or comorbidity burden. Specifically, comorbidity was strongly associated with cognitive impairment among patient cases but not among controls. Diabetes and cardiovascular disease were particularly associated with impairment among patient cases (PUBMED:24841981). Additionally, another study highlighted that a greater number of comorbidities and poorer subjective attention were related to poorer outcomes, suggesting that comorbidities should be integrated into research seeking to determine predictors of physical functioning and quality of life in breast cancer survivors (PUBMED:34287690). Furthermore, healthcare professionals' treatment preferences for older women with operable breast cancer were influenced by several factors including patient age, comorbidity, and cognition, indicating that these factors are considered in the decision-making process for treatment (PUBMED:28237423). This suggests that comorbidity and cognitive impairment are important considerations in the management of older breast cancer patients.
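The impairment criterion used in the first study (PUBMED:24841981) is a simple rule over domain z scores standardized against the control group, and is easy to operationalize. The sketch below illustrates that rule; it is not the study's code, and the domain names are hypothetical labels.

    def cognitively_impaired(domain_z):
        # domain_z: dict mapping cognitive domain -> z score,
        # standardized on control means and SDs.
        # Impaired if any domain is 2 SDs below the control mean,
        # or if two or more domains are 1.5 SDs below it.
        z = list(domain_z.values())
        return any(v <= -2.0 for v in z) or sum(v <= -1.5 for v in z) >= 2

    # Hypothetical profile: a single domain below -2 triggers the rule.
    print(cognitively_impaired({"attention": -0.4, "memory": -2.1,
                                "executive": -1.0, "language": 0.2,
                                "visuospatial": -0.8}))  # True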
Instruction: Preliminary investigation on prevalence of osteoporosis and osteopenia: Should we tune our focus on healthy adults? Abstracts: abstract_id: PUBMED:25407117 Preliminary investigation on prevalence of osteoporosis and osteopenia: Should we tune our focus on healthy adults? Aim: Osteoporosis and osteopenia are global health problems with an increasing trend, particularly in developed regions. Apart from the traditional, well-recognized high-risk groups (i.e., postmenopausal women and the elderly), the prevalence of these problems among adults should not be ignored, given the advantages of early detection and health promotion. Therefore, this preliminary study aimed to investigate the prevalence of osteoporosis and osteopenia among adult office workers, who represent a relatively large proportion of the population in urbanized cities. Methods: A GE-Lunar Achilles ultrasonometer was used to screen the bone mineral density (BMD) of 80 participants. Results: The BMD T-score ranged from -3 to 3.5. The majority of the participants had a normal BMD result (T-score ≥ -1), whereas 35% were classified as abnormal (T-score < -1), including 31.3% with osteopenia and 3.8% with osteoporosis. Conclusion: The high prevalence of abnormal BMD among healthy adults should be further studied in this population. The findings also suggest that the current neglect of bone health in adulthood may increase the prevalence of osteoporotic fractures in the future. abstract_id: PUBMED:29535940 Prevalence of Osteoporosis in Apparently Healthy Adults above 40 Years of Age in Pune City, India. Purpose: The aim of the study was to assess the prevalence of osteoporosis and changes in bone mass with increasing age, and to compare the bone health status of apparently healthy men, premenopausal women, and postmenopausal women. Methods: Data were collected on anthropometric and sociodemographic factors in 421 apparently healthy Indian adults (women = 228), 40-75 years of age, in a cross-sectional study in Pune city, India. Bone mineral density (BMD) was measured by dual-energy X-ray absorptiometry at two sites: the lumbar spine (LS) and the left femur. Individuals were classified as having osteoporosis or osteopenia based on the World Health Organization criteria of T-scores. Results: The mean age of the study population was 53.3 ± 8.4 years. Of the women, 44.3% were postmenopausal, with a mean age at menopause of 49.2 ± 3.5 years. Postmenopausal women showed a rapid decline in BMD with age till 50 years, while men showed a gradual decline. Premenopausal women showed no significant decline in BMD with age (P > 0.1). Significantly lower T-scores were observed at the LS in men compared to premenopausal women (P < 0.05). At the left femur, T-scores were lower in men compared to premenopausal women (P < 0.05) but not postmenopausal women (P > 0.1). The prevalence of osteoporosis in men at the LS was lower than in postmenopausal women but higher than in premenopausal women. Conclusion: In Indian men, lower T-scores compared with women indicate higher susceptibility to osteoporosis. In women, menopause causes a rapid decline in BMD. Therefore, both Indian men and postmenopausal women require adequate measures to prevent osteoporosis in later life. abstract_id: PUBMED:29030130 Behavioural and objective vestibular assessment in persons with osteoporosis and osteopenia: a preliminary investigation. Introduction: Calcium is vital for the functioning of the inner ear hair cells as well as for the neurotransmitter release that triggers the generation of a nerve impulse.
A reduction in calcium level could therefore impair peripheral vestibular functioning. However, the outcome of balance assessment has rarely been explored in cases with osteopenia and osteoporosis, medical conditions associated with reduced calcium levels. Objective: The present study aimed to investigate the impact of osteopenia and osteoporosis on the outcomes of behavioural and objective vestibular assessment tests. Methods: The study included 12 individuals each in the healthy control group and the osteopenia group, and 11 individuals in the osteoporosis group. The groups were divided based on bone mineral density findings. All the participants underwent behavioural tests (Fukuda stepping, tandem gait and subjective visual vertical) and objective assessment using cervical and ocular vestibular evoked myogenic potentials (cVEMP and oVEMP). Results: A significantly higher proportion of the individuals in the two clinical groups demonstrated abnormal results on the behavioural balance assessment tests (p < 0.05) than in the control group. However, there was no significant difference in the latencies or amplitudes of cVEMP and oVEMP between the groups. The proportion of individuals with absent oVEMP was significantly higher in the osteoporosis group than in the other two groups (p < 0.05). Conclusion: The findings of the present study confirm the presence of balance-related deficits in individuals with osteopenia and osteoporosis. Hence, clinical evaluations should include balance assessment as a mandatory aspect of the overall audiological assessment of individuals with osteopenia and osteoporosis. abstract_id: PUBMED:30525841 Nutrients associated with diseases related to aging: a new healthy aging diet index for elderly population. Introduction: Several indexes are used to measure the quality of nutrition in advanced age. None of them were designed to evaluate nutrition for avoiding disability in the elderly population. Objectives: To retrieve from the literature nutrients and intakes shown to be involved in aging, and to propose a new index, based on this information, for evaluating the quality of nutrition for preventing diseases related to aging. Methods: A bibliographic review was performed, retrieving information on nutrients associated with aging. All these nutrients were incorporated into a new Healthy Aging Diet Index (HADI). Next, a cross-sectional study was carried out with two convenience samples of elderly people, collecting nutritional and dietary data, calculating different validated indexes, and comparing them with HADI to validate the results. Results: Forty-eight manuscripts were retrieved for full-text analysis. Associations were found between cardiovascular diseases and macronutrients, dietary fibre, sodium and vitamin D; cancer and fatty acids; diabetes and fatty acids, fibre and simple sugars; osteopenia/osteoporosis and calcium and vitamin D; sarcopenia and proteins, calcium, and vitamin D; and between cognitive impairment and fatty acids and folates. Sample 2, associated with rural areas, obtained lower index scores. The behavior of HADI is similar to that of the other indexes (6.24/14 and 6.10/14 in samples 1 and 2, respectively). Conclusions: The presented collection of nutrients adds useful evidence for the design of diets that allow healthy aging. The proposed index is a specific nutritional measurement tool for studies aimed at preventing diseases related to aging.
abstract_id: PUBMED:30515581 Bone mineral density, vitamin D status, and calcium intake in healthy female university students from different socioeconomic groups in Turkey. Peak bone mass is reached in late adolescence. Low peak bone mass is a well-recognized risk factor for osteoporosis later in life. Our data do not support a link between vitamin D status, bone mineral density (BMD), and socioeconomic status (SES). However, there was a marked inadequacy of daily calcium intake and a high prevalence of osteopenia in females with low SES. Purpose: Our aims were to (1) examine the effects of different SES on BMD, vitamin D status, and daily calcium intake and (2) investigate any association between vitamin D status and BMD in female university students. Subjects And Methods: A questionnaire was used to obtain information about SES, daily calcium intake, and physical activity in 138 healthy female university students (age range 18-22 years). Subjects were stratified into lower, middle, and higher SES according to the educational and occupational levels of their parents. All serum samples were collected in spring for measurement of the 25-hydroxyvitamin D (25OHD) concentration. Lumbar spine and total body BMD were obtained by dual-energy X-ray absorptiometry (DXA) scan (Lunar DPX series). Osteopenia was defined as a BMD between 1.0 and 2.5 standard deviations (SDs) below the mean for healthy young adults on lumbar spine DXA. Results: No significant difference was found between the three socioeconomic groups in terms of serum 25OHD concentration, BMD levels, or BMD Z scores (p > 0.05). The daily intake of calcium was significantly lower (p = 0.02), and the frequency of osteopenia significantly higher (p = 0.02), in girls with low SES. There was no correlation of serum 25OHD concentration or calcium intake with BMD values or BMD Z scores (p > 0.05). The most important factor affecting BMD was weight (β = 0.38, p < 0.001). Conclusions: Low SES may be associated with sub-optimal bone health and predispose to osteopenia in later life, even in female university students. abstract_id: PUBMED:37129731 Global prevalence of osteosarcopenic obesity amongst middle aged and older adults: a systematic review and meta-analysis. Purpose: Osteosarcopenic obesity syndrome (OSO) is a recently recognized disorder encompassing osteopenia/osteoporosis, sarcopenia, and obesity. However, pooled evidence on the worldwide prevalence of OSO is scarce. Hence, this review aimed to determine the pooled prevalence of OSO in middle-aged and older adults. Methods: We conducted systematic searches in Scopus, Embase, PubMed Central, MEDLINE, ScienceDirect, and Google Scholar from inception until October 2022. We evaluated the quality of the included studies using the Newcastle-Ottawa scale. The meta-analysis results, using a random-effects model, included the pooled prevalence and 95% confidence intervals (CIs). Results: We included 20 studies with a total of 23,909 participants. Most of the studies were of good quality. The final pooled prevalence of OSO in middle-aged and older adults worldwide was 8% (95% CI: 6%-11%; n = 20). Females (pooled prevalence = 9%; 95% CI: 7%-12%; n = 17) had a higher burden of OSO than males (pooled prevalence = 5%; 95% CI: 3%-8%; n = 11). We also found that the burden was higher among studies reporting OSO prevalence only in the elderly population (pooled prevalence = 13%; 95% CI: 9%-17%). The asymmetric nature of the funnel plot indicates the presence of publication bias.
Additional sensitivity analysis did not reveal any significant variation in the pooled effect size estimation. Conclusion: Approximately one in ten middle-aged and older adults suffer from OSO. The burden was highest among females and older adults. Diagnostic and intervention packages targeting such patients should be developed and implemented in high-risk settings. abstract_id: PUBMED:33479804 Bone mineral density in healthy adult Indian population: the Chandigarh Urban Bone Epidemiological Study (CUBES). Osteoporosis is a disease with a high burden of morbidity. For its accurate diagnosis, indigenous data are needed as reference standards. However, normative data on bone density are lacking in India. Therefore, we aimed to determine the reference range for bone density for the healthy population of north India. Introduction: Osteoporosis is a major public health problem around the globe, including in India, resulting in significant morbidity, mortality, and health care burden. However, the reference values used for its diagnosis are largely based on data from the western population, which may lead to over- or underdiagnosis of osteoporosis in Indians. Our study aimed to determine the reference range for bone mineral density for the healthy population of India. Methods: This is a cross-sectional study of 825 subjects (men 380, women 445; median age 41 years, IQR 32-55 years) recruited by a house-to-house survey. The population was stratified into decade-wise groups, and biochemical measurements (renal and liver function tests, glycated hemoglobin, serum calcium, 25-hydroxyvitamin D, and parathyroid hormone) and bone mineral density were obtained in all the subjects. The T-scores for men aged > 50 years and post-menopausal women were calculated based on the data generated from this study in young men and women aged 20-40 years. Results: According to the BMD manufacturer's data, which are based on the western population, 70% of the Indian men (> 50 years) and 48% of the post-menopausal Indian women had osteopenia, while 18% of the men and 25% of the women had osteoporosis. However, according to the re-calculated T-scores from the current study, only 56% and 7.2% of men and 33% and 5% of women had osteopenia and osteoporosis, respectively. An age-related decline in bone mineral density, as seen in the western population, was also seen in both Indian men and women. Conclusion: We have established a reference database for BMD in the healthy Indian adult population, which may have clinical implications for the diagnosis and intervention strategies for the management of osteoporosis. abstract_id: PUBMED:31835241 Do Older Adults With Reduced Bone Mineral Density Benefit From Strength Training? A Critically Appraised Topic. Clinical Scenario: Reduced bone mineral density (BMD) is a serious condition in older adults. The mild form, osteopenia, is often a precursor of osteoporosis. Osteoporosis is a pathological condition and a global health problem, as it is one of the most common diseases in developed countries. Finding solutions for prevention and therapy should be prioritized. Therefore, this critically appraised topic focuses on strength training as a treatment to counteract a further decline in BMD in older adults. Clinical Question: Is strength training beneficial in increasing BMD in older people with osteopenia or osteoporosis?
Summary of Key Findings: Four of the 5 reviewed studies with the highest evidence showed a significant increase in lumbar spine BMD after strength training interventions in comparison with control groups. The fifth study confirmed maintenance of lumbar spine density with the exercises performed. Moreover, 3 reviewed studies revealed increased BMD at the femoral neck after strength training when compared with controls, which was significant in 2 of them. Clinical Bottom Line: The findings indicate that strength training has a significant positive influence on BMD in older women (ie, postmenopausal) with osteoporosis or osteopenia. However, it is not recommended to rely on strength training alone, as the increase in BMD may not appear fast enough to reach the minimal desired values. A combination of strength training and supplements/medication seems most adequate. Generalization of the findings to older men with reduced BMD should be done with caution due to the lack of studies. Strength of Recommendation: There is a grade B recommendation to support the validity of strength training for older women in the postmenopausal phase with reduced BMD. abstract_id: PUBMED:36712516 Healthy plant-based diet index as a determinant of bone mineral density in osteoporotic postmenopausal women: A case-control study. Introduction: The association between plant-based diet indices and bone mineral density (BMD) has not been studied in Iranian women with osteoporosis. This study aimed to evaluate the association between plant-based diet indices and BMD in postmenopausal women with osteopenia/osteoporosis. Materials And Methods: The present research was a case-control study conducted on 131 postmenopausal women with osteoporosis/osteopenia and 131 healthy women. The BMD of the femoral neck and lumbar vertebrae was measured by the dual-energy X-ray absorptiometry (DXA) method. Participants were asked to complete a validated semi-quantitative food frequency questionnaire (FFQ). We used three versions of plant-based diet indices, including the plant-based diet index (PDI), the healthy plant-based diet index (hPDI), and the unhealthy plant-based diet index (uPDI). Two multivariable logistic regression models (crude and adjusted) were used to assess the relationship of PDI, hPDI, and uPDI with the odds of abnormal femoral and lumbar BMD. Results: There was an inverse association between the last tertile of hPDI and femoral BMD abnormality in both adjusted models [Model 1: odds ratio (OR): 0.33; 95% confidence interval (CI): 0.19-0.63 and Model 2: OR: 0.30; 95% CI: 0.15-0.58]. Furthermore, we found an inverse relationship between hPDI and lumbar BMD abnormality in the first adjusted model (OR: 0.36; 95% CI: 0.19-0.67). A negative association was also observed between the second and last tertiles of hPDI and lumbar BMD abnormality (OR: 0.47; 95% CI: 0.24-0.90 and OR: 0.34; 95% CI: 0.17-0.64, respectively). The association of femoral BMD abnormality with the last tertile of uPDI compared to the first tertile was significant in both adjusted models (Model 1: OR: 2.85; 95% CI: 1.52-5.36 and Model 2: OR: 2.63; 95% CI: 1.37-5.06). We also observed a positive relationship between the last tertile of uPDI and lumbar BMD abnormality compared to the lowest tertile in both adjusted models (Model 1: OR: 4.16; 95% CI: 2.20-7.85; Model 2: OR: 4.23; 95% CI: 2.19-8.19).
Conclusion: Overall, the findings indicated that in postmenopausal women with osteoporosis, a healthy plant-based diet could prevent bone loss, and an unhealthy plant-based diet might have detrimental effects on BMD. abstract_id: PUBMED:32921573 Osteosarcopenia Predicts Falls, Fractures, and Mortality in Chilean Community-Dwelling Older Adults. Objectives: The objective of this study was to describe the prevalence of osteosarcopenia and its association with falls, fractures, and mortality in community-dwelling older adults. Design: Follow-up of ALEXANDROS cohorts designed to study disability associated with obesity in older adults. Setting And Participants: Community-dwelling people aged 60 years and older living in Chile. Measures: At baseline, 1119 of 2372 participants had a dual-energy X-ray absorptiometry scan and the measurements for the diagnosis of sarcopenia. World Health Organization standards for bone mineral density were used to classify them as normal, osteopenia, and osteoporosis. Sarcopenia was identified using the algorithm from the European Working Group on Sarcopenia in Older People 1, validated for the Chilean population. Osteosarcopenia was defined as having sarcopenia plus osteoporosis or osteopenia. Results: The sample of 1119 participants (68.5% female) had a mean age of 72 years. At baseline, osteoporosis was identified in 23.2%, osteopenia in 49.8%, sarcopenia in 19.5%, and osteosarcopenia in 16.4% of the sample. The prevalence of osteosarcopenia increases with age, reaching 33.7% for those older than 80 years. Sarcopenia was found in 34.4% of osteoporotic people and osteoporosis in 40.8% of those with sarcopenia. After 5640 person-years of follow-up, 86 people died. The mortality was significantly higher for the group with osteosarcopenia (15.9%) compared with those without the condition (6.1%). After an adjusted Cox Regression analysis, the hazard ratio for death in people with osteosarcopenia was 2.48. Falls, fractures, and functional impairment were significantly more frequent in osteosarcopenic patients. Conclusions And Implications: Osteosarcopenia is a common condition among older adults and is associated with an increased risk of falls, fractures, functional impairment, and mortality. Considering the high proportion of sarcopenia among osteoporotic patients and vice versa, screening for the second condition when the first is suspected should be advised. Answer: The preliminary investigations into the prevalence of osteoporosis and osteopenia suggest that there is a need to focus on healthy adults, not just traditional high-risk groups such as postmenopausal women and the elderly. Studies have found that a significant proportion of healthy adults, including office workers in urbanized cities, have abnormal bone mineral density (BMD) results, with osteopenia and osteoporosis being present in 31.3% and 3.8% of participants, respectively (PUBMED:25407117). This indicates that the prevalence of these bone health issues among adults should not be ignored, as early detection and health promotion could be beneficial. In Pune City, India, a study found that postmenopausal women showed a rapid decline in BMD with age, while men showed a gradual decline. The prevalence of osteoporosis in men at the lumbar spine was lower than in postmenopausal women but higher than in premenopausal women, suggesting that both Indian men and postmenopausal women require measures to prevent osteoporosis (PUBMED:29535940). 
Additionally, balance-related deficits have been confirmed in individuals with osteopenia and osteoporosis, which could impair vestibular functioning and increase the risk of falls (PUBMED:29030130). This further supports the need for comprehensive clinical evaluations that include balance assessment in adults with these conditions. Moreover, a study on the healthy aging diet index (HADI) found associations between various nutrients and diseases related to aging, including osteopenia/osteoporosis, indicating that diet plays a role in bone health and the prevention of age-related diseases (PUBMED:30525841). Furthermore, socioeconomic status (SES) has been linked to bone health, with females from lower SES backgrounds showing a higher frequency of osteopenia and inadequate daily calcium intake, which could predispose them to osteoporosis later in life (PUBMED:30515581). Given these findings, it is clear that there is a need to broaden the focus on osteoporosis and osteopenia prevention and management to include healthy adults from various backgrounds and not just traditionally recognized high-risk groups. Early intervention and a multidimensional approach that includes dietary considerations, balance assessments, and SES factors could be key in reducing the future prevalence of osteoporotic fractures and improving overall bone health in the adult population.
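To make the T-score arithmetic behind these classifications concrete, here is a minimal sketch, assuming hypothetical BMD values and reference statistics (none of the numbers below come from the cited studies). It shows how the same measured BMD can change WHO category when the young-adult reference population changes, which is the reclassification effect the CUBES study (PUBMED:33479804) reports.

```python
# Sketch: WHO T-score classification under two different young-adult
# reference populations. All numeric values are hypothetical placeholders.

def t_score(bmd: float, ref_mean: float, ref_sd: float) -> float:
    """T-score = (measured BMD - young-adult reference mean) / reference SD."""
    return (bmd - ref_mean) / ref_sd

def classify(t: float) -> str:
    """WHO cut-offs: normal T >= -1.0, osteopenia -2.5 < T < -1.0,
    osteoporosis T <= -2.5."""
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "osteoporosis"

bmd = 0.80  # hypothetical lumbar-spine BMD in g/cm^2
references = {
    "manufacturer (western) reference": (1.05, 0.10),  # assumed mean, SD
    "indigenous reference": (0.98, 0.11),              # assumed mean, SD
}
for label, (mean, sd) in references.items():
    t = t_score(bmd, mean, sd)
    print(f"{label}: T = {t:+.2f} -> {classify(t)}")
```

With these placeholder numbers the western reference labels the scan osteoporosis (T = -2.50) while the indigenous reference labels it osteopenia (T = -1.64), mirroring the direction of the over-diagnosis described above.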
Instruction: Can a soda-lime glass be used to demonstrate how patterns of strength dependence are influenced by pre-cementation and resin-cementation variables? Abstracts: abstract_id: PUBMED:22561646 Can a soda-lime glass be used to demonstrate how patterns of strength dependence are influenced by pre-cementation and resin-cementation variables? Objectives: To determine how the variability in biaxial flexure strength of a soda-lime glass analogue for a PLV and DBC material was influenced by precementation operative variables and following resin-cement coating. Methods: The flexural modulus of a transparent soda-lime glass was determined by longitudinally sectioning into rectangular bar-shaped specimens and the flexural moduli of three resin-based materials (Venus Flow, Rely-X Veneer and Clearfil Majesty Posterior) was also determined. Disc shaped soda-lime glass specimens (n=240) were divided into ten groups and were alumina particle air abraded, hydrofluoric (HF) acid-etched and resin-cement coated prior to biaxial flexure strength testing. Sample sets were profilometrically evaluated to determine the surface texture. One-way analyses of variance (ANOVA) and post hoc all paired Tukey tests were performed at a significance level of P < 0.05. The mean biaxial flexure strengths were plotted against resin-coating thickness and a regression analysis enabled estimation of the 'actual' magnitude of strengthening. Results: The mean three-point flexural modulus of the soda-lime glass was 40.0 (1.0) GPa and the Venus Flow, Rely-X Veneer and Clearfil Majesty Posterior were 3.0 (0.2) GPa, 6.0 (0.2) GPa and 14.8 (1.6) GPa, respectively. At a theoretical 'zero' resin-coating thickness an increase in biaxial flexure strength of 20.1% (63.2 MPa), 30.8% (68.8 MPa) and 36.3% (71.7 MPa), respectively, was evident compared with the control (52.6 (5.5) MPa). Conclusions: Disc-shaped specimens cut from round stock facilitated rapid fabrication of discs with uniform surface condition and demonstrated strength dependence was influenced by precementation parameters and resin-cementation variables.
The linear logarithmic regression curves fitted to the group mean BFS data plotted against the crosshead speed highlighted significant differences between the pattern of resin-strengthening for the cementation loads and testing conditions. Conclusions: The decrease in resin-penetration expected within the 'resin-ceramic hybrid layer' following removal of the 30 N seating load was proposed as the modifying resin-strengthening parameter. These observations are supported by the viscoelastic and creep behaviour of resins at slow testing rates which becomes the dominant or determining phenomenon. abstract_id: PUBMED:23499569 Testing rate and cementation seating load effects on resin-strengthening of a dental porcelain analogue. Objectives: To determine the resin-strengthening dependence of a soda-lime-glass analogue for dental porcelain as a function of biaxial flexure strength (BFS), test crosshead rate and cementation seating load. Methods: Disc-shaped soda-lime glass specimens were divided into twelve groups (n=24), alumina particle air abraded and hydrofluoric acid-etched. Specimens (Groups A-D) were stored in a desiccator prior to testing at crosshead rates of 0.01, 0.1, 1 and 10 mm/min, respectively. The remaining specimens were silane treated, Rely-X Veneer resin-coated with a seating load of 5 N (Groups E-H) and 30 N (Groups I-L) prior to light irradiation at 480 ± 20 mW/cm², 24 h dry storage and BFS testing at 0.01, 0.1, 1 and 10 mm/min, respectively. A linear logarithmic regression curve was fit to the raw data to elucidate static fatigue effects of the soda-lime-glass. Analysis of group means was performed utilising a general linear model univariate analysis and post hoc all paired Tukey tests (P < 0.05). Results: The linear logarithmic regression curve demonstrated the static fatigue effects of the soda-lime-glass analogue. Rely-X Veneer resin-coating (Groups E-L) resulted in significant increases in the mean BFS data for all crosshead rates examined (all P < 0.001). However, the pattern of rate dependence effects on resin-cementation deviated from the log relationship observed with the uncoated controls. Conclusion: This study further highlights that when slow crack growth is simulated during testing, valuable insights into the significant modification of a hitherto well-described phenomenon such as resin-strengthening mediated by the resin-ceramic hybrid layer is provided.
Comparisons of BFS group means were made by a one-way analysis of variance (ANOVA) and post-hoc Tukey test at α = 0.05. Results: All resin-coated sample groups (B-E) showed a statistically significant increase in mean BFS compared with the uncoated control (p < 0.01). There was a significant difference in BFS between the ambient and vacuum impregnated unpolished groups (D and E) (p < 0.01), with the greatest strengthening achieved using a vacuum impregnation technique. Significance: Results highlight the opportunity to further develop processes to apply thin conformal resin coatings, applied as a pre-cementation step to strengthen dental glass-ceramics. abstract_id: PUBMED:10546417 In-vitro study of resin-modified glass ionomer cements for cementation of orthodontic bands. Isolation, surplus removal and humidity as factors influencing the bond strength between enamel, cement and metal. The aim of this in vitro study was to investigate different light-cured and chemically cured resin-modified glass ionomer cements used for the cementation of orthodontic bands and to analyze various factors influencing the adhesive strength between enamel, cement and stainless steel. Four resin-modified glass ionomers (Fuji Ortho LC/GC, Fuji Duet/GC, Unitek Multi-Cure Glass Ionomer Orthodontic Band Cement/3M Unitek, Vitremer/3M) and 1 compomer (Band-Lok/Reliance) were examined. Flattened and polished bovine teeth embedded in polyurethane resin were used as enamel specimens. Before cementation, 50% of the specimens were moistened with the aerosol of an inhalation device, while the rest were dried with compressed air. Stainless steel cylinders (CrNi 18 10) were perpendicularly bonded onto the polished enamel using a custom-made cementation device and immediately topped with a pressure of 0.25 MPa. The cement was isolated with either Ketac Glaze/ESPE, Fuji Coat/GC, Cacao Butter/GC, Dryfoil/Jalenko or Final Varnish/VOCO, or was left uncoated. Eight minutes after the beginning of mixing, either the surplus cement was removed with a scalpel or surplus removal was simulated with ultrasound. After 24 hours storage in a water bath at 37 degrees C and 1,000 thermocycles the shear bond strength was determined. Significant differences with respect to the shear bond strength were found among the following cements, ranking from highest to lowest: Fuji Duet, Unitek cement > Fuji Ortho LC > Vitremer > Band-Lok. The application of a barrier coating significantly increased the shear bond strength of all cements except Fuji Ortho LC. The light-cured resin Ketac Glaze proved to be the most effective barrier coating. A dry enamel surface increased the bond strength of all investigated cements except Unitek cement. The use of ultrasound led to no significant reduction in shear bond strength in comparison with surplus removal with a scalpel. abstract_id: PUBMED:31433135 Longevity of Bond Strength of an Indirect Composite Resin to Dentin Using Conventional or Self-Adhesive Resin Cementation: Influence of Dentin Pretreatment with TiF₄. The purpose was to evaluate the influence of dentin pretreatment with titanium tetrafluoride (TiF₄) on the longevity of bond strength (BS) of an indirect composite to dentin, using conventional resin cementation strategy or a self-adhesive resin cement. Forty human third molars with exposed dentin surfaces were used. The teeth were divided into groups (n = 10), according to the cementation strategy and the presence or absence of pretreatment with TiF₄.
Microtensile strength testing and failure mode analysis were performed after 24 hours, 180 and 360 days of storage in water. Split-plot ANOVA and Tukey's test showed that BS was significantly higher when the conventional strategy was used, regardless of the time period and of whether TiF₄ pretreatment was applied (p < 0.0001). When TiF₄ was used for both cementing strategies, BS was lower after 360 days (p = 0.0019). Both cementing strategies led to the formation of a shallow hybrid layer, regardless of the presence of TiF₄. BS was higher when the conventional cementation strategy was used, regardless of TiF₄ pretreatment. TiF₄ used as a pretreatment agent associated with different types of resin cementation was unable to maintain adhesive bond strength in the long term. abstract_id: PUBMED:27982188 Tensile Strength of Resin Cements Used with Base Metals in a Simulating Passive Cementation Technique for Implant-Supported Prostheses. The aim of this study was to analyze the tensile strength of two different resin cements used in passive cementation technique for implant-supported prosthesis. Ninety-six plastic cylinders were waxed in standardized forms, cast in commercially pure titanium, nickel-chromium and nickel-chromium-titanium alloys. Specimens were cemented on titanium cylinders using self-adhesive resin cement or conventional dual-cured resin cement. Specimens were divided into 12 groups (n=8) according to metal, cement and ageing process. Specimens were immersed in distilled water at 37 °C for 24 h and half of them were thermocycled for 5,000 cycles. Specimens were submitted to bond strength test in a universal test machine EMIC-DL2000 at 5 mm/min speed. Statistical analysis evidenced higher tensile strength for self-adhesive resin cement than conventional dual-cured resin cement, whatever the used metal. Self-adhesive resin cement presented higher tensile strength compared to conventional dual-cured resin cement. In conclusion, metal type and ageing process did not influence the tensile strength results. abstract_id: PUBMED:36888845 Effect of aging and cementation systems on the bond strength to root dentin after fiber post cementation. This study evaluated the effect of aging and cementation of fiber posts using glass ionomer and resin cements on push-out bond strength, failure mode, and resin tag formation. One hundred and twenty bovine incisors were used. After post-space preparation, the specimens were randomly allocated into 12 groups (n = 10) according to the cementation system used: GC - GC Gold Label Luting & Lining; RL - RelyX Luting 2; MC - MaxCem Elite; RU - RelyX U200; and the aging periods (24 hours, 6 months, and 12 months). Slices from the cervical, middle, and apical thirds were obtained and analyzed by push-out bond strength test and confocal laser scanning microscopy. One-way ANOVA and Tukey's post-hoc test was used at a significance level of 5%. For the push-out bond strength test, no differences among GC, RU, and MC in the cervical and middle thirds were observed, regardless of the period of storage (P > 0.05). In the apical third, GC and RU showed similar bond strength but higher than other groups (P > 0.05). After 12 months, GC showed the highest bond strength (P < 0.05). Bond strength to post-space dentin decreased over time, regardless of the cementation system used. Cohesive failure was the most frequent, regardless of the period of storage, cementation system, and post-space third.
Tag formation was similar among all groups. After 12 months, GC showed the highest bond strength values. abstract_id: PUBMED:30920097 The effectiveness of glass ionomer cement as a fiber post cementation system in endodontically treated teeth. This study compared the performance of a glass ionomer (GC Gold Label 1, GIC) as a fiber post cementation system for glass fiber posts with a self-adhesive resin cement (Relyx U200, RUC) and a conventional resin cement system (Scotchbond Multi-Purpose and Relyx ARC, RAC). Thirty endodontically treated canines were randomly divided into three groups (n = 10), according to the fiber post cementation system: (RAC)-Scotchbond Multi-Purpose and Relyx ARC; (RUC)-Relyx U200 and (GIC)-GC Gold Label 1 Luting & Lining. Rhodamine was incorporated into the cementation system prior to the fiber post cementation. After glass fiber post cementation, roots were incubated in artificial saliva for 6 months. After that, specimens from the cervical, middle, and apical thirds of the post space were prepared and analyzed using a push-out bond strength test and confocal laser microscopy. One-way ANOVA and Tukey tests showed that GIC and RUC demonstrated similar push-out bond strength values, independently of the post space third (p > .05); however, values were greater than those shown by RAC (p < .05). For dentin penetrability, GIC and RUC also had similar results (p > 0.05) and lower than RAC (p < 0.05). Inside the root canal, the cementation system using glass ionomer cement (GC Gold Label 1 Luting & Lining) has similar push-out bond strength to the self-adhesive resin cement (Relyx U200) and these were higher than the conventional resin (Relyx ARC), despite its higher dentin penetrability. abstract_id: PUBMED:28028417 A Twofold Comparison between Dual Cure Resin Modified Cement and Glass Ionomer Cement for Orthodontic Band Cementation. Aim: To test the solubility of dual cure resin modified resin cement in a food simulating solution and the shear bond strength compared to conventional Glass ionomer cement. Materials And Method: The materials tested were self-adhesive dual cure resin modified cement and Glass Ionomer (GIC). Twenty Teflon moulds were divided into two groups of ten. The first group was injected and packed with the modified resin cement, the second group was packed with GIC. To test the solubility, each mould was weighed before and after being placed in an analytical reagent for 30 days. The solubility was measured as the difference between the initial and final drying mass. To measure the shear bond strength, 20 freshly extracted wisdom teeth were equally divided into two groups and embedded in self-cure acrylic resin. Four mm sections of stainless steel bands were cemented to the exposed buccal surfaces of teeth under a constant load of 500 g. Shear bond strength was measured using a computer controlled materials testing machine and the load required to deband the samples was recorded in Newtons. Results: GIC showed significantly higher mean weight loss and an insignificantly lower shear bond strength, compared to dual cure resin cement. Conclusion: It was found that dual cure resin modified cement was less soluble than glass ionomer cement and of comparable bond strength rendering it more useful clinically for orthodontic band cementation. Answer: Yes, a soda-lime glass can be used to demonstrate how patterns of strength dependence are influenced by pre-cementation and resin-cementation variables.
Studies have utilized soda-lime glass as an analogue for dental porcelain and leucite glass-ceramic to investigate the effects of various treatments and resin coatings on biaxial flexure strength (BFS) (PUBMED:22561646, PUBMED:36997430). One study determined the variability in BFS of soda-lime glass after it was subjected to alumina particle air abrasion, hydrofluoric acid etching, and resin-cement coating. The results showed that the strength dependence was influenced by these pre-cementation parameters and resin-cementation variables, with a significant increase in BFS observed at a theoretical 'zero' resin-coating thickness compared to the control (PUBMED:22561646). Another study investigated the effects of atmospheric moisture, resin-cementation seating load, and crosshead speed on the BFS of resin-strengthened soda-lime glass. The findings indicated that both the seating load and the speed of the applied load significantly influenced the BFS, with a notable interaction between these factors (PUBMED:24012519). Further research examined the resin-strengthening dependence of soda-lime glass as a function of BFS, test crosshead rate, and cementation seating load. The study highlighted that the pattern of rate dependence effects on resin-cementation deviated from the log relationship observed with uncoated controls, suggesting that slow crack growth during testing provides valuable insights into the modification of resin-strengthening mediated by the resin-ceramic hybrid layer (PUBMED:23499569). Additionally, the effectiveness of vacuum impregnation as a pre-cementation treatment to eliminate porosity at the ceramic-resin interface was studied, showing that vacuum impregnated resin coatings significantly increased the mean BFS compared to ambient pressure resin coatings (PUBMED:36997430). These studies collectively demonstrate that soda-lime glass can effectively be used to model and understand the influence of various pre-cementation and resin-cementation variables on the strength of dental materials.
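As a concrete illustration of the extrapolation step that PUBMED:22561646 describes ('regression analysis enabled estimation of the actual magnitude of strengthening'), here is a minimal sketch. Only the 52.6 MPa uncoated control mean is taken from the abstract; the thickness/strength pairs are hypothetical placeholders.

```python
# Sketch: regress group-mean biaxial flexure strength (BFS) on resin-coating
# thickness and read the intercept as the strength at a theoretical 'zero'
# coating thickness. Data points are hypothetical placeholders.
import numpy as np

thickness_um = np.array([40.0, 70.0, 100.0, 130.0])  # coating thickness (um)
mean_bfs_mpa = np.array([60.5, 58.2, 55.9, 53.6])    # group-mean BFS (MPa)

slope, intercept = np.polyfit(thickness_um, mean_bfs_mpa, deg=1)
control_bfs = 52.6  # uncoated control mean reported in the abstract (MPa)

strengthening_pct = 100.0 * (intercept - control_bfs) / control_bfs
print(f"BFS at zero coating thickness: {intercept:.1f} MPa")
print(f"estimated strengthening vs control: {strengthening_pct:.1f}%")
```

With these placeholder points the intercept lands near 63 MPa, i.e. roughly the ~20% zero-thickness strengthening the abstract attributes to the flowable resin.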
Instruction: Magnetic toys: forbidden for pediatric patients with certain programmable shunt valves? Abstracts: abstract_id: PUBMED:22937481 3T magnetic resonance imaging testing of externally programmable shunt valves. Background: Exposure of externally programmable shunt-valves (EPS-valves) to magnetic resonance imaging (MRI) may lead to unexpected changes in shunt settings, or affect the ability to reprogram the valve. We undertook this study to examine the effect of exposure to a 3T MRI on a group of widely used EPS-valves. Methods: Evaluations were performed on first generation EPS-valves (those without a locking mechanism to prevent changes in shunt settings by external magnets other than the programmer) and second generation EPS-valves (those with a locking mechanism). Fifteen new shunt-valves were divided into five groups of three identical valves each, and then exposed to a series of six simulated MRI scans. After each of the exposures, the valves were evaluated to determine if the valve settings had changed, and whether the valves could be reprogrammed. The study produced 18 evaluations for each line of shunt-valves. Results: Exposure of the first generation EPS-valves to a 3T magnetic field resulted in frequent changes in the valve settings; however, all valves retained their ability to be reprogrammed. Repeated exposure of the second generation EPS-valves had no effect on shunt valve settings, and all valves retained their ability to be interrogated and reprogrammed. Conclusions: Second generation EPS-valves with locking mechanisms can be safely exposed to repeated 3T MRI systems, without evidence that shunt settings will change. The exposure of the first generation EPS-valves to 3T MRI results in frequent changes in shunt settings that necessitate re-evaluation soon after MRI to avoid complications. abstract_id: PUBMED:34827547 Programmable Shunt Valves for Pediatric Hydrocephalus: 22-Year Experience from a Singapore Children's Hospital. (1) Background: pediatric hydrocephalus is a challenging condition. Programmable shunt valves (PSV) have been increasingly used. This study is undertaken, firstly, to objectively evaluate the efficacy of PSV as a treatment modality for pediatric hydrocephalus; and next, review its associated patient outcomes at our institution. Secondary objectives include the assessment of our indications for PSV, and corroboration of our results with published literature. (2) Methods: this is an ethics-approved, retrospective study. Variables of interest include age, gender, hydrocephalus etiology, shunt failure rates and incidence of adjustments made per PSV. Data including shunt failure, implant survival, and utility comparisons between PSV types are subjected to statistical analyses. (3) Results: in this case, 51 patients with PSV are identified for this study, with 32 index and 19 revision shunts. There are 3 cases of shunt failure (6%). The mean number of adjustments per PSV is 1.82 times and the mean number of adjustments made per PSV is significantly lower for MEDTRONIC™ Strata PSVs compared with others (p = 0.031). Next, PSV patients that are adjusted more frequently include cases of shunt revisions, PSVs inserted due to CSF over-drainage and tumor-related hydrocephalus. (4) Conclusion: we describe our institutional experience of PSV use in pediatric hydrocephalus and its advantages in a subset of patients whose opening pressures are uncertain and evolving.
abstract_id: PUBMED:29576902 Maladjustment of programmable ventricular shunt valves by inadvertent exposure to a common hospital device. Background: Programmable ventricular shunt valves are commonly used to treat hydrocephalus. They can be adjusted to allow for varying amounts of cerebrospinal fluid (CSF) flow using an external magnetic programming device, and are susceptible to maladjustment from inadvertent exposure to magnetic fields. Case Description: We describe the case of a 3-month-old girl treated for hydrocephalus with a programmable Strata™ II valve found at the incorrect setting on multiple occasions during her hospitalization despite frequent reprogramming and surveillance. We found that the Vocera badge, a common hands-free wireless communication system worn by our nursing staff, had a strong enough magnetic field to unintentionally change the shunt setting. The device is worn on the chest bringing it into close proximity to the shunt valve when care providers hold the baby, resulting in the maladjustment. Conclusion: Some commonly used medical devices have a magnetic field strong enough to alter programmable shunt valve settings. Here, we report that the magnetic field from the Vocera hands-free wireless communication system, combined with the worn position, results in shunt maladjustment for the Strata™ II valve. Healthcare facilities using the Vocera badges need to put protocols in place and properly educate staff members to ensure the safety of patients with Strata™ II valves. abstract_id: PUBMED:35995353 The Effects of Using Hearing Aids and Hearing Assistive Technologies on Programmable Ventriculoperitoneal Shunt. Background: To investigate interaction between behind-the-ear (BTE) hearing aids, hearing assistive technologies, and programmable shunt valve to understand how use of BTE hearing aids in patients who underwent ventriculoperitoneal shunt (VPS) surgery affects the settings of a programmable shunt valve. Methods: In this study, we investigated the magnetic field (MF) generation of 3 BTE hearing aids made by different companies, 1 frequency modulated system using telecoil technology, and 1 wireless microphone technology and their interactions with 2 programmable shunt valves. All measurements were made in a silent booth using 2 different models. The influence of MF strength in the distance modeling was investigated based on the distance from source auditory prostheses. The measurements were recorded using a Gauss meter. In the anatomical modeling, the change in the settings and interaction of the valve in a bust mannequin were investigated. Results: No MF created by BTE hearing aids was detected in the distance modeling. The highest value measured was 32.67 μT (<90 dB noise) when BTE hearing aids and frequency modulated systems were used, and this value decreased as the distance increased. No MF generation was observed at measurements done for distances >10 mm. In the anatomical modeling, the settings of both programmable valves did not change under all acoustic conditions. Conclusions: This is the first study to our knowledge examining the MF created by hearing aids and hearing assistive technologies and its impact on programmable valves and variations in their settings. Our findings showed that it is safe to use BTE hearing aids, frequency modulated systems, and wireless microphone technologies in patients with a programmable VPS. abstract_id: PUBMED:19057906 Magnetic toys: forbidden for pediatric patients with certain programmable shunt valves?
Background: Inadvertent adjustments and malfunctions of programmable valves have been reported in cases in which patients have encountered powerful electromagnetic fields such as those involved in magnetic resonance imaging, but the potential effects of magnetic toys on programmable valves are not well known. Materials And Methods: The magnetic properties of nine toy magnets were examined. To calculate the effect of a single magnet over a distance, the magnetic flux density was directly measured using a calibrated Hall probe at seven different positions between 0 and 120 mm from the magnet. Strata II small (Medtronic Inc.), Codman Hakim (Codman & Shurtleff), and Polaris (Sophysa) programmable valves were then tested to determine the effects of the toy magnets on each valve type. Results: The maximal flux density of different magnetic toys differed between 17 and 540 mT, inversely proportional to the distance between toy and measurement instrument. Alterations to Strata and Codman valve settings could be effected with all the magnetic toys. The distances that still led to an alteration of the valve settings differed from 10 to 50 mm (Strata), compared with 5 to 30 mm (Codman). Valve settings of Polaris could not be altered by any toy at any distance due to its architecture with two magnets adjusted in opposite directions. Conclusion: This is the first report describing changes in the pressure setting of some adjustable valves caused by magnetic toys in close contact. Parents, surgeons, neurologists, pediatric oncologists, and paramedics should be informed about the potential dangers of magnetic toys to prevent unwanted changes to pressure settings. abstract_id: PUBMED:23830575 Programmable shunt valves for the treatment of hydrocephalus: a systematic review. Objective: To evaluate the clinical effectiveness of programmable valves compared with non-programmable valves for the treatment of hydrocephalus. Methods: In this paper, the authors report a systematic review and meta-analysis of complications and revision rate for programmable and non-programmable valve implantation. Randomized or non-randomized controlled trials of hydrocephalus treated by programmable and non-programmable valves were considered for inclusion. Results: Seven published reports of eligible studies involving 1702 participants met the inclusion criteria. Compared with non-programmable valves, programmable valves had no significant difference in catheter-related complications [RR = 0.88, 95%CI (0.66,1.19), p = 0.10] and infection rate [RR = 1.25, 95%CI (0.92,1.69), p = 1.00]. There were significant differences in overall complications [RR = 0.80, 95%CI (0.67,0.96), p < 0.01], over-drainage or under-drainage complications [RR = 0.44, 95%CI (0.31,0.63), p < 0.01] and revision rate [RR = 0.56, 95%CI (0.45,0.69), p < 0.01] in favor of programmable valves. Conclusion: Although the studies seem to demonstrate a small advantage for the programmable shunts, the probable bias and the difficulties in patient selection are too important to make a general conclusion. abstract_id: PUBMED:34787715 Magnetic resonance imaging-related programmable ventriculoperitoneal shunt valve setting changes occur often. Purpose: Patients with programmable ventriculoperitoneal (VP) shunt valves undergo multiple skull radiographs to evaluate for setting changes resulting from MRI. Our purpose was to determine the rates of inadvertent, MRI-related, programmable VP shunt valve setting changes.
Materials And Methods: In this retrospective cohort with a study period of January 2015-December 2018, we reviewed the pre- and post-MRI skull radiographs of patients with programmable VP shunts and collected the following data: Demographics, commercial type of the valve used, magnetic field strength of the MRI device used, and whether a setting change occurred. We used the chi-square test to identify variables associated with valve setting change. Results: We identified 210 MRI exposure events in 156 patients, and an MRI-related valve setting change rate of 56.7%. The setting change rate was significantly higher with higher magnetic field strength (p = 0.03), and with Medtronic Strata™ valves compared to Codman Hakim™ valves (p < 0.0001). Conclusion: Inadvertent, MRI-related shunt valve setting changes are frequent with valves that lack a locking mechanism. Therefore, we suggest that when feasible, the clinicians could opt to manually reprogram the valves after the MRI to the preferred setting without the need for pre- and post-MRI radiographs. We believe that this protocol modification could help reduce ionizing radiation exposure and cost. Manufacturers may consider incorporating locking mechanisms into the design of such devices in order to reduce the unintended setting change rates. abstract_id: PUBMED:33007746 Interactions between programmable shunt valves and magnetically controlled growing rods for scoliosis. Objective: Although the advent of magnetic growing rod technology for scoliosis has provided a means to bypass multiple hardware lengthening operations, it is important to be aware that many of these same patients have a codiagnosis of hydrocephalus with magnet-sensitive programmable ventricular shunts. As the magnetic distraction of scoliosis rods has not previously been described to affect the shunt valve setting, the authors conducted an investigation to characterize the interaction between the two devices. Methods: In this ex vivo study, the authors carried out 360 encounters between four different shunt valve types at varying distances from the magnetic external remote control (ERC) used to distract the growing rods. Valve settings were examined before and after every interaction with the remote control to determine if there was a change in the setting. Results: The Medtronic Strata and Codman Hakim valves were found to have setting changes at distances of 3 and 6 inches but not at 12 inches. The Aesculap proGAV and Codman Certas valves, typically described as MRI-resistant, did not have any setting changes due to the magnetic ERC regardless of distance. Conclusions: Although it is not necessary to check a shunt valve after every magnetic distraction of scoliosis growing rods, if there is concern that the magnetic ERC may have been within 12 inches (30 cm) of a programmable ventricular shunt valve, the valve should be checked at the bedside with a programmer or with a skull radiograph along with postdistraction scoliosis radiographs. abstract_id: PUBMED:20150313 Programmable CSF shunt valves: radiographic identification and interpretation. The programmable CSF shunt valve has become an important tool in hydrocephalus treatment, particularly in the NPH population and in pediatric patients with complex hydrocephalus. The purpose of this study is to provide a single reference for the identification of programmable shunt valves and the interpretation of programmable shunt valve settings. Four major manufacturers of programmable shunts agreed to participate in this study.
Each provided radiographic images and legends for their appropriate interpretation. Issues of MR imaging compatibility for each valve are also discussed. abstract_id: PUBMED:37700950 Programmable Versus Differential Pressure Ventriculoperitoneal Shunts for Pediatric Hydrocephalus: A 20-Year Single-Center Experience From Saudi Arabia. Background Shunt malfunction is the most common complication after ventriculoperitoneal shunt (VPS) insertion for pediatric hydrocephalus. The incidence of shunt malfunction and the need for VPS revision may be related to the type of valve used in the shunt. Therefore, we aimed to compare the outcome of VPS in the pediatric age group stratified by differential pressure valves (DPV) and programmable shunt valves (PSV). Materials and methods This ethics-approved retrospective study was conducted at a tertiary care hospital in Saudi Arabia. We included 175 children with congenital hydrocephalus who underwent a shunt insertion or revision between 2003 and 2018 and followed them up to December 2022. The VPS complication and revision rates were compared with the patient's demographics and shunt valve types. The Kaplan-Meier method, log-rank test, and univariate and multivariate Cox proportional hazards regression were used to analyze several variables and subsequent shunt revisions. Results Females represented 52% of the study participants, and the mean age of the patients was 21.7 ± 38.4 months. The main indication for VPS was congenital hydrocephalus due to aqueductal stenosis (40%). The differential shunt valve was used in 78.9% and the PSV in 21.1% of the patients. Surgical complications occurred in 33.7% of the patients. Shunt malfunction and infection occurred in 16% and 11.4% of the patients, respectively. The VPS revision rate was significantly lower when PSV was used (odds ratio = 0.39, P < 0.05). Conclusion Overall, one-third of the studied pediatric cohort required shunt revision at some point during the 15-year follow-up. However, children with PSV had fewer revision rate-related complications compared to children with DPV during the first five years of follow-up. Answer: Magnetic toys can indeed pose a risk to pediatric patients with certain programmable shunt valves. A study examining the effects of magnetic toys on programmable valves found that the settings of Strata and Codman valves could be altered by all the magnetic toys tested. The distance at which these alterations could occur varied from 10 to 50 mm for Strata valves and 5 to 30 mm for Codman valves. However, Polaris valves were not affected by any toy at any distance due to their design, which includes two magnets adjusted in opposite directions (PUBMED:19057906).
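The distance dependence reported for the toy magnets (PUBMED:19057906) lends itself to a short worked example. The sketch below is a hedged illustration, not the study's analysis: it fits a power-law falloff to hypothetical Hall-probe readings and solves for the largest distance at which the field still exceeds an assumed valve-alteration threshold; the readings, the dipole-like model, and the 3 mT threshold are all invented for illustration.

```python
# Sketch: power-law fit of magnetic flux density versus distance, then the
# maximum distance at which a (hypothetical) alteration threshold is exceeded.
import numpy as np

distance_mm = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])
flux_mt = np.array([300.0, 110.0, 30.0, 6.0, 2.2, 0.8, 0.4])  # placeholder data

# Log-log linear fit: log B = log B0 - n * log d (dipole-like falloff)
slope, log_b0 = np.polyfit(np.log(distance_mm), np.log(flux_mt), deg=1)
n = -slope

threshold_mt = 3.0  # assumed field needed to rotate a valve's magnetic rotor
d_max_mm = (np.exp(log_b0) / threshold_mt) ** (1.0 / n)
print(f"fitted falloff exponent n = {n:.2f}")
print(f"setting change possible out to roughly {d_max_mm:.0f} mm")
```

Under these assumptions the sketch returns a critical distance of a few centimetres, the same order of magnitude as the 5-50 mm alteration ranges measured for the Strata and Codman valves.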
Instruction: Does accessory pathway significantly alter left ventricular twist/torsion? Abstracts: abstract_id: PUBMED:24372902 Does accessory pathway significantly alter left ventricular twist/torsion? A study in Wolff-Parkinson-White syndrome by velocity vector imaging. Background: The aim of this study was to determine the impact of manifest accessory pathway on left ventricle (LV) twist physiology in Wolff-Parkinson-White (WPW) patients. Although this issue was addressed in 1 study based on speckle tracking method, there was no comparative study with a different technique. We planned to use velocity vector imaging (VVI) to find out how much an accessory pathway can affect LV twist mechanics. Methods: Thirty patients were enrolled according to inclusion and exclusion criteria. Two serial comprehensive transthoracic echocardiography evaluations were performed before and after radiofrequency catheter ablation (RFCA) within 24 hours. Stored cine loops were analyzed using VVI technique and LV twist and related parameters were extracted. Results: Comparing pre- and post-RFCA data, no significant changes were observed in LV systolic and diastolic dimensions, LV ejection fraction (LVEF), and Doppler and tissue Doppler-related parameters. The VVI study revealed a remarkable rise in peak LV apical rotation (10.3° ± 3.0° to 13.8° ± 3.6°, P < 0.001) and basal rotation (-6.0° ± 1.8° to -7.7° ± 1.8°, P < 0.001) after RFCA. Subsequently, LV twist showed a surge from 14.7° ± 3.9° to 20.2° ± 4.4° (P < 0.001). LV untwisting rate changed significantly from -96 ± 67 to -149.0 ± 47.5°/sec (P < 0.001) and apical-basal rotation delay showed a remarkable decline after RFCA (106 ± 81 vs. 42.8 ± 26.0 msec, P < 0.001). Conclusion: Accessory pathways have a major impact on LV twist mechanics. abstract_id: PUBMED:25340769 Acute impact of pacing at different cardiac sites on left ventricular rotation and twist in dogs. Objectives: We evaluated the acute impact of different cardiac pacing sites on two-dimensional speckle-tracking echocardiography (STE) derived left ventricular (LV) rotation and twist in healthy dogs. Methods: Twelve dogs were used in this study. The steerable pacing electrodes were positioned into the right heart through the superior or inferior vena cava, and into the LV through the aorta across the aortic valve. The steerable pacing electrodes were positioned individually in the right atrium (RA), right ventricular apex (RVA), RV outflow tract (RVOT), His bundle (HB), LV apex (LVA) and LV high septum (LVS); individual pacing mode was applied at 10-minute intervals for at least 5 minutes from each position under fluoroscopy and ultrasound guidance and at stabilized hemodynamic conditions. LV short-axis images at the apical and basal levels were obtained during sinus rhythm and pacing. Offline STE analysis was performed. Rotation, twist, time to peak rotation (TPR), time to peak twist (TPT), and apical-basal rotation delay (rotational synchronization index, RSI) values were compared at various conditions. LV pressure was monitored simultaneously. Results: Anesthetic death occurred in 1 dog, and another dog was excluded because of bad imaging quality. Data from 10 dogs were analyzed. RVA, RVOT, HB, LVA, LVS, RARV (RA+RVA) pacing resulted in significantly reduced apical and basal rotation and twist, and significantly prolonged apical TPR, TPT and RSI compared to pre-pacing and RA pacing (all P < 0.05).
The apical and basal rotation and twist values were significantly higher during HB pacing than during pacing at ventricular sites (all P < 0.05, except basal rotation at RVA pacing). The apical TPR during HB pacing was significantly shorter than during RVOT and RVA pacing (both P < 0.05). The LV end systolic pressure (LVESP) was significantly lower during ventricular pacing than during pre-pacing and RA pacing. Conclusions: Our results show that RA and HB pacing result in less acute reduction in LV twist, rotation, and LVESP compared to ventricular pacing. abstract_id: PUBMED:26987134 Left His bundle branch block associated with left ventricular torsion and reduced ejection fraction. The influence of left His bundle branch block (LBBB) on left ventricular (LV) torsion in patients with cardiomyopathy remains to be elucidated. The aim of this study was to evaluate LV torsion associated with LBBB and the hemodynamic consequences of possible changes. We studied 64 patients with ischemic and dilated cardiomyopathy (LV ejection fraction less than 40%) divided into 2 groups, with narrow and middle (153 ms) duration QRS complexes. Despite similar LV contractility, patients with LBBB had much less pronounced LV rotation and torsion. Torsion in patients with LBBB and in those with narrow QRS complexes was estimated at 2.95 ± 3.34 and 5.87 ± 3.83, respectively (p < 0.01). Moreover, the group of patients with LBBB contained many more subjects with abnormal unidirectional rotation of the basal and apical parts than the group with narrow QRS complexes, namely 11 (50%) and 9 (21.9%), respectively (p < 0.001). Patients with LBBB and abnormal LV rotation showed a much longer delay of posterior wall contractility (63.3 ± 35.1 ms) compared with those having LBBB and multidirectional physiological LV rotation (8.0 ± 17.9 ms) (p < 0.001), which suggests a higher degree of mechanical desynchronization. It is concluded that LBBB has a negative effect on LV electrical activation and contractility, resulting in abnormal torsion and mechanical desynchronization. abstract_id: PUBMED:9106430 Idiopathic left ventricular tachycardia with left and right bundle branch block configurations. Introduction: Idiopathic left ventricular tachycardia typically has a right bundle branch block configuration. The purpose of this case report is to demonstrate that idiopathic ventricular tachycardia arising in or near the left posterior fascicle also may have a left bundle branch block configuration. Methods And Results: A 27-year-old woman underwent an electrophysiologic procedure because of recurrent, verapamil-responsive, wide QRS complex tachycardia. Two types of ventricular tachycardia (cycle lengths 330 to 340 msec) were reproducibly inducible, one with a right bundle branch block configuration and left-axis deviation that had been documented clinically, and the other with a left bundle branch block configuration and axis of zero. A Purkinje potential recorded at the junction of the left ventricular mid-septum and inferior wall preceded the ventricular complex by 40 msec in both tachycardias. A single application of radiofrequency energy at this site successfully ablated both ventricular tachycardias. Conclusion: The findings of this case report demonstrate that idiopathic ventricular tachycardia arising in or near the left posterior fascicle may have a left bundle branch block configuration. abstract_id: PUBMED:33370804 Novel left ventricular cardiac synchronization: left ventricular septal pacing or left bundle branch pacing?
It is well recognized that a high burden of right ventricular pacing results in deleterious clinical outcomes over the long term. His bundle pacing can achieve optimal ventricular synchronization; however, relatively high pacing thresholds, low R-wave amplitudes, and the long-term performance have been concerns. Recently, left ventricular (LV) septal endocardium pacing (LVSP) has demonstrated improved acute haemodynamics. Another novel technique of intraseptal left bundle branch pacing (LBBP) via a transvenous approach has been adopted rapidly and has demonstrated its feasibility and effectiveness. This article reviews the clinical application and differences between LVSP and LBBP. Compared with LVSP, LBBP has strict criteria for left conduction system capture and lead location. In addition to LV septal capture, it also stimulates the proximal left bundle branch, resulting in rapid and physiological LV activation. With uniformity and standardization of the implant procedure and definitions, it may be possible to achieve widespread application of this form of physiological pacing. abstract_id: PUBMED:32685155 Idiopathic Left Ventricular Tachycardia Originating in the Left Posterior Fascicle. Ventricular tachycardias originating from the Purkinje system are the most common type of idiopathic left ventricular tachycardia. The majority if not all of the reentrant circuit involved in this type of tachycardia is formed by the Purkinje fibres of the left bundle branch, particularly the left posterior fascicle. In general, slowly conducting Purkinje fibres (P1) form the antegrade limb, and normally conducting Purkinje fibres (P2) form the retrograde limb of the reentrant circuit of the ventricular tachycardia originating from the left posterior fascicle. Elimination of the critical Purkinje elements in the reentrant circuit is the route to successful ablation. While the reentrant circuit identified by activation mapping provides the roadmap to ablation targets, comparing the difference in the His-ventricular interval during sinus rhythm and tachycardia also helps to identify the critical site in the reentrant circuit. abstract_id: PUBMED:9160234 Mechanisms of idiopathic left ventricular tachycardia. Idiopathic left ventricular tachycardia (ILVT) differs from idiopathic right ventricular outflow tract (RVOT) tachycardia with respect to mechanism and pharmacologic sensitivity. ILVT can be categorized into three subgroups. The most prevalent form, verapamil-sensitive intrafascicular tachycardia, originates in the region of the left posterior fascicle of the left bundle. This tachycardia is adenosine insensitive, demonstrates entrainment, and is thought to be due to reentry. The tachycardia is most often ablated in the region of the posteroinferior interventricular septum. A second type of ILVT is a form analogous to adenosine-sensitive RVOT tachycardia. This tachycardia appears to originate from deep within the interventricular septum and exits from the left side of the septum. This form of VT also responds to verapamil and is thought to be due to cAMP-mediated triggered activity. A third form of ILVT is propranolol sensitive. It is neither initiated nor terminated by programmed stimulation, does not terminate with verapamil, and is transiently suppressed by adenosine, responses consistent with an automatic mechanism. Recognition of the heterogeneity of ILVT and its unique characteristics should facilitate appropriate diagnosis and therapy in this group of patients.
abstract_id: PUBMED:9120154 Repetitive monomorphic tachycardia from the left ventricular outflow tract: electrocardiographic patterns consistent with a left ventricular site of origin. Objectives: This study sought to characterize the electrocardiographic patterns predictive of left ventricular sites of origin of repetitive monomorphic ventricular tachycardia (RMVT). Background: RMVT typically arises from the right ventricular outflow tract (RVOT) in patients without structural heart disease. The incidence of left ventricular sites of origin in this syndrome is unknown. Methods: Detailed endocardial mapping of the RVOT was performed in 33 consecutive patients with RMVT during attempted radiofrequency ablation. Left ventricular mapping was also performed if pace maps obtained from the RVOT did not reproduce the configuration of the induced tachycardia. Results: Pace maps identical in configuration to the induced tachycardia were obtained from the RVOT in 29 of 33 patients. Application of radiofrequency energy at sites guided by pace mapping resulted in elimination of RMVT in 24 (83%) of 29 patients. In four patients (12%), pace maps obtained from the RVOT did not match the induced tachycardia. All four patients had a QRS configuration during RMVT with precordial R wave transitions at or before lead V2. In two patients, RMVT was mapped to the mediosuperior aspect of the mitral valve annulus, near the left fibrous trigone; catheter ablation at that site was successful in both. In two patients, RMVT was mapped to the basal aspect of the superior left ventricular septum. Catheter ablation was not attempted because His bundle deflections were recorded from this site during sinus rhythm. Conclusions: RMVT can arise from the outflow tract of both the right and left ventricles. RMVTs with a precordial R wave transition at or before lead V2 are consistent with a left ventricular origin. abstract_id: PUBMED:32318885 How His bundle pacing prevents and reverses heart failure induced by right ventricular pacing. Ideal heart performance demands vigorous systolic contractions and rapid diastolic relaxation. These sequential events are precisely timed and interdependent and require the rapid synchronous electrical stimulation provided by the His-Purkinje system. Right ventricular (RV) pacing creates slow asynchronous electrical stimulation that disrupts the timing of the cardiac cycle and results in left ventricular (LV) mechanical asynchrony. Long-term mechanical asynchrony produces LV dysfunction, remodeling, and clinical heart failure. His bundle pacing preserves synchronous electrical and mechanical LV function, prevents or reverses RV pacemaker-induced remodeling, and reduces heart failure. abstract_id: PUBMED:29016834 The effect of ventricular pre-excitation on ventricular wall motion and left ventricular systolic function. Aims: The relationship between ventricular pre-excitation and left ventricular dysfunction has been described in the absence of sustained supraventricular tachycardia in a series of case reports. There have been no systematic studies about the effect of ventricular pre-excitation with different accessory pathway locations on ventricular wall motion and left ventricular (LV) systolic function. 
Methods And Results: Thirty patients were selected for each of 4 groups, including those with right septal pathways (Group 1), right free-wall pathways (Group 2), left free-wall pathways (Group 3), and non-pre-excited patients undergoing electrophysiological evaluation for supraventricular tachycardia. We analysed the influence of the location of the accessory pathway on ventricular wall motion, systolic function, ventricular synchronism, and LV size. Right-sided accessory pathways were associated with abnormal motion of the interventricular septum, LV dyssynchrony, decreased LV systolic function, and increased LV diameter. Eighteen of 60 cases (30.0%) with right-sided accessory pathways had LV dyssynchrony, and these patients had lower LV ejection fraction and higher LV end-diastolic diameter. Conclusion: Right-sided accessory pathways may impair ventricular wall motion and LV systolic function, resulting in decreased LV ejection fraction and increased LV end-diastolic diameter. These effects occurred in patients with LV dyssynchrony. These effects, including LV dyssynchrony, resolved after radiofrequency ablation. A right-sided free-wall accessory pathway may have more detrimental effects than a septal accessory pathway. Left ventricular dyssynchrony and abnormal interventricular septal motion appeared to be responsible for the pathogenesis of LV dysfunction and remodelling. Answer: Yes, an accessory pathway can significantly alter left ventricular (LV) twist/torsion. A study conducted on patients with Wolff-Parkinson-White (WPW) syndrome using velocity vector imaging (VVI) found that the presence of an accessory pathway had a major impact on LV twist mechanics. The study observed a remarkable increase in peak LV apical rotation and basal rotation after radiofrequency catheter ablation (RFCA) of the accessory pathway. Consequently, LV twist showed a significant surge, and LV untwisting rate changed significantly, indicating that the removal of the accessory pathway normalized the twist mechanics (PUBMED:24372902).
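A small worked example may help fix the twist arithmetic used throughout these abstracts: LV twist is the instantaneous difference between apical and basal rotation, and the untwisting rate is the time derivative of twist in diastole. In the sketch below, only the four peak rotation values are taken from PUBMED:24372902; the waveforms themselves are schematic placeholders.

```python
# Sketch: compute LV twist and untwisting rate from schematic rotation traces.
# Peak rotation values from PUBMED:24372902; waveform shapes are invented.
import numpy as np

t = np.linspace(0.0, 1.0, 200)  # one cardiac cycle (s), systole ends ~0.6 s

def schematic_rotation(peak_deg: float) -> np.ndarray:
    # Half-sine rise and fall over systole, flat in late diastole.
    return peak_deg * np.sin(np.pi * np.clip(t / 0.6, 0.0, 1.0))

for label, apical_peak, basal_peak in [
    ("pre-RFCA", 10.3, -6.0),
    ("post-RFCA", 13.8, -7.7),
]:
    apical = schematic_rotation(apical_peak)
    basal = schematic_rotation(basal_peak)
    twist = apical - basal                 # degrees
    untwist_rate = np.gradient(twist, t)   # deg/s; negative during untwisting
    print(f"{label}: peak twist = {twist.max():.1f} deg, "
          f"peak untwisting rate = {untwist_rate.min():.0f} deg/s")
```

Because the schematic apical and basal traces peak simultaneously, the computed peak twist equals the sum of the peak rotations (16.3 deg pre-RFCA); in real traces the peaks are offset in time, which is why the study's measured twist (14.7 deg) is smaller than that sum.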
Instruction: Does electrocardiographic Q wave burden predict the extent of scarring or hibernating myocardium as quantified by positron emission tomography? Abstracts: abstract_id: PUBMED:15685303 Does electrocardiographic Q wave burden predict the extent of scarring or hibernating myocardium as quantified by positron emission tomography? Background: The extent of Q wave 'burden' on electrocardiograms (ECGs) has not been correlated with the extent of scarring and hibernation as determined quantitatively by positron emission tomography (PET). Objective: A retrospective study was performed to identify if ECG Q wave burden predicts the extent of scarring or mismatch (hibernating myocardium) as defined by rubidium-82/F-18 fluorodeoxyglucose PET viability imaging. Patients And Methods: Eighty-three consecutive patients with coronary artery disease undergoing rubidium-82/F-18 fluorodeoxyglucose viability imaging (mean age 67.9±11 years, with a mean ejection fraction of 27±7%) formed the study population. Resting ECG was interpreted for the presence or absence of Q waves using standard ECG criteria for Q wave myocardial infarction. Patients were divided into two groups based on their Q wave burden on ECG (small to moderate scar: zero to four Q waves; large scar: five or more Q waves). Automated analysis was used to calculate the extent of scarring and mismatch (hibernating myocardium) on PET as a percentage of left ventricular myocardium. Mean PET scar and mismatch scores were calculated for the two groups. Results: The mean PET scar scores were significantly different between the small to moderate ECG scar group (13.9±7.3% of the left ventricle) and the large scar group (20.6±8.1% of the left ventricle; P=0.001). The mismatch scores for the small to moderate scar group (4.6±2.8%) were not significantly different from those of the large scar group (4.05±2.8%; P=0.7). Conclusions: ECG Q wave 'burden' was associated with the presence of scars as defined by PET but did not accurately predict the amount of hibernating myocardium. abstract_id: PUBMED:23595888 Identification of therapeutic benefit from revascularization in patients with left ventricular systolic dysfunction: inducible ischemia versus hibernating myocardium. Background: Although the recent Surgical Treatment for Ischemic Heart Failure (STICH) substudy reported that revascularization of viable myocardium did not improve survival, these results were limited by the viability imaging technique used and the lack of inducible ischemia information. We examined the relative impact of stress-rest rubidium-82/F-18 fluorodeoxyglucose positron emission tomography-identified ischemia, scar, and hibernating myocardium on the survival benefit associated with revascularization in patients with systolic dysfunction. Methods And Results: The extent of perfusion defects and metabolism-perfusion mismatch was measured with an automated quantitative method in 648 consecutive patients (age, 65±12 years; 23% women; mean left ventricular ejection fraction, 31±12%) undergoing positron emission tomography. Follow-up time began at 92 days (to avoid waiting-time bias); deaths before 92 days were excluded from the analysis. During a mean follow-up of 2.8±1.2 years, 165 deaths (27.5%) occurred. Cox proportional hazards modeling was used to adjust for potential confounders, including a propensity score to adjust for nonrandomized treatment allocation. Early revascularization was performed within 92 days of positron emission tomography in 199 patients (33%).
Hibernating myocardium, ischemic myocardium, and scarred myocardium were associated with all-cause death (P=0.0015, 0.0038, and 0.0010, respectively). An interaction between treatment and hibernating myocardium was present such that early revascularization in the setting of significant hibernating myocardium was associated with improved survival compared with medical therapy, especially when the extent of viability exceeded 10% of the myocardium. Conclusions: Among patients with ischemic cardiomyopathy, hibernating, but not ischemic, myocardium identifies which patients may accrue a survival benefit with revascularization versus medical therapy. abstract_id: PUBMED:28966019 N-Terminal Pro B-Type Natriuretic Peptide and High-Sensitivity Cardiac Troponin T Levels Are Related to the Extent of Hibernating Myocardium in Patients With Ischemic Heart Failure. Background: Increased N-terminal pro b-type natriuretic peptide (NT-proBNP) and high-sensitivity cardiac troponin T (hs-cTnT) can identify patients with heart failure (HF) who are at increased risk of cardiac events. The relationship of these biomarkers to the extent of hibernating myocardium and scar has not been previously characterized in patients with ischemic left ventricular dysfunction and HF. Methods: Patients with ischemic HF meeting recruitment criteria and undergoing perfusion and fluorodeoxyglucose-positron emission tomography to define myocardial hibernation and scar were included in the study. A total of 39 patients (mean age 67 ± 8 years) with New York Heart Association class II-IV HF and ischemic cardiomyopathy (ejection fraction [EF], 27.9% ± 8.5%) were enrolled in the study. Results: Serum NT-proBNP and hs-cTnT levels were elevated in patients with ≥ 10% hibernating myocardium compared with those with < 10% (NT-proBNP, 7419.10 ± 7169.5 pg/mL vs 2894.6 ± 2967.4 pg/mL; hs-cTnT, 789.3 ± 1835.3 pg/mL vs 44.8 ± 78.9 pg/mL; P < 0.05). The overall area under the receiver operating characteristic curve for NT-proBNP and hs-cTnT to predict hibernating myocardium was 0.76 and 0.78, respectively (P < 0.05). The NT-proBNP (P = 0.02) and hs-cTnT (P < 0.0001) levels also correlated with hibernation, particularly in patients with ≥ 10% scar, independent of EF, age, and estimated glomerular filtration rate. No differences were noted in biomarker levels for patients with vs those without ≥ 10% scar. Conclusions: NT-proBNP and hs-cTnT levels are elevated in patients with ischemic HF and hibernation and are correlated with the degree of hibernation but not with the presence or extent of scar. Taken together, these data support the novel concept that NT-proBNP and hs-cTnT release in patients with ischemic HF reflects the presence and extent of hibernating myocardium. abstract_id: PUBMED:14975469 Ischemic and viable myocardium in patients with non-Q-wave or Q-wave myocardial infarction and left ventricular dysfunction: a clinical study using positron emission tomography, echocardiography, and electrocardiography. Objectives: We investigated whether patients with non-Q-wave myocardial infarction (NQMI) have more ischemic viable myocardium (IVM) than patients with Q-wave myocardial infarction (QMI). Background: Non-Q-wave myocardial infarction is associated with higher incidences of cardiac events than QMI, suggesting more myocardium at risk in NQMI.
Methods: To identify myocardial ischemia, hibernation, and scar, resting and stress rubidium-82 perfusion and F-18 fluorodeoxyglucose metabolic positron emission tomographic (PET) imaging was performed in 64 consecutive patients with NQMI (n = 21) or QMI (n = 43). Echocardiography was performed for assessment of left ventricular function and wall motion index (WMI). The relationships between PET, echocardiographic, and electrocardiographic findings were analyzed. Results: There were no significant differences in left ventricular ejection fraction (LVEF) between NQMI and QMI groups (28 ± 10% vs. 25 ± 11%, p > 0.05). Ischemic and viable myocardium was more common in NQMI than in QMI (91% vs. 61%, p < 0.05). The total amount of IVM was significantly higher in NQMI than in QMI (6.5 ± 5.2 vs. 2.9 ± 2.8 segments, p < 0.001). Neither the number of Q waves, residual ST-segment depression of ≥0.5 mm or elevation of ≥1 mm, nor LVEF and WMI were significant predictors for IVM. Wall motion index correlated with scar segments (r = 0.54, p < 0.001) and LVEF (r = -0.67, p < 0.001). Conclusions: Ischemic and viable myocardium is common in patients with NQMI and left ventricular dysfunction, suggesting that aggressive approaches should be taken to salvage the myocardium at risk in such patients. abstract_id: PUBMED:7811155 Postinfarction hibernating myocardium. The detection of hibernating myocardium after infarction is important because it justifies the discussion concerning the revascularisation of infarcted zones irrigated by occluded or severely stenosed vessels, but with an adequate collateral circulation to allow hibernation. The detection of hibernating myocardium is particularly important in patients without the classical indications for revascularisation, such as residual spontaneous ischaemia or ischaemia provoked by exercise or pharmacological stress testing. All techniques currently in use tend to overestimate the size of the necrosed, fibrous scar, compared with the amount of viable myocardium. Improved regional myocardial function after revascularisation is the most convincing proof of hibernating myocardium but it can only be obtained retrospectively. The detection of a reserve of contractility in the necrosed territory by an inotropic stimulus is well adapted to the demonstration of stunned myocardium but this method has not been proved in hibernating myocardium. Thallium scintigraphy is certainly useful in the prospective diagnosis of hibernating myocardium but the protocol of examination should be adapted to this specific problem. There is little available data concerning the evaluation of hibernating myocardium by positron emission tomography: the technical advantages of this method in assessing myocardial viability should enable a more accurate evaluation of post-infarction hibernating myocardium. Adequate revascularisation of necrosed territories depends on a deeper understanding and more precise prospective assessment of postinfarction hibernating myocardium. abstract_id: PUBMED:35150659 Prediction of nonviable myocardium by ECG Q-Wave parameters: A 3.0 T cardiovascular magnetic resonance study. Introduction: The presence of a Q-wave on a 12-lead electrocardiogram (ECG) has been considered a marker of a large myocardial infarction (MI). However, the correlation between the presence of Q-waves and nonviable myocardium is still controversial.
The aims of this study were to 1) test Q-wave area (QWA), a novel ECG approach, to predict transmural extent and scar volume using a 3.0 Tesla scanner, and 2) assess the accuracy of QWA and transmural extent. Methods: Consecutive patients with a history of coronary artery disease who came for myocardial viability assessment by CMR were retrospectively enrolled. Q-wave measurement parameters including duration and maximal amplitude were obtained from each surface lead. A 3.0 Tesla CMR was performed to assess LGE and viability. Results: A total of 248 patients were enrolled in the study (with presence (n = 76) or absence (n = 172) of pathologic Q-waves). Overall prevalence of pathologic Q-waves was 27.2% (for LAD infarction patients), 20.0% (for LCX infarction patients), and 16.8% (for RCA infarction patients). Q-wave area demonstrated high performance for predicting the presence of a nonviable segment in LAD territory (AUC 0.85, 0.77-0.92) and a lower, but still significant, performance in LCX (0.63, 0.51-0.74) and RCA territory (0.66, 0.55-0.77). Q-wave area greater than 6 ms·mV demonstrated high performance in predicting the presence of myocardial scar larger than 10% (AUC 0.82, 0.76-0.89). Conclusion: Q-wave area, a novel Q-wave parameter, can predict non-viable myocardial territories and the presence of a significant myocardial scar extension. abstract_id: PUBMED:28120237 Hibernating substrate of ventricular tachycardia: a three-dimensional metabolic and electro-anatomic assessment. Purpose: Hibernating myocardium (HM) is associated with sudden cardiac death (SCD). Little is known about the electrophysiological properties of HM and the basis of its association with SCD. We aimed to electrophysiologically characterize HM in patients with ventricular tachycardia (VT). Methods: Endocardial voltage mapping, metabolic 18FDG-positron emission tomography (PET) and perfusion 82Rb, 201Tl, or 99mTc scans were performed in 61 ischemic heart disease patients with VT. Hibernating areas were identified, which was followed by three-dimensional PET reconstructions and integration with voltage maps to allow hybrid metabolic-electro-anatomic assessment of the arrhythmogenic substrate. Results: Of 61 patients with ischemic heart disease and refractory VT, 7 were found to have hibernating myocardium (13%). A total of 303 voltage points were obtained within hibernating myocardium (8.2 points per 10 cm²) and displayed abnormal voltage in 48.5 and 78.3% of bipolar and unipolar recordings, respectively, with significant heterogeneity of bipolar (p < 0.0001) and unipolar voltage measurements (p = 0.0004). Hibernating areas in 6 of 7 patients contained all three categories of bipolar voltage-defined scar (<0.5 mV), border zone (0.5-1.5 mV), and normal myocardium (>1.5 mV). The characteristics of local electrograms were also assessed and found abnormal in most recordings (76.6%, with 10.2% fractionated and 5.3% isolated potentials). Exit sites of clinical VTs were determined in 6 patients, of which 3 were located within hibernating myocardium. Conclusions: Hibernating myocardium displays abnormal and heterogeneous electrical properties and seems to contribute to the substrate of VT. These observations may underlie the vulnerability to reentry and SCD in patients with hypoperfused yet viable myocardium. abstract_id: PUBMED:16647885 Prediction of arrhythmic events with positron emission tomography: PAREPET study design and methods.
Background: In medically-treated patients with ischemic cardiomyopathy, myocardial viability is associated with a worse prognosis than scar. The risk is especially great with hibernating myocardium (chronic regional dysfunction with reduced resting flow), and the excess mortality appears to be due to sudden cardiac death (SCD). Hibernating myocardium also results in sympathetic nerve dysfunction, which has been independently associated with risk of SCD. Objectives: PAREPET is a prospective, observational cohort study funded by NHLBI. It is designed to determine whether hibernating myocardium and/or inhomogeneity of sympathetic innervation by positron emission tomography imaging identifies patients with ischemic cardiomyopathy who are at high risk for SCD and cardiovascular mortality. Methods: Patients with documented ischemic cardiomyopathy, an ejection fraction of ≤35%, and with no plans for coronary revascularization will be recruited. Major exclusion criteria include: history of resuscitated SCD, sustained VT, ICD discharge, or unexplained syncope; recent myocardial infarction (30 days), percutaneous coronary intervention (3 months), coronary bypass surgery (1 year); or comorbidities that would be expected to reduce life expectancy to <2 years. All patients will undergo transthoracic echocardiography, and dynamic cardiac positron emission tomography to quantify resting perfusion (13N-ammonia), norepinephrine uptake as an index of sympathetic innervation (11C-meta-hydroxyephedrine), and metabolic viability (18F-2-deoxyglucose during glucose-insulin clamp). The development of SCD or cardiovascular mortality will be determined by telephone follow-up every three months. In patients with an implantable cardiac defibrillator, appropriate device discharge will be considered a surrogate for SCD. Conclusion: The PAREPET study will prospectively determine whether the amount of viable dysfunctional myocardium and/or cardiac sympathetic dysinnervation is associated with the risk of SCD. It is anticipated that the results of this trial will more specifically identify myocardial substrates of SCD. This will help target therapies intended to reduce arrhythmic death to those patients with the greatest likelihood of benefit. abstract_id: PUBMED:10573488 Can the surface electrocardiogram be used to predict myocardial viability? Objective: To investigate whether QRS morphology on the surface ECG can be used to predict myocardial viability. Design: ECGs of 58 patients with left ventricular impairment undergoing positron emission tomography (PET) were studied. 13N-ammonia (NH3) and 18F-fluorodeoxyglucose (FDG) were the perfusion and the metabolic markers, respectively. The myocardium is scarred when the uptake of both markers is reduced (matched defect). Reduced NH3 uptake with persistent FDG uptake (mismatched defect) represents hibernating myocardium. First, the relation between pathological Q waves and myocardial scarring was investigated. Second, the significance of QR and QS complexes in predicting hibernating myocardium was determined. Results: As a marker of matched PET defects, Q waves were specific (79%) but not sensitive (41%), with a 77% positive predictive accuracy and a poor (43%) negative predictive accuracy. The mean size of the matched PET defect associated with Q waves was 20% of the left ventricle. This was not significantly different from the size of the matched PET defects associated with no Q waves (18%).
Among the regions associated with Q waves on the ECG, there were 16 regions with QR pattern (group A) and 23 regions with QS pattern (group B). The incidence of mismatched PET defects was 19% in group A and 30% in group B (NS). Conclusions: Q waves are specific but not sensitive markers of matched defects representing scarred myocardium. Q waves followed by R waves are not more likely to be associated with hibernating myocardium than QS complexes. abstract_id: PUBMED:27056601 Prospective Evaluation of 18F-Fluorodeoxyglucose Uptake in Postischemic Myocardium by Simultaneous Positron Emission Tomography/Magnetic Resonance Imaging as a Prognostic Marker of Functional Outcome. Background: The immune system orchestrates the repair of infarcted myocardium. Imaging of the cellular inflammatory response by (18)F-fluorodeoxyglucose ((18)F-FDG) positron emission tomography/magnetic resonance imaging in the heart has been demonstrated in preclinical and clinical studies. However, the clinical relevance of post-MI (18)F-FDG uptake in the heart has not been elucidated. The objective of this study was to explore the value of (18)F-FDG positron emission tomography/magnetic resonance imaging in patients after acute myocardial infarction as a biosignal for left ventricular functional outcome. Methods And Results: We prospectively enrolled 49 patients with ST-segment-elevation myocardial infarction and performed (18)F-FDG positron emission tomography/magnetic resonance imaging 5 days after percutaneous coronary intervention and follow-up cardiac magnetic resonance imaging after 6 to 9 months. In a subset of patients, (99m)Tc-sestamibi single-photon emission computed tomography was performed with tracer injection before revascularization. Cellular innate immune response was analyzed at multiple time points. Segmental comparison of (18)F-FDG uptake and late gadolinium enhancement showed substantial overlap (κ=0.66), whereas quantitative analysis demonstrated that (18)F-FDG extent exceeded late gadolinium enhancement extent (33.2±16.2% left ventricular myocardium versus 20.4±10.6% left ventricular myocardium, P<0.0001) and corresponded to the area at risk (r=0.87, P<0.0001). The peripheral blood count of CD14(high)/CD16(+) monocytes correlated with the infarction size and (18)F-FDG signal extent (r=0.53, P<0.002 and r=0.42, P<0.02, respectively). (18)F-FDG uptake in the infarcted myocardium was highest in areas with transmural scar, and the mean standardized uptake value (SUVmean) was associated with left ventricular functional outcome independent of infarct size (Δ ejection fraction: P<0.04, Δ end-diastolic volume: P<0.02, Δ end-systolic volume: P<0.005). Conclusions: In this study, the intensity of (18)F-FDG uptake in the myocardium after acute myocardial infarction correlated inversely with functional outcome at 6 months. Thus, (18)F-FDG uptake in infarcted myocardium may represent a novel biosignal of myocardial injury. Answer: The electrocardiographic (ECG) Q wave burden has been studied to determine if it predicts the extent of scarring or hibernating myocardium as quantified by positron emission tomography (PET). A retrospective study found that ECG Q wave burden was associated with the presence of scars as defined by PET but did not accurately predict the amount of hibernating myocardium (PUBMED:15685303). This suggests that while Q wave burden on an ECG can indicate the presence of myocardial scarring, it is not a reliable indicator of the extent of hibernating myocardium.
Another study aimed to identify the therapeutic benefit from revascularization in patients with left ventricular systolic dysfunction and found that hibernating myocardium, but not ischemic myocardium, identifies which patients may accrue a survival benefit with revascularization versus medical therapy (PUBMED:23595888). This indicates that the presence of hibernating myocardium, rather than just ischemic myocardium, is a critical factor in determining the potential benefit of revascularization. Furthermore, a study on N-Terminal Pro B-Type Natriuretic Peptide and High-Sensitivity Cardiac Troponin T levels found that these biomarkers are related to the extent of hibernating myocardium in patients with ischemic heart failure, supporting the concept that these biomarkers reflect the presence and extent of hibernating myocardium (PUBMED:28966019). In contrast, a 3.0 Tesla cardiovascular magnetic resonance study found that a novel ECG approach, Q-wave area (QWA), can predict non-viable myocardial territories and the presence of a significant myocardial scar extension (PUBMED:35150659). This suggests that certain ECG parameters may have predictive value for myocardial viability and scar extent, although this may not be directly related to the traditional concept of Q wave burden. Overall, while ECG Q wave burden is associated with the presence of myocardial scars, it does not seem to be a reliable predictor of the extent of hibernating myocardium as quantified by PET. Other methods and biomarkers may provide more accurate predictions of hibernating myocardium and its clinical implications.
Instruction: Ultrasound changes after rotator cuff repair: is supraspinatus tendon thickness related to pain? Abstracts: abstract_id: PUBMED:23352550 Ultrasound changes after rotator cuff repair: is supraspinatus tendon thickness related to pain? Background: Little is known about the morphology of healing rotator cuffs after surgical repair. This investigation aimed to determine whether there are changes in tendon thickness, subacromial bursa, anatomical footprint, tendon vascularity, and capsular thickness after rotator cuff repair, and whether supraspinatus tendon thickness correlates with pain. Methods: Fifty-seven patients completed a validated pain questionnaire. Using a standardized protocol, their shoulders were scanned by the same ultrasonographer at 1 week, 6 weeks, 3 months, and 6 months after arthroscopic repair by a single surgeon. The contralateral shoulders, if uninjured, were also scanned. Results: Of 57 patients, 4 re-tore their tendons at 6 weeks and 4 re-tore at 3 months. Sixteen of the remaining 49 patients had intact contralateral supraspinatus tendons. The repaired supraspinatus tendon thickness remained unchanged throughout the 6 months. Compared to week 1, at 6 months, bursal thickness decreased from 1.9 (0.7) mm to 0.7 (0.5) mm (P < .001); anatomical footprint increased from 7.0 (2.0) mm to 9.3 (1.5) mm; tendon vascularity decreased from mild to none (P < .001); posterior capsule thickness decreased from 2.3 (0.8) mm to 1.3 (0.6) mm (P < .001). Frequency and severity of pain and shoulder stiffness decreased (P < .001). There was no correlation between tendon thickness and pain. Conclusion: After rotator cuff repair, there was an immediate increase in subacromial bursa thickness, tendon vascularity, and posterior glenohumeral capsular thickness. These normalized after 6 months. Tendon thickness was unchanged while footprint contact was comparable with the contralateral tendons. There was no correlation between tendon thickness and pain. abstract_id: PUBMED:28451606 Ultrasound and Functional Assessment of Transtendinous Repairs of Partial-Thickness Articular-Sided Rotator Cuff Tears. Background: Partial-thickness articular-sided rotator cuff tears are a frequent source of shoulder pain. Despite conservative measures, some patients continue to be symptomatic and require surgical management. However, there is some controversy as to which surgical approach results in the best outcomes for grade 3 tears. Hypothesis/purpose: The purpose of this study was to evaluate repair integrity and the clinical results of patients treated with transtendinous repair of high-grade partial-thickness articular-sided rotator cuff tears. Our hypothesis was that transtendinous repairs would result in reliable healing and acceptable functional outcomes. Study Design: Case series; Level of evidence, 4. Methods: Twenty patients with a minimum follow-up of 2 years were included in the study. All patients underwent arthroscopic repair of high-grade partial-thickness rotator cuff tears utilizing a transtendinous technique by a single surgeon. At latest follow-up, the repair integrity was evaluated using ultrasound imaging, and functional scores were calculated. Results: Ultrasound evaluation demonstrated that 18 of 20 patients had complete healing with a normal-appearing rotator cuff. Two patients had a minor residual partial tear. Sixteen of 20 patients had no pain on the visual analog scale. Four patients complained of mild intermittent residual pain.
All patients were rated as "excellent" by both the University of California at Los Angeles Shoulder Score and the Simple Shoulder Test. Conclusion: The transtendon technique for the repair of articular-sided high-grade partial rotator cuff tears results in reliable tendon healing and excellent functional outcomes. abstract_id: PUBMED:30897462 Non-traumatic chronic shoulder pain is not associated with changes in rotator cuff interval tendon thickness. Objective: To determine whether the thickness of the rotator interval tendons is different when comparing both symptomatic and non-symptomatic sides in people with chronic shoulder pain, and to those free of pain. Furthermore, to calculate the level of association between the rotator interval tendon thicknesses and perceived shoulder pain-function. Design: A cross-sectional, observational study. Method: The supraspinatus, subscapularis and biceps brachii tendon thickness of sixty-two patients with chronic shoulder pain was determined from standardized ultrasonography measures performed on both shoulders, whereas only the dominant arm was measured for the control subjects. Findings: Supraspinatus, subscapularis and biceps brachii tendon thickness was comparable between sides in the symptomatic group and was also comparable between the symptomatic and asymptomatic participants. In addition, the correlation between the tendon thickness and shoulder pain-function was non-significant. Interpretations: Tendon thickness was unaltered in people with chronic shoulder pain. These findings do not rule out the possibility that other changes in the tendon are present, such as changes in the elastic properties and cell population, and this should be explored in future studies. abstract_id: PUBMED:38020508 Role of ultrasound and MRI in the evaluation of postoperative rotator cuff. Rotator cuff tears are common shoulder injuries in patients above 40 years of age, causing pain, disability, and reduced quality of life. Most recurrent rotator cuff tears happen within three months. Surgical repair is often necessary in patients with large or symptomatic tears to restore shoulder function and relieve symptoms. However, 25% of patients experience pain and dysfunction even after successful surgery. Imaging plays an essential role in evaluating patients with postoperative rotator cuff pain. Ultrasound and magnetic resonance imaging are the most commonly used imaging modalities for evaluating the rotator cuff. Ultrasound is sometimes the preferred first-line imaging modality, given its easy availability, lower cost, ability to perform dynamic tendon evaluation, and reduced post-surgical artifacts compared to magnetic resonance imaging. It may also be superior in terms of earlier diagnosis of smaller re-tears. Magnetic resonance imaging is better for assessing the extent of larger tears and for detecting other complications of rotator cuff surgery, such as hardware failure and infection. However, postoperative imaging of the rotator cuff can be challenging due to the presence of hardware and the variable appearance of the repaired tendon, which can be confused with a re-tear. This review aims to provide an overview of the current practice and findings of postoperative imaging of the rotator cuff using magnetic resonance imaging and ultrasound. We discuss the advantages and limitations of each modality and the normal and abnormal imaging appearance of the repaired rotator cuff tendon. abstract_id: PUBMED:26614474 Proximal Biceps Tendon and Rotator Cuff Tears.
The long head of biceps tendon (LHBT) is frequently involved in rotator cuff tears and can cause anterior shoulder pain. Tendon hypertrophy, hourglass contracture, delamination, tears, and tendon instability in the bicipital groove are common macroscopic pathologic findings affecting the LHBT in the presence of rotator cuff tears. Failure to address LHBT disorders in the setting of rotator cuff tear can result in persistent shoulder pain and poor satisfaction after rotator cuff repair. Tenotomy or tenodesis of the LHBT are effective options for relieving pain arising from the LHBT in the setting of reparable and selected irreparable rotator cuff tears. abstract_id: PUBMED:25489559 Classification of rotator cuff tendinopathy using high definition ultrasound. Background: Ultrasound is a valid, cost-effective tool in screening for rotator cuff pathology, with high levels of accuracy in detecting full-thickness tears. To date there is no rotator cuff tendinopathy classification using ultrasound. The aims of this study are to define a valid high-definition ultrasound rotator cuff tendinopathy classification, which has discriminant validity between groups based upon anatomical principles. Methods: 464 women, aged 65-87, from an established general population cohort underwent bilateral shoulder ultrasound and musculoskeletal assessment. Sonographer accuracy was established in a separate study by comparing ultrasound findings to the gold standard intra-operative findings. Results: There were 510 normal tendons, 217 abnormal tendons, 77 partial tears, and 124 full-thickness tears. There was no statistical difference in age or the proportion with pain between the abnormal enthesis and partial tear groups; however, both groups were statistically older (p<0.001) and had a greater proportion with pain (p<0.001 & p=0.050) than normal tendons. The full-thickness tears were statistically older than normal tendons (p<0.001), but not abnormal/partially torn tendons. The proportion with pain was significantly greater than both groups (p<0.001 & p=0.006). Symptomatic shoulders had a larger median tear size than asymptomatic shoulders (p=0.006). Using tear size as a predictor of pain likelihood, optimum sensitivity and specificity occurred when dividing tears into groups up to 2.5 cm and >2.5 cm, which corresponds with anatomical descriptions of the width of the supraspinatus tendon. Conclusion: The classification system is as follows: Normal Tendons; Abnormal enthesis/Partial-thickness tear; Single tendon full-thickness tears (0-2.5 cm); Multi-tendon full-thickness tears (>2.5 cm). abstract_id: PUBMED:35145804 Ultrasound-Guided Injection of a Tendon-Compatible Hyaluronic Acid Preparation for the Management of Partial Thickness Rotator Cuff Tear: A Case Report. Partial-thickness rotator cuff (RC) tear constitutes the most common cause of shoulder pain and disability. Its management is challenging, and a conservative approach is suggested as a first-line treatment. Nonetheless, minimally invasive approaches have been described in clinical trials, such as ultrasound (US)-guided injection of a tendon-compatible hyaluronic acid (HA) preparation in the rupture site. HA is believed to fill the intradermal space and thus support the regeneration process by its integration in the damaged extracellular matrix.
A reduced healing period required for a tendon tear when treated with a tendon-compatible HA preparation compared to placebo has been previously described in the literature, enabling a more rapid return to exercise. The current study aims to provide a thorough analysis of the case of a regular CrossFit practitioner with a partial-thickness bursal-side RC tear of the anterior supraspinatus (SS) fibers, measuring 7 mm on the anteroposterior axis and 5 mm on the longitudinal axis on magnetic resonance imaging (MRI), that caused pain and limited functional status. Two US-guided injections of a specific high molecular weight (one million Daltons) tendon-compatible HA preparation (12 mg/1.2 mL) separated by six weeks were performed. A supervised rehabilitation protocol was then followed and training was progressively introduced. At the 12-week follow-up visits, a reduction in pain intensity was noticed as well as an improvement of the functional status. At the six-month, one-year, and two-year follow-ups, no pain and normal joint function were observed, despite the patient's continued engagement in overload and overhead activities during CrossFit practice. MRI performed one year after the intervention showed a reduction of the injury size, with only a partial intrasubstance tear of 4 mm observed in the SS tendon. US imaging at the two-year follow-up showed a further reduction in tear size to 3.9 mm in length. No adverse effects were reported. It is thus believed that US-guided injections of tendon-compatible HA in partial-thickness RC tears can be a feasible and effective treatment option in the management of this frequent pathology, and more studies, particularly randomized controlled trials, should be implemented to substantiate and validate this approach. abstract_id: PUBMED:19714274 Pain and stiffness in partial-thickness rotator cuff tears. To evaluate the null hypothesis of no difference in degree of pain or stiffness between patients with partial- and full-thickness tears of the rotator cuff, we measured pain and stiffness in a cohort of consecutive patients who underwent arthroscopy for rotator cuff-related conditions. Pain was measured with a visual analogue scale, and range of motion was measured with a goniometer. Included in the study were 410 shoulders (410 patients), of which 214 had no tear, 66 had articular-sided partial-thickness tears, and 83 had single-tendon full-thickness tears. There was no statistical difference for measurements of pain or stiffness between patients with partial- and full-thickness tears, and hence the null hypothesis was upheld. Neither pain nor stiffness should be used as a diagnostic indicator for differentiation of partial- and full-thickness rotator cuff tears. abstract_id: PUBMED:23908255 Tendon transfers for irreparable rotator cuff tears. Tendon transfer is one treatment option for patients with massive irreparable rotator cuff tears. Although surgical indications are not clearly defined, the traditional thought is that the ideal candidate is young and lacks significant glenohumeral arthritis. The proposed benefits of tendon transfers are pain relief and potential increase in strength. The biomechanical rationale for the procedure is to restore the glenohumeral joint force couple and possibly to restore normal shoulder kinematics. The selection of donor tendon depends on the location of the rotator cuff deficiency.
Transfers of latissimus dorsi and pectoralis major tendons have been shown to consistently improve pain; however, functional benefits are unpredictable. Trapezius tendon transfer may be an alternative in patients with massive posterosuperior rotator cuff tears, although longer-term follow-up is required. abstract_id: PUBMED:30943087 The Effect of Tendon Delamination on Rotator Cuff Healing. Background: While patient age, tear size, and muscle fatty infiltration are factors known to affect the rate of tendon healing after rotator cuff repair, the effect of tendon delamination is less known. Purpose: To assess the effect of tendon delamination on rotator cuff healing after arthroscopic single-row (SR) repair. Study Design: Cohort study; Level of evidence, 3. Methods: Consecutive patients (N = 117) with chronic full-thickness rotator cuff tears underwent arthroscopic SR repair with a tension-band cuff repair technique. The mean ± SD age at the time of surgery was 60 ± 8 years. There were 25 small, 63 medium, and 29 large tears. Tendon delamination was assessed intraoperatively under arthroscopy with the arthroscope placed in the lateral portal. Patients were divided into 2 groups: those with nondelaminated (n = 80) and delaminated (n = 37) cuff tears. The 2 groups were comparable for age, sex, body mass index, preoperative pain, strength, and Constant-Murley score. Repair integrity was evaluated with sonography (mean, 24 months after surgery; range, 6-62 months) and classified into 3 categories: type A, indicating complete, homogeneous, and thick coverage of the footprint; type B, partial coverage with a thin tendon; and type C, no coverage of the footprint. Results: The prevalence of tendon delamination observed under arthroscopy was 32% (37 of 117), which increased with tear size and retraction: from 15% in small tears to 32% in medium tears and 45% in large tears (P = .028). Postoperatively, 83 patients had complete coverage of the footprint (type A = 71%) and the cuff was considered healed, whereas 26 had partial coverage or a thin tendon (type B = 22%) and 8 had no coverage (type C = 7%). Overall, the rate of complete healing was 78% in nondelaminated cuff tears and 57% in the case of tendon delamination (P = .029). In large retracted tears, the healing rate dropped from 81% in the absence of delamination to 39% when the tendons were delaminated (P = .027). Conclusion: Tendon delamination increases with tear size and retraction. Patients with chronic delaminated and retracted rotator cuff tears (stage 2 or 3) are at risk of failure after SR cuff repair, whereas patients with small delaminated rotator cuff tears (stage 1) involving only the supraspinatus can be treated with an SR cuff repair with a high chance of tendon healing. These results suggest that SR cuff repair may be insufficient to treat delaminated chronic cuff tears. To improve the anatomic outcomes of rotator cuff repairs, surgeons should consider treating delaminated tears with a double-row or double-layer repair. Answer: After rotator cuff repair, the thickness of the supraspinatus tendon does not appear to be related to pain. A study investigating the morphology of healing rotator cuffs post-surgical repair found that while there were changes in subacromial bursa thickness, anatomical footprint, tendon vascularity, and capsular thickness over a 6-month period, the thickness of the repaired supraspinatus tendon remained unchanged.
Importantly, there was no correlation between tendon thickness and pain, suggesting that other factors may be responsible for pain after rotator cuff repair (PUBMED:23352550). Additionally, other studies have looked at factors such as tendon delamination and the presence of partial-thickness tears in relation to pain and healing outcomes. For instance, tendon delamination has been shown to affect the rate of tendon healing after rotator cuff repair, with a higher risk of repair failure in chronic delaminated and retracted rotator cuff tears (PUBMED:30943087). However, these studies do not directly address the relationship between supraspinatus tendon thickness and pain. In summary, based on the available evidence, supraspinatus tendon thickness after rotator cuff repair does not correlate with the level of pain experienced by patients (PUBMED:23352550). Other factors, such as the integrity of the repair, tendon delamination, and the presence of other pathologies, may play a more significant role in postoperative pain and should be considered when evaluating and managing patients after rotator cuff repair.
Instruction: Does personality influence how people with asthma manage their condition? Abstracts: abstract_id: PUBMED:24690024 Does personality influence how people with asthma manage their condition? Objective: Personality traits have been found to be associated with the management of chronic disease; however, there is limited research on these relationships with respect to asthma. Asthma management and asthma control are often suboptimal, representing a barrier to patients achieving good health outcomes. This explorative study aimed to investigate the relationship between correlates of asthma management and personality traits. Methods: Participants completed a postal survey comprising validated self-report questionnaires measuring personality traits (neuroticism, extraversion, openness to experiences, agreeableness, conscientiousness), asthma medication adherence, asthma control and perceived control of asthma. Relationships between asthma management factors and personality traits were examined using correlations and regression procedures. Results: A total of 77 surveys were returned from 94 enrolled participants. Significant relationships were found between personality traits and (i) adherence to asthma medications, and (ii) perceived control of asthma. Participants who scored high on the conscientiousness dimension of personality demonstrated higher adherence to their asthma medications. Women who scored low on the agreeableness dimension of personality and high on the neuroticism dimension had significantly lower perceived confidence and ability to manage their asthma. No statistically significant associations were found between asthma control and personality traits. Conclusions: Three of the five personality traits were found to be related to asthma management. Future research into the role of personality traits and asthma management will assist in the appropriate tailoring of interventional strategies to optimize the health of patients with asthma. abstract_id: PUBMED:37075823 The associations between personality traits and mental health in people with and without asthma. Objective: The aim of the current study is to investigate the associations between personality traits and mental health in people with asthma and to compare them with people without asthma. Methods: Data came from UKHLS with 3929 patients with asthma with a mean age of 49.19 (S.D. = 15.23) years old (40.09% males) and 22,889 healthy controls (42.90% males) with a mean age of 45.60 (S.D. = 17.23) years old. First, the current study investigated the difference in Big Five personality traits and mental health between people with and without asthma using a predictive normative modeling approach with one-sample t-tests. Second, a hierarchical regression accompanied by two multiple regressions was used to determine how personality traits may relate to people with and without asthma differently. Results: The current study found asthma patients have significantly higher Neuroticism, higher Openness, lower Conscientiousness, higher Extraversion, and worse mental health. Asthma status significantly moderated the association between Neuroticism and mental health, with this relationship being stronger in people with asthma. Moreover, Neuroticism was positively related to worse mental health and Conscientiousness and Extraversion were negatively associated with worse mental health in people with and without asthma.
However, Openness was negatively associated with worse mental health in people without asthma but not in people with asthma. Limitations: The limitations of the current study include cross-sectional designs, self-reported measures, and limited generalizability to other countries. Conclusion: Clinicians and health professionals should use findings from the current study to come up with prevention and intervention programs that promote mental health based on personality traits in asthma patients. abstract_id: PUBMED:24170656 A personality and gender perspective on adherence and health-related quality of life in people with asthma and/or allergic rhinitis. Purpose: Poor adherence to medication treatment for asthma and allergic rhinitis could challenge a positive health outcome. Health-related quality of life (HRQL) is an important measure of health outcome. Both personality and gender could influence adherence and perceptions of HRQL. The purpose was to clarify the role of personality and gender in relation to adherence and HRQL in people with asthma and/or rhinitis. Data Sources: Participants (n = 180) with asthma and allergic rhinitis, selected from a population-based study, filled out questionnaires on the five-factor model personality traits (neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness), HRQL, and adherence to medication treatment. Data were statistically analyzed using t-tests, Mann-Whitney tests, bivariate correlations, and multiple regressions. Conclusions: Personality traits were associated with adherence to medication treatment in men. The influence of personality traits on HRQL also differed between men and women. These differences suggest that both a personality and gender perspective should be considered when planning care support aimed at improving adherence and HRQL in people living with asthma and/or allergic rhinitis. Implications For Practice: It is suggested that both a personality and gender perspective be taken into account in care support aimed at improving adherence and HRQL in people with asthma and allergic rhinitis. abstract_id: PUBMED:21311839 The influence of personality traits and beliefs about medicines on adherence to asthma treatment. Aim: To explore the influence of personality traits and beliefs about medicines on adherence to treatment with asthma medication. Methods: Respondents were 35 asthmatic adults prescribed controller medication. They answered questionnaires about medication adherence, personality traits, and beliefs about medicines. Results: In gender comparisons, the personality traits "Neuroticism" in men and "adherence to medication" were associated with lower adherent behaviour. Associations between personality traits and beliefs in the necessity of medication for controlling the illness were identified. Beliefs about the necessity of medication were positively associated with adherent behaviour in women. In the total sample, a positive "necessity-concern" differential predicted adherent behaviour. Conclusion: The results imply that personality and beliefs about medicines may influence how well adults with asthma adhere to treatment with asthma medication. abstract_id: PUBMED:28211555 Personality traits, level of anxiety and styles of coping with stress in people with asthma and chronic obstructive pulmonary disease - a comparative analysis.
Objectives: Chronic obstructive pulmonary disease (COPD) and asthma are a challenge to public health, with the sufferers experiencing a range of psychological factors affecting their health and behavior. The aim of the present study was to determine the level of anxiety, personality traits and stress-coping ability of patients with obstructive lung disease and to compare them with a group of healthy controls. Methods: The research was conducted on a group of 150 people with obstructive lung diseases (asthma and COPD) and healthy controls (mean age = 56.0 ± 16.00). Four surveys were used: a sociodemographic survey, NEO-FFI Personality Inventory, State-Trait Anxiety Inventory (STAI), and Brief Cope Inventory. Logistic regression was used to identify the investigated variables which best differentiated the healthy and sick individuals. Results: Patients with asthma or COPD demonstrated a significantly lower level of conscientiousness, openness to experience, active coping and planning, as well as higher levels of neuroticism and a greater tendency to behavioral disengagement. Logistic regression found trait-anxiety, openness to experience, positive reframing, acceptance, humor and behavioral disengagement to be best at distinguishing people with lung diseases from healthy individuals. Conclusions: The results indicate the need for intervention in the psychological functioning of people with obstructive diseases. abstract_id: PUBMED:6678481 Evaluation of alexithymic traits in bronchial asthma patients. Use of the Schalling-Sifneos Personality Scale. The authors investigate the alexithymia phenomenon and the possibility of quantifying it by administering the Schalling-Sifneos Personality Scale to four groups of subjects: asthmatics, psychosomatics other than bronchial asthmatics, patients afflicted with non-psychosomatic chronic illnesses and healthy subjects. The questionnaire permits a quantitative evaluation of the alexithymia phenomenon, which appears significantly more evident in asthmatics than in patients afflicted with chronic illnesses and healthy subjects. The patients afflicted with psychosomatic illnesses other than asthma attain higher scores than asthmatic patients. The authors identify certain items on the scale as being particularly associated with the alexithymia phenomenon, enough to be able to construct a partial alexithymia score, indicative in itself of the presence of alexithymic personality traits, even in the absence of particularly high total alexithymia scores. Variations in age and educational level influence scores obtained by healthy subjects and patients afflicted with chronic illnesses, while this does not occur in asthmatics and other psychosomatic patients. This seems to indicate that the higher scores of the latter are not influenced by social or statistical types of factors, but rather by illness-related factors. abstract_id: PUBMED:15500632 The influence of personality, measured by the Karolinska Scales of Personality (KSP), on symptoms among subjects in suspected sick buildings. The aim was to study possible relationships between personality traits as measured by the Karolinska Scales of Personality (KSP), a self-report personality inventory based on psychobiological theory, and medical symptoms, in subjects with previous work history in suspected sick buildings. The study comprised 195 participants from 19 consecutive cases of suspected sick buildings, initially collected in 1988-92.
In 1998-99, the KSP inventory and a symptoms questionnaire were administered in a postal follow-up study. There were 16 questions on symptoms, covering symptoms from the eyes, nose, throat, and skin, as well as headache and tiredness, and a symptom score (SC) ranging from 0 to 16 was calculated. The questionnaire also requested information on personal factors, including age, gender, smoking habits, allergy and diagnosed asthma. The KSP ratings in the study group did not differ from the mean personality scale norm scores, calculated from an external reference group. Females had higher scores for somatic anxiety (P < 0.01), muscular tension (P < 0.001), psychic anxiety (P < 0.01), psychasthenia (P < 0.05), indirect aggression (P < 0.05), and guilt (P < 0.05), while males scored higher on detachment (P < 0.001). Subjects with higher SC were found to display a higher degree of somatic anxiety (P < 0.001), muscular tension (P < 0.001), psychic anxiety (P < 0.001), psychasthenia (P < 0.001), inhibition of aggression (P < 0.05), detachment (P < 0.05), suspicion (P < 0.01), indirect aggression (P < 0.01), and verbal aggression (P < 0.05). In addition, ocular, respiratory, dermal, and systemic symptoms (headache and tiredness) were significantly related to anxiety- and aggressivity-related scales. There were associations between personality scales and change of symptom score (SC) during the 9-year period. The associations between KSP personality traits and symptoms were more pronounced in females. In conclusion, there are gender differences in personality and SBS symptoms. Personality may play a role in the occurrence of symptoms studied in indoor environmental epidemiology. Our results support a view that measurement of personality could be of value in future studies of vulnerability to environmental stress. Practical Implications: Personality and personal vulnerability should be considered in both indoor environmental epidemiology and the practical handling of buildings with suspected indoor problems, especially when the technical investigations fail to identify any obvious technical malfunction. Moreover, personality aspects should be considered among subjects with possible vulnerable personality exposed to environmental stress, and personality diagnosis can be a complementary tool useful when assessing 'sick building patients' in the medical services. We found no evidence of severe personality pathology among those working in workplaces with environmental problems, so-called 'sick buildings'. abstract_id: PUBMED:8117581 Psychological differences between asthmatics and patients suffering from an asthma-like condition, functional breathing disorder: a comparison between the two groups concerning personality, psychosocial and somatic parameters. Fifteen patients with asthma were compared with thirteen patients with asthma-like symptoms but without physiological signs of asthma. This condition is termed Functional Breathing Disorder (FBD). All patients were examined with regard to relevant physiological variables, and to specific personality traits and psychosocial status by means of psychological tests and questionnaires. The results indicated that the patients suffering from FBD were more psychologically distressed and had lower quality of life than the asthma patients. Further, they suffered from a significantly greater variety of symptoms and more intense symptoms than the asthmatics.
Such symptoms included sleeping disturbances and somatic symptoms such as chest pain, cold hands or feet, and blurred vision. The FBD patients had significantly more problems in their social and family lives, at work and in their leisure time than the asthmatics. They were significantly more depressed, less hedonic and more hypochondriac than the asthmatics. Moreover, they trusted other people to a significantly lesser degree. The patients with FBD had been hospitalized less often than the asthmatics, but they had sought medical care more often. The present study indicates that it is important to identify patients suffering from FBD at as early a stage as possible in order to offer them proper treatment. abstract_id: PUBMED:25257121 Does type D personality affect symptom control and quality of life in asthma patients? Aims And Objectives: This study aims to identify the effects of type D personality on symptom control and quality of life and to explore factors influencing quality of life among asthma patients in Korea. Background: Psychological factors such as depression and stress are well known to be related to medical outcomes and quality of life in asthma patients. People with type D personality are vulnerable to stress, show poor prognosis in disease and experience low quality of life. Design: A descriptive cross-sectional design was used. Methods: A total of 144 patients with asthma participated in this study. Data were collected through face-to-face interviews using structured questionnaires: the Type D Personality Scale-14, Asthma Control Test and Asthma-Specific Quality of Life. Results: About 33% of participants were classified into the type D personality group. The type D personality group showed statistically significantly lower symptom control and asthma-specific quality of life compared to the non-type D personality group. Based on forward stepwise multiple regression, the most significant predictor of quality of life was symptom control, followed by type D personality, hospitalisation within the previous year, and lifetime hospitalisation experiences. Conclusions: The prevalence of type D personality in asthma patients was high, and type D personality was significantly associated with poor symptom control and low quality of life. Psychosocial interventions might be beneficial to improve symptom control and quality of life in asthma patients with type D personality. Relevance To Clinical Practice: Nurses should be aware of the high prevalence of type D personality and its effects on symptom control and quality of life in asthma patients. Nurses should also provide personality-specific interventions to improve quality of life in such patients. abstract_id: PUBMED:37761757 Asthma Moderates the Association between the Big Five Personality Traits and Life Satisfaction. The current study aimed to examine whether asthma moderates the association between the Big Five personality traits and life satisfaction. By analyzing data from 3934 people with asthma (40.09% males) with a mean age of 49.2 (S.D. = 16.94) years old and 22,914 people without asthma (42.9% males) with a mean age of 45.62 (S.D. = 17.25) years old using a hierarchical regression and multiple regressions, the current study found that asthma significantly moderates the link between Neuroticism and life satisfaction and Openness and life satisfaction after controlling for other covariates.
Specifically, Neuroticism was negatively related to life satisfaction whereas Agreeableness, Openness, Conscientiousness, and Extraversion were positively associated with life satisfaction in people with and without asthma. However, the negative association between Neuroticism and life satisfaction and the positive association between Openness and life satisfaction were stronger in people with asthma compared to people without asthma. Answer: Yes, personality does influence how people with asthma manage their condition. Studies have found significant relationships between certain personality traits and aspects of asthma management. For instance, individuals who score high on the conscientiousness dimension of personality tend to demonstrate higher adherence to their asthma medications (PUBMED:24690024). Additionally, women who score low on agreeableness and high on neuroticism have significantly lower perceived confidence and ability to manage their asthma (PUBMED:24690024). Moreover, personality traits have been associated with adherence to medication treatment in men, and the influence of personality traits on health-related quality of life (HRQL) differs between men and women (PUBMED:24170656). The study also suggests that both a personality and gender perspective should be considered when planning care support aimed at improving adherence and HRQL in people living with asthma and/or allergic rhinitis (PUBMED:24170656). Furthermore, the presence of asthma has been found to moderate the association between certain personality traits and mental health outcomes. For example, the relationship between neuroticism and mental health is stronger in people with asthma, and while conscientiousness and extraversion are negatively associated with worse mental health in both people with and without asthma, openness is negatively associated with worse mental health only in people without asthma (PUBMED:37075823). In addition, patients with asthma or chronic obstructive pulmonary disease (COPD) have been shown to have lower levels of conscientiousness and openness to experience, as well as higher levels of neuroticism compared to healthy controls (PUBMED:28211555). These personality traits can affect how individuals cope with stress and manage their condition. Overall, these findings indicate that personality traits play a role in the management of asthma and can influence medication adherence, perceived control over the condition, and mental health outcomes. Clinicians and health professionals may use this knowledge to tailor prevention and intervention programs that promote mental health and effective asthma management based on individual personality traits (PUBMED:37075823).
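The moderation effect reported in the Big Five abstract above (PUBMED:37761757) amounts to testing a trait-by-asthma interaction term in a regression on life satisfaction. The sketch below shows how such a test is commonly set up; it is a minimal illustration, and the data file and column names are hypothetical, not taken from the study.

```python
# Minimal moderation test: does asthma status change the slope of
# Neuroticism on life satisfaction? All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("big_five_survey.csv")  # hypothetical data file

# Mean-center the trait so the main effects are interpretable at the mean.
df["neuroticism_c"] = df["neuroticism"] - df["neuroticism"].mean()

# The product term neuroticism_c:asthma carries the moderation effect;
# age and sex stand in for the covariates the abstract controls for.
model = smf.ols(
    "life_satisfaction ~ neuroticism_c * asthma + age + C(sex)",
    data=df,
).fit()
print(model.summary())  # a significant interaction term indicates moderation
```

A stronger negative Neuroticism slope in the asthma group would show up as a significant negative interaction coefficient.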
Instruction: Can general dentists produce successful implant overdentures with minimal training? Abstracts: abstract_id: PUBMED:16600463 Can general dentists produce successful implant overdentures with minimal training? Objectives: This study was carried out to determine whether inexperienced dentists can provide two-implant overdentures that are as satisfactory as, and of the same cost as, those provided by experienced prosthodontists. Methods: Edentulous elders were enrolled in a randomized controlled clinical trial to compare the effects of mandibular conventional and two-implant overdentures on nutrition. They were randomly assigned to groups that were treated by either an experienced prosthodontist or by a newly-graduated dentist with minimal training in implant treatment. Data for this study were obtained during the treatment of the first 140 subjects enrolled. The change in patient ratings of satisfaction after treatment, laboratory costs and the number of unscheduled visits up to 6 months following prosthesis delivery were compared. Results: Satisfaction was significantly higher with implant overdentures than with conventional dentures, but there were no differences in scores for either prosthesis between the groups treated by experienced specialists or new dentists. Furthermore, six of the seven inexperienced dentists reported that they found the mandibular two-implant overdenture easier to provide than the conventional denture. Conclusions: The results of this study suggest that general dentists can provide successful mandibular two-implant overdentures with minimal training. abstract_id: PUBMED:17589491 General dentists can provide successful implant overdentures with minimal training. Design: This randomised controlled trial (RCT) used a 2x2 factorial design. Intervention: Edentulous people of age >65 years were enrolled in the RCT, which was designed to compare the effects on nutrition of conventional and two-implant mandibular overdentures. They were randomly assigned to groups treated either by an experienced prosthodontist or by a newly-graduated dentist who had undergone minimal training in implant treatment. Outcome Measure: The change in patient ratings of satisfaction after treatment, laboratory costs, and the number of unscheduled visits up to 6 months following delivery of the prosthesis were compared. To determine the clinicians' perception of difficulty in providing the two prosthetic treatments, a short questionnaire was administered using telephone interviews. Results: Data were gathered from the first 140 patients who had been treated by either one of the three prosthodontists or by one of the eight newly graduated dentists. The prosthodontists provided a total of 28 implant overdentures and 46 conventional dentures, whereas the inexperienced dentists provided 33 prostheses of each kind. Satisfaction was significantly higher with implant overdentures than with conventional dentures, but there were no differences in scores for either prosthesis between the groups treated by experienced specialists or new dentists. The laboratory costs of fabricating implant overdentures were significantly higher than the cost of conventional dentures. There was no significant difference between the two groups of clinicians in mean laboratory costs, however, for either conventional dentures or implant overdentures. There was no significant between-group difference in the number of unscheduled visits for either prosthesis.
Furthermore, six of the seven inexperienced dentists reported that they found the mandibular two-implant overdenture easier to provide than the conventional denture. Conclusions: Inexperienced dental practitioners can provide successful mandibular two-implant overdentures for their patients with minimal training. abstract_id: PUBMED:18321260 Removable prosthodontic services, including implant-supported overdentures, provided by dentists and denturists. The aim of this study was to evaluate the provision of removable prosthodontic services, including implant-supported overdentures, by dentists and denturists. A structured questionnaire was mailed to 474 randomly chosen dentists and 156 denturists registered to practise in New Zealand. Information was sought on the range of removable prosthodontic services provided (including implant-supported overdentures) and the professional fees charged for them. From 410 respondents, there was an overall response rate of 67.43%; 290 came from the dentists (males 78.6%, n = 228; females 21.4%, n = 62) and 120 from denturists (males 91.7%, n = 110; females 8.3%, n = 10). Most respondents were over 40 years of age, with one in three denturists (but only one in seven dentists) over 60 years of age. The extent of removable prosthodontic services varied. One-third of dentists referred complete denture patients, and denturists referred a similar number of immediate denture cases. Denturists' complete denture, immediate denture and single reline prices were generally lower than those from dentists. Removable partial denture prices were similar. Implant-supported overdentures were recommended for edentulous patients by one-third of the dentists and three out of four denturists. Forty per cent of denturists (but only 10% of dentists) charged <NZ$1000 for complete dentures. (1NZ$ = US$0.75; 1NZ$ = EUR 0.56; 1NZ$ = GBP 0.38) Implant-supported overdenture fees were predominantly in the range of NZ$1500-3000 for both groups, but one in four dentists and one in six denturists charged more than NZ$3000. Although denturists and dentists both provide prosthodontic services, there is a professional fee differential between them. Denturists' lower fees provide a more economic option. Denturists are likely to steadily make further inroads into the implant-supported overdenture market. abstract_id: PUBMED:30787808 Mandibular Implant-supported Overdentures: Prosthetic Overview. Implant-supported overdentures are becoming the treatment of choice for the completely edentulous mandible. They significantly improve the quality of life in edentulous patients. For this review article, the literature was searched to identify pertinent studies. No meta-analysis was conducted because of high heterogeneity within the literature. Accordingly, in this review article, the author provides an update on implant-supported mandible overdentures with regard to the number of implants, type of loading, stress-strain distribution, mode of implant-to-denture attachment, occlusal considerations and complications. abstract_id: PUBMED:24504317 Attitudes of general dental practitioners to the maintenance of Locator retained implant overdentures. Introduction: Locator-retained implant overdentures are associated with a high incidence of prosthodontic complications. This study investigated whether general dental practitioners (GDPs) were willing to maintain these prostheses in primary dental care.
Method: A questionnaire was distributed to all GDPs referring patients for an implant assessment to the Charles Clifford Dental Hospital, Sheffield between 1 January 2012 and 30 June 2012. Results: Ninety-four out of one hundred and forty-six questionnaires were returned (response rate: 64%). Thirteen GDPs (14%) were able to identify the Locator attachment system from clinical photographs. Eighty-two GDPs (87%) would adjust the fit surface of a Locator-retained implant overdenture. Twenty-three GDPs (25%) would replace a retentive insert, 18 GDPs (19%) would tighten a loose abutment, 68 GDPs (72%) would debride abutments and 25 GDPs (27%) would remake a Locator-retained implant overdenture. Forty-seven GDPs (50%) felt that the maintenance of these prostheses was not their responsibility. The main barriers to maintenance identified by GDPs were a lack of training, knowledge and equipment. Seventy GDPs (74%) would like further training in this area. Conclusions: GDPs are not familiar with the Locator attachment system and are reluctant to maintain implant-retained overdentures. GDPs would like further training in this area. abstract_id: PUBMED:33538341 Dentists' preferences in implant maintenance and hygiene instruction. Background: This study investigated the preferences of dentists in Australia in providing professional implant maintenance and implant-specific oral hygiene instructions (OHI). Methods: General dentists were surveyed online about their preferences in peri-implant diagnostics, maintenance provision, armamentarium used, and implant OHI techniques and frequency. Results: Most of the 303 respondents (96%) provided maintenance services; 87.6% reviewed implants regularly while 10.7% only performed diagnostics after detecting clinical signs/symptoms. Supragingival prosthesis cleaning was performed by 77.9% of respondents, 35.0% performed subgingival debridement, 41.9% treated peri-implant mucositis and 18.2% treated peri-implantitis. About 15% neither treated nor referred peri-implant disease, including significantly more non-implant providers and dentists without implant training. Maintenance armamentarium commonly included floss (76.3%), prophylaxis (73.9%), plastic curettes (43.3%) and stainless-steel ultrasonics (38.0%). Brushing (86.5%), flossing (73.9%) and interdental brush use (68.3%) were most commonly recommended. Implant OHI was repeated routinely by 57.4% of the dentists who provided it. Dentists with greater implant training and experience were more likely to perform reviews and complex maintenance procedures. Conclusions: Peri-implant diagnostics performed, treatments provided and armamentarium varied among dentists. Implant providers and those with higher levels of training had more preventative approaches to implant OHI. Possible shortcomings in disease management and OHI reinforcement were identified. abstract_id: PUBMED:26800641 Interest in dental implantology and preferences for implant therapy: a survey of Victorian dentists. Background: The purpose of this study was to gauge dentists' interest, knowledge and training in implantology, and to compare their treatment preferences with the current literature. Subsequently, this information may be used to evaluate implantology education pathways. Methods: A cross-sectional postal survey of 600 randomly selected dentists registered with the Dental Practice Board of Victoria was conducted. Respondents were asked about background, interest and training in implantology, and implant treatment preferences.
Results were analysed according to primary practice location, decade of graduation and attendance at continuing professional development (CPD) programmes. Results: One hundred and seventy-six questionnaires were included for analysis. In general, dentists rate their implant knowledge, interest and enjoyment in restoring implants favourably. No differences were found between city and country practitioners, or between different graduation decades. The level of CPD significantly influenced treatment preferences. Practitioners were generally unwilling to treat patients taking bisphosphonates, or to perform grafting procedures. Most dentists provide common services to treat peri-implant conditions. Direct-to-fixture is the most popular fixture-abutment connection. Conclusions: Overall, there is a high level of implant knowledge corresponding to current evidence in the literature. Level of CPD attendance is the most important factor in dentists' willingness to provide more implant therapy options. abstract_id: PUBMED:36605147 Comparing the functional efficiency of tooth-supported overdentures and implant-supported overdentures in patients requiring oral rehabilitation: A systematic review. The aim of this article is to compare the functional efficiency of tooth-supported overdentures and implant-supported overdentures in patients requiring oral rehabilitation. The comparative quantification of the improvement in functional efficiency is very difficult to assess because of variations in study designs, such as the age of the population studied, the male-to-female ratio, the outcome measures used, the clinical setting in which the implant therapy was provided, the oral status of the subjects included and the type of implant therapy provided. In this systematic review, the articles included compared functional efficiency by assessing bite force, chewing efficiency, electromyographic (EMG) changes, and patient satisfaction for subjects who had been rehabilitated with either a tooth-supported overdenture or an implant-supported overdenture. This will help clinicians better plan treatment, keeping in mind the long-term prognosis for each particular patient. abstract_id: PUBMED:25828951 Maxillary Three-Implant Overdentures Opposing Mandibular Two-Implant Overdentures: 10-Year Surgical Outcomes of a Randomized Controlled Trial. Background: The surgical placement of four maxillary implants for overdentures may not be obligatory when opposing mandibular two-implant overdentures. Purpose: To determine 10-year surgical outcomes and implant success of three narrow diameter implants in edentulous maxillae with conventional loading. Materials And Methods: Forty participants with mandibular two-implant overdentures were randomly allocated for surgery for maxillary overdentures. Using osteotomes, three implants of similar systems were placed with a one-stage procedure and 12-week loading with splinted and unsplinted prosthodontic designs. Marginal bone and stability measurements were done at surgery, 12 weeks, and 1, 2, 5, 7, and 10 years. Results: One hundred seventeen implants were placed in 39 participants, with 35 being seen at 1 year; 29 at 2 years; 28 at 5 years; 26 at 7 years; and 23 (59%) at 10 years. Marginal bone loss was 1.35 mm between surgery and 12 weeks; 0.36 mm between 12 weeks and 1 year; 0.48 mm between 1 and 5 years; and 0.22 mm between 5 and 10 years.
Implant stability quotients were 56.05, 57.54, 60.88, 58.80, and 61.17 at surgery, 12 weeks, 1 year, 5 years, and 10 years, respectively. Four-field tables by implant showed success rates of 82% at 1 year; 69.2% at 2 years; 66.7% at 5 years; 61.5% at 7 years; and 51.3% at 10 years. Data showed no differences between surgical techniques, systems, or prosthodontic designs. Conclusions: Surgical placement with osteotomes of three narrow diameter implants for maxillary overdentures, opposing mandibular two-implant overdentures, is an acceptable approach, subject to strict patient selection. Implant success is independent of prosthodontic design. abstract_id: PUBMED:31264230 Implant education patterns and clinical practice of general dentists in Australia. Background: This study aimed to understand trends in dentists' implant training attendance and their correlation with treatment provision. Implant-specific oral hygiene instruction coverage in training programs was investigated. Methods: A cross-sectional web-survey of dentists registered in Australia was conducted. Respondents were asked about their background, implant training history and treatment provision. Results were analysed by implant provision characteristics and graduation decade. Results: Three hundred and three responses from general dental practitioners (GDPs) were received and analysed. The highest implant training levels attained post-graduation were postgraduate non-specialist qualification (7.9% of respondents) and continuing professional development (CPD) (73.6%) versus none (18.5%), with differences between implant providers and non-providers (P < 0.001), different graduation decades (P < 0.001) and those restoring implants or performing surgery as well (P < 0.001). University-based CPD was attended less than dental association/society or implant company CPD. Non-providers were significantly less likely to recall implant oral hygiene instruction sources (P < 0.001). Most GDPs (74.9%) provided implant services, with younger GDPs beginning earlier after graduation. About 16% of respondents did not provide implants once established in their careers. Conclusions: Dentists might be providing implant treatments increasingly earlier in their careers. Respondents with more training were significantly more likely to perform more complex procedures, while implant training attendance trends varied by graduation decade. Answer: Yes, general dentists can produce successful implant overdentures with minimal training. This conclusion is supported by a randomized controlled trial (RCT) that compared the effects of mandibular conventional and two-implant overdentures on nutrition in edentulous elders. The study found that patient satisfaction was significantly higher with implant overdentures than with conventional dentures, and there were no differences in satisfaction scores between those treated by experienced prosthodontists and those treated by newly-graduated dentists with minimal training in implant treatment. Additionally, the laboratory costs and the number of unscheduled visits up to 6 months following prosthesis delivery were comparable between the two groups. Six of the seven inexperienced dentists reported that they found the mandibular two-implant overdenture easier to provide than the conventional denture, suggesting that general dentists can indeed provide successful mandibular two-implant overdentures with minimal training (PUBMED:16600463, PUBMED:17589491).
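The 10-year trial above (PUBMED:25828951) reports marginal bone loss per follow-up interval rather than as a running total. A trivial sketch of accumulating those interval figures, using the values quoted in the abstract:

```python
# Accumulate the interval marginal bone loss figures from PUBMED:25828951
# into a running total (values in mm, taken from the abstract).
intervals = [
    ("surgery to 12 weeks", 1.35),
    ("12 weeks to 1 year", 0.36),
    ("1 to 5 years", 0.48),
    ("5 to 10 years", 0.22),
]

total = 0.0
for label, loss in intervals:
    total += loss
    print(f"{label}: +{loss:.2f} mm (cumulative {total:.2f} mm)")
```

Summed this way, the cumulative marginal bone loss over the full 10 years comes to about 2.41 mm, most of it occurring in the first 12 weeks after surgery.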
Instruction: Do you need to clamp a patent left internal thoracic artery-left anterior descending graft in reoperative cardiac surgery? Abstracts: abstract_id: PUBMED:19231383 Do you need to clamp a patent left internal thoracic artery-left anterior descending graft in reoperative cardiac surgery? Background: Dogma suggests that optimal myocardial protection in cardiac surgery after prior coronary artery bypass graft surgery (CABG) with a patent left internal thoracic artery (LITA) pedicle graft requires clamping the graft. However, we hypothesized that leaving a patent LITA-left anterior descending (LAD) graft unclamped would not affect mortality from reoperative cardiac surgery. Methods: Data were collected on reoperative cardiac surgery patients with prior LITA-LAD grafts from July 1995 through June 2006 at our institution. With the LITA unclamped, myocardial protection was obtained initially with antegrade cardioplegia followed by regular, retrograde cardioplegia boluses and systemic hypothermia. The Society of Thoracic Surgeons National Database definitions were employed. The primary outcome was perioperative mortality. Variables were evaluated for association with mortality by bivariate and multivariate analyses. Results: In all, 206 reoperations were identified involving patients with a patent LITA-LAD graft. Of these, 118 (57%) did not have their LITA pedicle clamped compared with 88 (43%) who did. There were 15 nonsurvivors (7%): 8 of 118 (6.8%) in the unclamped group and 7 of 88 (8.0%) in the clamped group (p = 0.750). Nonsurvivors had more renal failure (p = 0.007), congestive heart failure (p = 0.017), and longer perfusion times (p = 0.010). When controlling for independently associated variables for mortality, namely, perfusion time (odds ratio 1.014 per minute; 95% confidence interval: 1.004 to 1.023; p = 0.004) and renal failure (odds ratio 4.146; 95% confidence interval: 1.280 to 13.427; p = 0.018), an unclamped LITA did not result in any increased mortality (odds ratio 1.370; 95% confidence interval: 0.448 to 4.191). Importantly, the process of dissecting out the LITA resulted in 7 graft injuries, 2 of which significantly altered the operation. Conclusions: In cardiac surgery after CABG, leaving the LITA graft unclamped did not change mortality but may reduce the risk of patent graft injury, which may alter an operation. abstract_id: PUBMED:22917686 The "no-dissection" technique is safe for reoperative aortic valve replacement with a patent left internal thoracic artery graft. Objective: Management of a patent left internal thoracic artery graft during reoperation is controversial. The "no-dissection" technique avoids dissection and clamping of the left internal thoracic artery graft, and myocardial protection is achieved using adjunctive systemic hypothermia and hyperkalemia. We compared the postoperative outcomes after isolated reoperative aortic valve replacement in patients with previous coronary artery bypass grafting with a patent left internal thoracic artery graft using a no-dissection technique with the outcomes of patients with previous coronary artery bypass grafting without a left internal thoracic artery graft. Methods: The outcomes were analyzed for patients who underwent isolated reoperative aortic valve replacement with previous coronary artery bypass grafting from January 1, 2002, to June 30, 2011. Patency of the left internal thoracic artery was confirmed using either coronary angiography or computed tomography angiography.
The patent left internal thoracic artery group did not undergo dissection or clamping of the left internal thoracic artery graft, and myocardial protection was obtained using systemic hypothermia and hyperkalemia. The no left internal thoracic artery group underwent isolated aortic valve replacement with previous coronary artery bypass grafting but had no left internal thoracic artery graft. Results: A total of 174 patients were identified for the patent left internal thoracic artery group and 26 for the no left internal thoracic artery group. The perfusion and crossclamp times were similar. No differences were seen between the 2 groups in operative mortality (6.9% vs 7.7%, P = 1.00). The complication rates were similar, and the peak creatine kinase-MB values within 24 hours of surgery were not significantly different between the 2 groups (median, 27.4 vs 29 μ/mL; P = .72). Conclusions: Reoperative aortic valve replacement in patients with previous coronary artery bypass grafting and a patent left internal thoracic artery graft can be performed safely without dissection or clamping of the left internal thoracic artery using systemic hyperkalemia and hypothermia. We believe this method prevents unnecessary injury during dissection of the left internal thoracic artery graft. abstract_id: PUBMED:26907619 Management of a Left Internal Thoracic Artery Graft Injury during Left Thoracotomy for Thoracic Surgery. There have been some recent reports on the surgical treatment of lung cancer in patients following previous coronary artery bypass graft surgery. Use of an internal thoracic artery graft is a gold standard in cardiac surgery with superior long-term patency. The left internal thoracic artery graft is usually patent during left lung resection in patients who present to the surgeon with an operable lung cancer. We have presented our institutional experience with left-sided thoracic surgery in patients who have had previous coronary artery surgery with a patent internal thoracic artery graft. abstract_id: PUBMED:11042485 Off-pump coronary artery bypass grafting for the circumflex coronary artery via the left thoracotomy in redo CABG with the patent left internal thoracic artery graft to the left anterior descending artery. Five patients had undergone off-pump coronary artery bypass grafting (CABG) as redo CABG via the left thoracotomy for lesions of the left circumflex coronary artery (LCX). In all patients, the internal thoracic artery (ITA) grafts to the LAD were widely patent and played a significantly important role in the coronary circulation; however, ischemia due to the LCX lesion was significant. Saphenous vein grafts or radial artery grafts were used as conduits. The proximal ends of these grafts were anastomosed to the descending aorta. The procedures were completed successfully in all patients, and excellent patency was shown angiographically even in the long term after surgery. The need for graft surgery only for an LCX lesion would be a rare occasion for a surgeon; however, these results suggest that the procedure is simple and less risky, which should encourage surgeons to perform it in clinical situations. abstract_id: PUBMED:19436806 "Pulmonary slit" procedure for preventing tension on the left internal thoracic artery graft. The gold-standard bypass graft to the left anterior descending coronary artery is the left internal thoracic artery harvested with its pedicle.
At times, however, the length of the internal thoracic artery is insufficient for distal anastomosis. Different methods of lengthening the internal thoracic artery or of reducing the distance to the anastomosis site have been described, but at times even these may be inadequate. In order to extend the benefits of the left internal thoracic artery graft to more patients, we perform the "pulmonary slit" procedure as described here. abstract_id: PUBMED:21057442 Safe approach for redo coronary artery bypass grafting--preventing injury to the patent graft to the left anterior descending artery. Objective: In redo coronary artery bypass grafting (CABG), repeat median sternotomy is a routine approach when the graft to the left anterior descending artery (LAD) is occluded. However, it is important to avoid injury to a patent graft to the LAD during repeat sternotomy. We retrospectively reviewed our cases to assess our combined strategy for a safer redo CABG. Methods: The study group comprised 19 patients (18 men and 1 woman; mean age 67.7 ± 6.9 years) who underwent redo CABG operations from January 2000 to August 2008. All patients had undergone median sternotomy during previous surgery (13 ± 6 years before repeat CABG). Eighteen patients had previous graft occlusion, and 6 had developed new coronary artery disease. Five patients had a patent left internal thoracic artery (LITA) and 8 had a patent saphenous vein graft (SVG). We attempted to avoid median sternotomy when patients had a patent graft to the LAD. Results: Median sternotomy (on-pump cardiac arrest) was performed on 13 patients with an occluded graft to the LAD. For those with a patent graft to the LAD, left thoracotomy (on-pump beating heart) was used in 4 patients, and 2 patients underwent off-pump CABG via the subxiphoid approach. The mean number of bypass grafts was 2.6 ± 1.2. Internal thoracic arteries, radial arteries, saphenous vein grafts, and gastroepiploic arteries were all selected as conduits. The ascending aorta, descending aorta, and previous SVG graft were used as the proximal anastomosis site. There was no graft injury, and 1 patient died as a result of ventricular tachycardia. Conclusion: Conduits and the proximal anastomosis site should be selected according to the circumstances. For redo CABG patients who have a patent graft to the LAD, it is important to choose the optimal approach to avoid injury to the previous patent graft. abstract_id: PUBMED:35221559 Minimally invasive direct coronary artery bypass to the left anterior descending artery using right gastroepiploic artery graft for a redo case with poor conduits. A 64-year-old Thai woman had undergone coronary artery bypass grafting (CABG) using saphenous vein grafts (SVG) for a completely occluded left anterior descending artery (LAD) and mitral valve replacement with a mechanical valve about a year earlier. She presented with unstable angina. Three-dimensional computed tomography angiography (3DCTA) showed occlusion of all the grafts. The left subclavian artery had 99% stenosis. The patient underwent redo CABG via a minimally invasive direct approach. The chest was entered through the left fifth intercostal space. The right gastroepiploic artery (RGEA) and a small length of SVG were harvested. The RGEA was extended using the SVG with an end-to-end anastomosis and used to graft the LAD without cardiopulmonary bypass. The patient's postoperative course was uneventful. Postoperative 3DCTA revealed a patent RGEA-SVG graft.
Minimally invasive direct coronary artery bypass to the LAD with the RGEA is a useful alternative approach for redo CABG in patients with limited conduit options. abstract_id: PUBMED:25140469 Comparative analysis of the patency of the internal thoracic artery in the CABG of left anterior descending artery: 6-month postoperative coronary CT angiography evaluation. Objective: To assess the patency of the pedicled right internal thoracic artery with an anteroaortic course and compare it to the patency of the left internal thoracic artery, in anastomosis to the left anterior descending artery in coronary artery bypass grafting, by using coronary CT angiography at 6 months postoperatively. Methods: Between December 2008 and December 2011, 100 patients were selected to undergo a prospective coronary artery bypass grafting procedure without cardiopulmonary bypass. The patients were randomly divided by a computer-generated list into Group-1 (G-1) and Group-2 (G-2), comprising 50 patients each; the technique to be used was known at the beginning of the surgery. In G-1, coronary artery bypass grafting was performed using the left internal thoracic artery for the left anterior descending and the free right internal thoracic artery for the circumflex, and in G-2, coronary artery bypass grafting was performed using the right internal thoracic artery pedicled to the left anterior descending and the left internal thoracic artery pedicled to the circumflex territory. Results: The groups were similar with regard to the preoperative clinical data. A male predominance of 75.6% and 88% was observed in G-1 and G-2, respectively. Five patients migrated from G-1 to G-2 because of atheromatous disease in the ascending aorta. The average number of distal anastomoses was 3.48 (SD=0.72) in G-1 and 3.20 (SD=0.76) in G-2. Coronary CT angiography in 96 re-evaluated patients showed that all ITAs, right or left, used in situ for the left anterior descending were patent. There were no deaths in either group. Conclusion: Coronary artery bypass grafting surgery involving anastomosis of the anteroaortic right internal thoracic artery to the left anterior descending artery has an outcome similar to that obtained using the left internal thoracic artery for the same coronary site. abstract_id: PUBMED:9609014 Assessment of left internal thoracic artery anastomosis with left anterior descending coronary artery by Doppler echocardiography Purpose: To study the value of Doppler echocardiography as a tool for the evaluation of left internal thoracic artery graft (LITAG) patency in patients who underwent coronary revascularization using minimally invasive bypass surgery without extracorporeal circulation. Methods: The first 12 consecutive patients were studied after coronary artery bypass surgery using a 5 MHz Doppler transducer. Doppler signals for the systolic and diastolic flow velocities were preferably obtained in the second intercostal space. All patients underwent coronary angiography while hospitalized. Results: The exam was feasible in 93% of patients. The Doppler flow pattern was predominantly diastolic (pattern A) in patients with patent anastomoses (6/7). In patients with occluded anastomoses (4/4), the Doppler flow pattern was predominantly systolic (pattern B) (p = 0.003). Conclusion: Assessment of the internal thoracic artery flow pattern by Doppler echocardiography after minimally invasive coronary artery bypass graft surgery is an accurate method for identifying LITAG patency.
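The Doppler study above (PUBMED:9609014) separates patent grafts (diastolic-dominant flow, pattern A) from occluded ones (systolic-dominant flow, pattern B). A toy sketch of that decision rule follows; the velocity values and the ratio threshold are illustrative assumptions, not figures from the paper.

```python
# Toy classifier for the pattern A / pattern B distinction in PUBMED:9609014:
# a patent LITA graft shows diastolic-dominant flow. The threshold of 1.0
# and the sample velocities are illustrative assumptions only.
def classify_flow(peak_systolic_cm_s: float, peak_diastolic_cm_s: float) -> str:
    ratio = peak_diastolic_cm_s / peak_systolic_cm_s
    if ratio > 1.0:  # diastolic-dominant
        return "pattern A (suggests patent anastomosis)"
    return "pattern B (suggests occluded anastomosis)"

print(classify_flow(peak_systolic_cm_s=30.0, peak_diastolic_cm_s=45.0))
print(classify_flow(peak_systolic_cm_s=50.0, peak_diastolic_cm_s=20.0))
```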
abstract_id: PUBMED:33137825 Computational fluid dynamics of internal mammary artery-left anterior descending artery anastomoses. Objectives: The aim of this study was to elucidate the remodelling of the internal mammary artery (IMA)-left anterior descending artery anastomosis and compare 2 different anastomosis techniques (end-to-side versus side-to-side) using computational fluid dynamics. Methods: This study included 9 patients. Computed tomography (CT) angiography was performed immediately after coronary artery bypass grafting (CABG) and again 3-6 months later. The computational fluid dynamics models were made using the CT data. The pulsatile 3-dimensional blood flow was simulated with the finite volume method to evaluate the postoperative morphological and haemodynamic changes at the anastomosis in each patient. Flow velocity distribution, wall shear stress (WSS) and its fluctuation, the oscillatory shear index, were measured. Results: No early or mid-term graft occlusion was observed in the study series. In the side-to-side anastomosis, pouch formation at the distal end of the IMA caused a vortex flow with low WSS immediately after CABG. However, at 3-6 months after surgery, this pouch had disappeared. As a result, a laminar straight flow with uniform WSS distribution was achieved inside the anastomosis. In the end-to-side anastomosis, the anastomosis shape was remodelled, resulting in a laminar flow pattern with uniform WSS distribution. A patchy high oscillatory shear index was detected at the IMA wall at the top of the anastomosis with either anastomosis technique immediately after the surgery, but it had disappeared at 3-6 months after surgery. Conclusions: Regardless of the anastomosis technique used, a successful remodelling of the IMA-left anterior descending artery anastomosis shape was achieved a few months after surgery, resulting in a straightforward flow streamline, with uniform WSS distribution and minimal oscillatory shear index. Answer: Based on the information provided in the abstracts, it is not necessary to clamp a patent left internal thoracic artery (LITA) to left anterior descending (LAD) graft in reoperative cardiac surgery. A study suggested that leaving the LITA graft unclamped did not change mortality rates but may reduce the risk of patent graft injury, which could alter the operation (PUBMED:19231383). Additionally, the "no-dissection" technique, which avoids dissection and clamping of the LITA graft, was found to be safe for reoperative aortic valve replacement, with outcomes comparable to patients without a LITA graft (PUBMED:22917686). These findings indicate that reoperative cardiac surgery can be performed safely without clamping the LITA graft, using systemic hyperkalemia and hypothermia for myocardial protection, thereby preventing unnecessary injury during dissection of the LITA graft.
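The oscillatory shear index used in the CFD abstract above is conventionally defined as OSI = 0.5 * (1 - |∫τ dt| / ∫|τ| dt), where τ is the wall shear stress over one cardiac cycle; 0 means purely unidirectional flow and 0.5 fully oscillatory flow. Below is a numpy sketch of that standard formula on a scalar, synthetic WSS trace (real CFD post-processing would use the WSS vector field, so this is a simplification):

```python
# Oscillatory shear index (OSI) over one cardiac cycle:
# OSI = 0.5 * (1 - |time-average(WSS)| / time-average(|WSS|)).
# The WSS trace below is synthetic and scalar, for illustration only.
import numpy as np

t = np.linspace(0.0, 1.0, 200)            # one cycle, normalized time
wss = 1.0 + 2.0 * np.sin(2 * np.pi * t)   # pulsatile WSS with flow reversal

osi = 0.5 * (1.0 - abs(np.trapz(wss, t)) / np.trapz(np.abs(wss), t))
tawss = np.trapz(np.abs(wss), t)          # time-averaged WSS magnitude
print(f"TAWSS = {tawss:.3f} (arbitrary units), OSI = {osi:.3f}")
```

A high OSI at the anastomosis, as detected immediately after surgery in the study, marks wall regions where the shear direction keeps reversing; its disappearance at 3-6 months is consistent with the remodelling toward laminar flow.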
Instruction: Are occupational stress levels predictive of ambulatory blood pressure in British GPs? Abstracts: abstract_id: PUBMED:11145636 Are occupational stress levels predictive of ambulatory blood pressure in British GPs? An exploratory study. Background: Occupational stress has been implicated as an independent risk factor in the aetiology of coronary heart disease and increased hypertensive risk in a number of occupations. Despite the large number of studies into GP stress, none have employed an objective physiological stress correlate. Objectives: We conducted an exploratory study to investigate whether self-reported occupational stress levels as measured by the General Practitioner Stress Index (GPSI) were predictive of ambulatory blood pressure (ABP), measured using a Spacelabs 90207, in a sample of British GPs. Method: Twenty-seven GPs (17 males, 10 females) participated in the study. Each GP wore an ABP monitor on a normal workday and a non-workday. All GPs completed the GPSI before returning the ABP monitors. Demographic data were also collected. Results: Stress associated with 'interpersonal and organizational change' emerged from the stepwise multiple regression analysis as the only significant predictor of ABP, explaining 21% of the variance in workday systolic blood pressure, 26% during the workday evening and 19% during the non-workday. For diastolic blood pressure, the same variable explained 29% of the variability during the workday and 17% during the non-workday. No significant gender differences were found on any of the ABP measures. Conclusions: For the first time in GP stress research, our findings established that higher levels of self-reported occupational stress are predictive of greater ABP in British GPs. More detailed psychophysiological research and stress management interventions are required to isolate the effects of occupational stress in British GPs. abstract_id: PUBMED:21141126 Effect of occupational stress on ambulatory blood pressure. Objective: To explore the effect of occupational stress on ambulatory blood pressure. Methods: Thirty healthy male workers from a refrigerator assembly line in Henan province, China, were investigated. Psychosocial work conditions were measured by using the Job Demand-Control Model and Effort-Reward Imbalance Model questionnaires and the Occupational Stress Measurement Scale. Ambulatory blood pressure (ABP) was measured by using a mobile ABP monitor. The t test was used to analyze differences in ABP monitoring parameters between groups with high and low scores on occupational stress and other variables. Stepwise regression analysis was used to analyze the effect of occupational stress factors on ABP parameters. Results: (1) As to stressors, systolic blood pressure variability (SBPV), mean arterial blood pressure variability (MABPV) and heart rate at 30 minutes after work in workers with a high role conflict score were significantly higher than those in workers with a low score (P < 0.05). Workers with a high skill utilization score had a significantly lower mean systolic blood pressure (SBP) at 30 minutes after work than workers with a low score (P < 0.05). Diastolic blood pressure variability (DBPV) and heart rate variability (HRV) in workers with a high decision latitude score were significantly higher than those in workers with a low score (P < 0.05). Workers with a high job psychological demands score had significantly higher SBPV, DBPV and MABPV than workers with a low score (P < 0.05).
Heart rate-pressure product (RPP) and SBPV in workers with a high effort score were significantly higher than those in workers with a low score (P < 0.05). Workers with a low rewards score had a higher mean heart rate and heart rate at 30 minutes after work than workers with a high score (P < 0.05). (2) For personalities, workers with a high work locus of control score had significantly higher mean diastolic blood pressure (DBP) and mean arterial blood pressure (MABP) than workers with a low score (P < 0.05). Workers with a high patience score had a significantly lower mean SBP at 30 minutes after work than workers with a low score (P < 0.05). Heart rate at 30 minutes after work in workers with a high organization commitment score was significantly lower than that in workers with a low score (P < 0.05). (3) Concerning buffer factors, HRV in workers with a high control strategies score was significantly lower than that in workers with a low score (P < 0.05). Workers with a low supervisor support score had a higher RPP and MABPV than workers with a high score (P < 0.05). (4) In the multiple stepwise regression, daily life stress affected SBPV (R2 = 0.12) and MABPV (R2 = 0.05), depression was related to DBPV at 30 minutes after work (R2 = 0.15) and SBPV (R2 = 0.03), mental health was a predictor of MABPV (R2 = 0.07), and negative affect was a predictor of heart rate at 30 minutes after work (R2 = 0.24). Conclusions: Occupational stressors, personality and social support have effects on ABP parameters. ABP monitoring parameters could be used to evaluate occupational stress in field research. abstract_id: PUBMED:32903104 Two-Year Responses of Office and Ambulatory Blood Pressure to First Occupational Lead Exposure. Lead exposure causing hypertension is the mechanism commonly assumed to trigger premature death and cardiovascular complications. However, at current exposure levels in the developed world, the link between hypertension and lead remains unproven. In the Study for Promotion of Health in Recycling Lead (URL: https://www.clinicaltrials.gov; Unique identifier: NCT02243904), we recorded the 2-year responses of office blood pressure (average of 5 consecutive readings) and 24-hour ambulatory blood pressure to first occupational lead exposure in workers newly employed at lead recycling plants. Blood lead (BL) was measured by inductively coupled plasma mass spectrometry (detection limit 0.5 µg/dL). Hypertension was defined according to the 2017 American College of Cardiology/American Heart Association guideline. Statistical methods included multivariable-adjusted mixed models with participants modeled as a random effect and interval-censored Cox regression. Office blood pressure was measured in 267 participants (11.6% women; mean age at enrollment, 28.6 years) and ambulatory blood pressure in 137 at 2 follow-up visits. Geometric means were 4.09 µg/dL for baseline BL and 3.30 for the last-follow-up-to-baseline BL ratio. Fully adjusted changes in systolic/diastolic blood pressure associated with a doubling of the BL ratio were 0.36/0.28 mm Hg (95% CI, -0.55 to 1.27/-0.48 to 1.04 mm Hg) for office blood pressure and -0.18/0.11 mm Hg (-2.09 to 1.74/-1.05 to 1.27 mm Hg) for 24-hour ambulatory blood pressure. The adjusted hazard ratios of moving up across hypertension categories for a doubling in BL were 1.13 (0.93-1.38) and 0.84 (0.57-1.22) for office blood pressure and ambulatory blood pressure, respectively.
In conclusion, the 2-year blood pressure responses and incident hypertension were not associated with the BL increase on first occupational exposure. abstract_id: PUBMED:1639463 Job strain and ambulatory work blood pressure in healthy young men and women. The effect of high job strain (defined as high psychological demands plus low decision latitude at work) on blood pressure was determined in 129 healthy, nonhypertensive men (n = 65) and women (n = 64). Blood pressure measures included mean screening levels obtained in a clinical environment, mean ambulatory levels from one 8-hour workday, and the change in levels from screening to mean work levels. In male workers, men with high and low job strain showed similar blood pressures at screening, but men with high job strain showed greater increases from screening to work, resulting in higher mean work blood pressure. Occupational status was unrelated to job strain or blood pressure in men. In female workers, women with high and low job strain did not differ in any measure of blood pressure; however, there were trends for higher occupational status and greater skill discretion to be associated with higher blood pressure responses at work in women. abstract_id: PUBMED:9401623 The effects of occupational stress on blood pressure in men and women. Human hypertension is the end result of a number of genetic and environmental influences, and typically develops gradually over many years. The sympathetic nervous system appears to play a role in the early stages, with structural changes in the resistance vessels becoming dominant later on. The extent to which increased sympathetic actively may be the result of environmental stress is uncertain. Animal studies have suggested that chronic stress can raise blood pressure. Human epidemiological studies have shown that the prevalence of hypertension is strongly dependent on social and cultural factors. Blood pressure tends to be highest at work, and studies using ambulatory monitoring have shown that occupational stress, measured as job strain, can raise blood pressure in men, but not women. This may be associated with increased left ventricular mass. The diurnal blood pressure pattern in men with high strain jobs shows a persistent elevation throughout the day and night, which is consistent with the hypothesis that job strain is a causal factor in the development of human hypertension. abstract_id: PUBMED:12040240 Characteristics of conventional blood pressure in studies on the predictive power of ambulatory blood pressure. Background: It is commonly believed that the associations of left ventricular mass and cardiovascular morbidity/mortality with blood pressure are stronger for 24-h ambulatory pressure than for conventional clinic or casual pressure. Methods: The investigation comprised a review of relevant studies, with particular emphasis on the characteristics of the conventional blood pressure measurement. Results: A review of 21 studies on left ventricular mass, published between 1982 and 1993, showed that the relationship between mass and blood pressure was stronger for ambulatory blood pressure than for clinic blood pressure but that the methodology and conditions of the conventional blood pressure measurements were poorly described or standardized in several reports. Between 1983 and 2001, seven studies showed that ambulatory blood pressure was superior to conventional blood pressure with regard to the prediction of cardiovascular morbidity and/or mortality. 
From published data and requests for additional information, it appears that recommendations for the measurement of conventional blood pressure have been reasonably well observed, although the number of measurements has not always been adequate. Conclusions: Whereas the quality of the conventional blood pressure measurements left much to be desired in the studies on left ventricular mass, the quality appeared to be reasonably good in outcome studies, even though the published details were often incomplete. abstract_id: PUBMED:9560874 Endogenous opioids inhibit ambulatory blood pressure during naturally occurring stress. Objective: Laboratory experiments suggest that endogenous opioids inhibit blood pressure responses during psychological stress. Moreover, there seem to be considerable individual differences in the efficacy of opioid blood pressure inhibition, and these differences may be involved in the expression of risk for cardiovascular disease. To further evaluate the possible role of opioid mechanisms in cardiovascular control, the present study sought to document the effects of the long-lasting oral opioid antagonist naltrexone (ReVia, DuPont, Wilmington, DE) on ambulatory blood pressure responses during naturally occurring stress. Method: Thirty male volunteers participated in a laboratory stress study using naltrexone, followed by ambulatory blood pressure monitoring under placebo and during the subsequent 24-hour period. Within-subject analyses were performed on ambulatory blood pressures under placebo and naltrexone conditions. Results: Laboratory results indicate no significant group effects of naltrexone on blood pressure levels or reactivity. Ambulatory results indicate that during periods of low self-reported stress, no effect of opioid blockade was apparent. In contrast, during periods of high stress, opioid blockade increased ambulatory blood pressure. Conclusions: These findings suggest that naltrexone-sensitive opioid mechanisms inhibit ambulatory blood pressure responses during naturally occurring stress. abstract_id: PUBMED:1788530 Job strain and ambulatory blood pressure profiles. Occupational characteristics were used to study the role of job stress in the pathogenesis of hypertension. Ambulatory 24-h recordings of blood pressure were made for 161 men with borderline hypertension. From the occupational classification system, scores for psychological demands, control, support, physical demands, and occupational hazards were obtained. The results indicated that the ratio between psychological demands and control (strain) was significantly associated with diastolic (but not systolic) blood pressure at night and during work. The association between job strain and diastolic blood pressure at night and during work was greatly strengthened when the subjects with occupations classified as physically demanding were excluded from the analysis. The conclusion was reached that a measure of job strain derived from the occupational classification is useful in predicting variations in diastolic blood pressure levels during sleep and work for men with borderline hypertension. abstract_id: PUBMED:17426519 High job strain and ambulatory blood pressure in middle-aged men and women from the Belgian job stress study. Objective: The aim of this study was to assess whether job strain is associated with 24-hour ambulatory blood pressure measurements within a subsample of the Belgian Job Stress Project (BELSTRESS) population.
Methods: A group of 89 middle-aged male and female workers perceiving high job strain and an equally large group of workers perceiving no high job strain wore an ambulatory blood pressure monitor for 24 hours on a regular working day. Results: Mean ambulatory blood pressure at work, at home, and while asleep was significantly higher in workers with job strain as compared with others. The associations between job strain and ambulatory blood pressure were independent of the covariates. Conclusions: Within this study, high job strain was an important independent risk factor for higher ambulatory blood pressure at work, at home, and during sleep in a group of men and women. abstract_id: PUBMED:11744135 Arterial blood pressure and blood lipids as cardiovascular risk factors and occupational stress in Taiwan. Background: This study aimed to determine whether occupational stress (defined as high psychological demands and low decision latitude on the job) is associated with increased blood pressure and abnormal levels of blood lipids as cardiovascular risk factors. Methods: We conducted a cross-sectional study at three work sites of 526 white-collar male workers aged 20 to 66 years without evidence of cardiovascular disease. Systolic and diastolic blood pressure, serum total and high-density lipoprotein cholesterol, and plasma triglyceride were measured. An occupational stress index was derived from data collected in the job strain questionnaire. Results: In multiple linear regression models, the occupational stress index was significantly related to diastolic blood pressure and plasma triglyceride, after adjusting for age, education, smoking, and alcohol consumption. A higher occupational stress index was directly associated with higher systolic and diastolic blood pressure and a higher level of plasma triglyceride. Conclusions: These data from a white-collar working population confirm the independent relations between occupational stress defined in the job demand-control model and diastolic blood pressure observed in predominantly Western populations, and extend the range of associations to plasma triglyceride beyond that reported in previous studies. Answer: Yes, occupational stress levels are predictive of ambulatory blood pressure in British GPs. An exploratory study conducted to investigate this relationship found that higher levels of self-reported occupational stress, as measured by the General Practitioner Stress Index (GPSI), were predictive of greater ambulatory blood pressure (ABP) in British GPs. Specifically, stress associated with 'interpersonal and organizational change' was the only significant predictor of ABP, explaining a significant portion of the variance in workday systolic blood pressure, workday evening systolic blood pressure, and non-workday systolic blood pressure. It also explained a significant portion of the variance in diastolic blood pressure during the workday and non-workday. No significant gender differences were found in any of the ABP measures (PUBMED:11145636).
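Several of the ambulatory monitoring abstracts above report derived summary metrics: blood pressure variability (commonly the standard deviation of readings), mean arterial pressure, and the rate-pressure product (heart rate x systolic pressure). A small sketch of how such summaries could be computed from raw ABP readings; the sample values are invented and the formulas are the textbook ones, not necessarily those used by the individual studies.

```python
# Common derived ambulatory blood pressure metrics from a series of
# (systolic, diastolic, heart-rate) readings. All readings are invented.
import statistics

readings = [  # (SBP mmHg, DBP mmHg, HR bpm)
    (128, 82, 74), (135, 86, 80), (122, 79, 71), (141, 90, 85), (130, 84, 77),
]

sbp = [r[0] for r in readings]
dbp = [r[1] for r in readings]
hr = [r[2] for r in readings]

sbpv = statistics.stdev(sbp)                              # SBP variability (SD)
map_values = [d + (s - d) / 3 for s, d in zip(sbp, dbp)]  # mean arterial pressure
rpp = [s * h for s, h in zip(sbp, hr)]                    # rate-pressure product

print(f"mean SBP = {statistics.mean(sbp):.1f} mmHg, SBPV = {sbpv:.1f} mmHg")
print(f"mean MAP = {statistics.mean(map_values):.1f} mmHg")
print(f"mean RPP = {statistics.mean(rpp):.0f} mmHg*bpm")
```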
Instruction: Does resveratrol prevent free radical-induced acute pancreatitis? Abstracts: abstract_id: PUBMED:15968246 Does resveratrol prevent free radical-induced acute pancreatitis? Objective: The purpose of this study was to examine the protective and antioxidative effects of the stilbene derivatives resveratrol and diethylstilbestrol in experimental acute pancreatitis (EAP). Methods: EAP was induced in male Wistar rats by retrograde injection of tert-butyl hydroperoxide (ButOOH) solution, a well-known prooxidant agent, into the common bile pancreatic duct. After a 3-hour observation, the animals were killed. Blood samples were collected. Each pancreas was removed and weighed. Tissue samples were taken for microscopic studies. The carbonyl and sulfhydryl (SH) group levels were estimated in the homogenate. Results: Examination using light microscopy revealed morphologic changes in pancreata removed from EAP rats, namely focal edema, acinar cell vacuolization, and focal necrosis of pancreatic acini. The electron microscopic analysis also showed changes in their subcellular structures: dilated cisternae of the rough endoplasmic reticulum, swollen mitochondria, and "debris" of mitochondrial cristae. These changes corresponded with higher serum amylase activity and tissue carbonyl group levels and a decreased SH group level compared with controls. Changes in pancreata were much less pronounced in the rats that received resveratrol or diethylstilbestrol for 8 days prior to ButOOH injection. Conclusion: Stilbene derivatives protect pancreatic cells from structural changes during ButOOH-induced acute pancreatitis. abstract_id: PUBMED:16440434 Effect of resveratrol on pancreatic oxygen free radicals in rats with severe acute pancreatitis. Aim: To investigate the therapeutic effects of resveratrol (RESV) as a free radical scavenger on experimental severe acute pancreatitis (SAP). Methods: Seventy-two male Sprague-Dawley rats were divided randomly into a sham operation group, an SAP group, and a resveratrol-treated group. Pancreatitis was induced by intraductal administration of 0.1 mL/kg 4% sodium taurocholate. RESV was given intravenously at a dose of 20 mg/kg body weight. All animals were killed at 3, 6, and 12 h after induction of the model. Serum amylase, pancreatic superoxide dismutase (SOD), malondialdehyde (MDA), and myeloperoxidase (MPO) were determined. Pathologic changes of the pancreas were observed under an optical microscope. Results: The serum amylase, pancreatic MPO and the score of pathologic damage increased after the induction of pancreatitis; early (3 and 6 h) SAP samples were characterized by decreased pancreatic SOD and increased pancreatic MDA. Resveratrol exhibited a protective effect against lipid peroxidation in cell membranes caused by oxygen free radicals in the early stage of SAP. This attenuation of the redox state impairment reduced cellular oxidative damage, as reflected by lower serum amylase, less severe pancreatic lesions, normal pancreatic MDA levels, as well as diminished neutrophil infiltration in the pancreas. Conclusion: RESV may exert its therapeutic effect on SAP by lowering pancreatic oxidative free radicals and reducing pancreatic tissue infiltration of neutrophils. abstract_id: PUBMED:16499907 Beneficial effect of resveratrol on cholecystokinin-induced experimental pancreatitis. Resveratrol is a phytoalexin with strong antioxidant and anti-inflammatory effects that reaches high concentrations in red wine.
The aim of our study was to test the effects of resveratrol pretreatment on cholecystokinin-octapeptide (CCK-8)-induced acute pancreatitis in rats. Animals were divided into a control group, a group treated with CCK-8 and a group receiving 10 mg/kg resveratrol prior to CCK-8 administration. Resveratrol ameliorated the CCK-8-induced changes in the laboratory parameters, and reduced the histological damage in the pancreas. The drug failed to improve the pancreatic antioxidant state, but increased the amount of hepatic reduced glutathione and prevented the reduction of hepatic catalase activity. Resveratrol-induced inhibition of nuclear factor kappa B (NF-kappaB) activation or reduction of the pancreatic tumor necrosis factor-alpha (TNF-alpha) concentration could not be demonstrated. In conclusion, the beneficial effects of resveratrol on acute pancreatitis seem to be mediated by the antioxidant effect of resveratrol or by an NF-kappaB-independent anti-inflammatory mechanism. abstract_id: PUBMED:24234420 Chemopreventive effects of resveratrol in a rat model of cerulein-induced acute pancreatitis. In the past decades, a greater understanding of acute pancreatitis has led to improvement in mortality rates. Nevertheless, this disease continues to be a health care system problem due to its economic costs. Future strategies such as antioxidant supplementation could be very promising with regard to the onset and progression of the disease. For this reason, this study was aimed at assessing the effect of exogenous administration of resveratrol during the induction process of acute pancreatitis caused by the cholecystokinin analog cerulein in rats. Resveratrol pretreatment reduced histological damage induced by cerulein treatment, as well as hyperamylasemia and hyperlipidemia. Altered levels of corticosterone, total antioxidant status, and glutathione peroxidase were significantly reverted to control levels by the administration of resveratrol. Lipid peroxidation was also counteracted; nevertheless, the superoxide dismutase enzyme was overexpressed due to resveratrol pretreatment. Regarding the immune response, resveratrol pretreatment reduced pro-inflammatory cytokine IL-1β levels and increased anti-inflammatory cytokine IL-10 levels. In addition, pretreatment with resveratrol in cerulein-induced pancreatitis rats was able to reverse, at least partially, the abnormal calcium signal induced by treatment with cerulein. In conclusion, this study confirms the antioxidant and immunomodulatory properties of resveratrol as a chemopreventive agent in cerulein-induced acute pancreatitis. abstract_id: PUBMED:16273606 Ischemic preconditioning inhibits development of edematous cerulein-induced pancreatitis: involvement of cyclooxygenases and heat shock protein 70. Aim: To determine whether ischemic preconditioning (IP) affects the development of edematous cerulein-induced pancreatitis and to assess the role of cyclooxygenase-1 (COX-1), COX-2, and heat shock protein 70 (HSP 70) in this process. Methods: In male Wistar rats, IP was performed by clamping of the celiac artery (twice for 5 min at 5-min intervals). Thirty minutes after IP or sham operation, acute pancreatitis was induced by cerulein. Activity of COX-1 or COX-2 was inhibited by resveratrol or rofecoxib, respectively (10 mg/kg).
Results: IP significantly reduced pancreatic damage in cerulein-induced pancreatitis as demonstrated by the improvement of pancreas histology, reduction in serum lipase and poly-C ribonuclease activity, and serum concentration of pro-inflammatory interleukin (IL)-1beta. Also, IP attenuated the pancreatitis-evoked fall in pancreatic blood flow and pancreatic DNA synthesis. Serum level of anti-inflammatory IL-10 was not affected by IP. Cerulein-induced pancreatitis and IP increased the content of HSP 70 in the pancreas. Maximal increase in HSP 70 was observed when IP was combined with cerulein-induced pancreatitis. Inhibition of COXs, especially COX-2, reduced the protective effect of IP in edematous pancreatitis. Conclusion: Our results indicate that IP reduces pancreatic damage in cerulein-induced pancreatitis and this effect, at least in part, depends on the activity of COXs and pancreatic production of HSP 70. abstract_id: PUBMED:24785170 Combined effects of sivelestat and resveratrol on severe acute pancreatitis-associated lung injury in rats. Despite extensive research and clinical efforts made in the management of acute pancreatitis during the past few decades, to date no effective cure is available and the mortality from severe acute pancreatitis remains high. Given that lung injury is the primary cause of early death in acute pancreatitis patients, novel therapeutic approaches aiming to prevent lung injury have become a subject of intensive investigation. In a previous study, we demonstrated that sivelestat, a specific inhibitor of neutrophil elastase, is effective in protecting against lung failure in rats with taurocholate-induced acute pancreatitis. As part of the analyses extended from that study, the present study aimed to evaluate the role of sivelestat and/or resveratrol in the protection against acute pancreatitis-associated lung injury. The extended analyses demonstrated the following: (1) sodium taurocholate induced apparent lung injury and dysfunction manifested by histological anomalies, including vacuolization and apoptosis of the cells in the lung, as well as biochemical aberrations in the blood (an increase in amylase concentration and a decrease in partial arterial oxygen pressure) and increases in activities of reactive oxygen species, interleukin 6, myeloperoxidase, neutrophil elastase, lung edema, bronchotracho alveolar lavage protein concentration, and bronchotracho alveolar lavage cell infiltration in the lung; and (2) in lung tissues, either sivelestat or resveratrol treatment effectively attenuated the taurocholate-induced abnormalities in all parameters analyzed except for serum amylase concentration. In addition, combined treatment with both sivelestat and resveratrol demonstrated additive protective effects on pancreatitis-associated lung injury compared with single treatment. abstract_id: PUBMED:24549589 The effects of resveratrol on tissue injury, oxidative damage, and pro-inflammatory cytokines in an experimental model of acute pancreatitis. Acute pancreatitis (AP) is an acute inflammatory condition that results from the digestion of pancreatic tissue by its own enzymes released from the acinar cells. The objective of this study was to investigate the effects of resveratrol on oxidative damage, pro-inflammatory cytokines, and tissue injury involved with AP induced in a rat model using sodium taurocholate (n = 60). There were three treatment groups with 20 rats per group.
Groups I and II received 3% sodium taurocholate solution, while group III underwent the same surgical procedure yet did not receive sodium taurocholate. In addition, group II received 30 mg/kg resveratrol solution. Rats were sacrificed at 2, 6, 12, and 24 h time points following the induction of AP. Blood and pancreatic tissue samples were collected and subjected to biochemical assays, Western blot assays, and histopathologic evaluations. Resveratrol did not reduce trypsin levels or prevent tissue damage. Resveratrol prevented IκB degradation (except for 6 h) and decreased nuclear factor-κB (NF-κB), activator protein-1 (AP-1) (except for 24 h), and levels of TNF-α, IL-6 (except for 24 h), and iNOS in the pancreatic tissue at all time points (P < 0.05). Serum nitric oxide (NO) levels were reduced as well (P < 0.05). Thus, we concluded that resveratrol did not reduce trypsin levels and did not prevent tissue injury despite the reduction in oxidative damage and pro-inflammatory cytokine levels detected in this model of AP. abstract_id: PUBMED:19696693 Protective effect of resveratrol in severe acute pancreatitis-induced brain injury. Objectives: The aim of this study was to investigate the effects of resveratrol on severe acute pancreatitis (SAP)-induced brain injury. Methods: Ninety-six male Sprague-Dawley rats were randomly divided into 4 equal groups: sham operation, SAP, resveratrol-treated (RES), and dexamethasone-treated. Each group was evaluated at 3, 6, and 12 hours. Levels of serum myelin basic protein and zonula occludens 1 (Zo-1) were determined by enzyme-linked immunosorbent assay. The brain and pancreatic tissues were examined using electron microscopy. Expressions of Bax, Bcl-2, and caspase-3 were observed using immunohistochemistry, reverse transcriptase polymerase chain reaction, and Western blotting. Cytochrome c was detected using Western blotting alone. Results: Myelin basic protein and Zo-1 levels of the RES group were lower than those of the SAP group at all time points (P < 0.05). The RES group showed significantly improved brain pathology, increased Bcl-2 expression, and decreased Bax and caspase-3 expression compared with the SAP group. Conclusions: The degradation of Zo-1 is involved in the pathophysiology of brain injury in SAP; MBP can be used as a marker of brain injury in SAP. The protective effect of resveratrol might be associated with the up-regulation of Bcl-2 and down-regulation of Bax and caspase-3. abstract_id: PUBMED:30165701 Cigarette Smoke Toxins-Induced Mitochondrial Dysfunction and Pancreatitis Involves Aryl Hydrocarbon Receptor Mediated Cyp1 Gene Expression: Protective Effects of Resveratrol. We previously reported that mitochondrial CYP1 enzymes participate in the metabolism of polycyclic aromatic hydrocarbons and other carcinogens leading to mitochondrial dysfunction. In this study, using Cyp1b1-/-, Cyp1a1/1a2-/-, and Cyp1a1/1a2/1b1-/- mice, we observed that cigarette and environmental toxins, namely benzo[a]pyrene (BaP) and 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), induce pancreatic mitochondrial respiratory dysfunction and pancreatitis. Our results suggest that aryl hydrocarbon receptor (AhR) activation and resultant mitochondrial dysfunction are associated with pancreatic pathology. BaP treatment markedly inhibits pancreatic mitochondrial oxygen consumption rate (OCR), ADP-dependent OCR, and also maximal respiration, in wild-type mice but not in Cyp1a1/1a2-/- and Cyp1a1/1a2/1b1-/- mice.
In addition, both BaP and TCDD treatment markedly affected mitochondrial complex IV activity, in addition to causing marked reduction in mitochondrial DNA content. Interestingly, the AhR antagonist resveratrol attenuated BaP-induced mitochondrial respiratory defects in the pancreas, and reversed pancreatitis, both histologically and biochemically, in wild-type mice. These results reveal a novel role for AhR- and AhR-regulated CYP1 enzymes in eliciting mitochondrial dysfunction and cigarette toxin-mediated pancreatic pathology. We propose that increased mitochondrial respiratory dysfunction and oxidative stress are involved in polycyclic aromatic hydrocarbon-associated pancreatitis. Resveratrol, a chemopreventive agent and AhR antagonist, and CH-223191, a potent and specific AhR inhibitor, confer protection against BaP-induced mitochondrial dysfunction and pancreatic pathology. abstract_id: PUBMED:26833708 Dihydro-Resveratrol Ameliorates Lung Injury in Rats with Cerulein-Induced Acute Pancreatitis. Acute pancreatitis is an inflammatory process originating in the pancreas; however, it often leads to systemic complications that affect distant organs. Acute respiratory distress syndrome is indeed the predominant cause of death in patients with severe acute pancreatitis. In this study, we aimed to delineate the ameliorative effect of dihydro-resveratrol, a prominent analog of trans-resveratrol, against acute pancreatitis-associated lung injury and the underlying molecular actions. Acute pancreatitis was induced in rats with repetitive injections of cerulein (50 µg/kg/h) and a shot of lipopolysaccharide (7.5 mg/kg). By means of histological examination and biochemical assays, the severity of lung injury was assessed in the aspects of tissue damage, myeloperoxidase activity, and levels of pro-inflammatory cytokines.
Furthermore, resveratrol pretreatment ameliorated cholecystokinin-octapeptide (CCK-8)-induced changes in laboratory parameters and reduced histological damage in the pancreas, although it did not improve the pancreatic antioxidant state, suggesting that its beneficial effects might be mediated by an NF-kappaB-independent anti-inflammatory mechanism (PUBMED:16499907). In a rat model of cerulein-induced acute pancreatitis, resveratrol pretreatment reduced histological damage, hyperamylasemia, hyperlipidemia, and altered levels of corticosterone, total antioxidant status, and glutathione peroxidase, confirming its antioxidant and immunomodulatory properties (PUBMED:24234420). In summary, the evidence from these studies supports the notion that resveratrol can prevent and mitigate free radical-induced acute pancreatitis through its antioxidative and anti-inflammatory effects.
Instruction: Can government policies help adolescents avoid risky behavior? Abstracts: abstract_id: PUBMED:15533529 Can government policies help adolescents avoid risky behavior? Background: This study examines the extent to which policies influence participation of adolescents in alcohol and tobacco consumption and in unsafe sex. Methods: Data were obtained from the 1995 Youth Risk Behavior Surveys (YRBS) conducted by 20 different states and cities in the U.S. These data were combined with state data on cigarette taxes, vending machine laws, beer taxes, and family planning clinic availability. A model of teenage risk taking suggested that the three risk behaviors were codetermined by a common latent risk-taking propensity. We used a structural equation model (SEM) accounting for this shared latent propensity to estimate the extent of participation in terms of frequency of smoking, drinking, and the number of sex partners. Results: Estimating simultaneous equations for all three risk behaviors was statistically more efficient than equation-by-equation estimates of each behavior. Estimates indicated significant deterrent effects of beer taxes, vending machine restrictions, and increased density of family planning clinics on teenage risk behavior. Conclusions: State policies, such as taxes on beer, restrictions on the location of cigarette vending machines, and placement of family planning clinics, influence adolescents' behavior. Because these behaviors are interrelated, systems estimators can offer improved estimates of these effects. abstract_id: PUBMED:26054443 Risky behavior among Black Caribbean and Black African adolescents in England: How do they compare? Objectives: Black Caribbean and Black African adolescents in England face academic and social challenges that might predispose them to engaging in more risky behavior. This study explored the growth trajectories of risky behavior among adolescents in England over 3 years (14/15, 15/16, and 16/17 years of age) to determine the extent to which ethnic groups differed. Design: Data were taken from the Longitudinal Study of Young People in England database (N = 15,770). This database contained eight different ethnic groups. Risky behavior was defined by an 8-item scale that represented three classes of risky behavior. Individual theta scores for risky behavior were calculated for individuals at each time point and modeled over time. Interaction terms between sex, year, ethnicity, and class were also examined. Results: Findings confirmed previous research that showed ethnic group differences in means. They also demonstrated that there are differences in slopes as well, even after controlling for class. In fact, class appeared to have a reverse effect on the risky behavior of black adolescents. Further, Black adolescent groups were not engaging in higher levels of risky behavior as compared to white adolescents (the dominant population). In actuality, Mixed adolescents engaged in the highest levels of risky behavior, which was a notable finding given that the Mixed group has recently begun to receive more focused attention from scholars and the government of England. Conclusion: It is important that social workers and policy-makers recognize ethnicity in making general preventative decisions for adolescents. Second, class does not have a common effect on adolescent problem behaviors as often believed.
Finally, black adolescents' communities might contain important protective factors that ought to be extensively explored. Conversely, Mixed adolescents' communities might contain more risk factors that ought to be addressed. abstract_id: PUBMED:26167134 Testing a Risky Sex Behavior Intervention Pilot Website for Adolescents. Background And Purpose: Each year, teenagers account for about one-fifth of all unintended pregnancies in the United States. As such, delivering sexual risk reduction educational materials to teens in a timely fashion is of critical importance. Web-based delivery of these materials shows promise for reaching and persuading teens away from risky sexual and substance abuse behaviors. The purpose of this study was to pilot test a web-based program aimed at reducing risky sexual behavior and related outcomes among adolescents in a high school setting. Methods: A beta-test of the website was conducted in three public schools in New Mexico, USA with 173 students in 9th and 10th grades recruited from existing health education classes. Participants spent approximately three hours over a period of two days completing the online program in school computer labs. Results: Pretest to posttest results indicated that self-efficacy for condom use and condom use intentions, two theoretical mediators of changes in condom use behavior, were significantly changed. Adolescents also reported high satisfaction with the website content. Conclusion: BReady4it provided an innovative sex and substance abuse education to teenagers that revealed promising positive changes in cognitive constructs that are inversely related to risky sexual behavior among users. abstract_id: PUBMED:35968731 The Relationship of Risky Online Behaviors and Adverse Childhood Experiences to Online Sexual Victimization Among Korean Female Adolescents. Prior research has demonstrated that online sexual victimization (OSV) is a significant social problem and is associated with adolescents' negative developmental outcomes. However, it remains unclear whether adolescents' risky online behaviors and offline victimization are related to the risk of OSV. The present study examined whether female adolescents' risky online behaviors (mood regulation through the Internet, ingratiating behavior, disclosure of personal information, harassing behavior, talking with someone met online, and sexual behavior) and offline victimization (adverse childhood experiences [ACEs]) would be associated with OSV. This study recruited female adolescents and their mothers within six metropolitan cities and provinces of residential areas of South Korea. A total of 509 female adolescents participated in the survey (aged 13-18 years). The present study employed multivariate regression to examine the relationship of risky online behaviors and offline victimization to the experience of OSV. Female adolescents' risky online behaviors (harassing behavior, talking with someone met online, and sexual behavior) were significantly associated with OSV, and those with high exposure to maltreatment and family dysfunction during childhood were more at risk of OSV than adolescents with low exposure to ACEs. The results suggest that it is important to address the effects of risky online behaviors and exposure to offline victimization on female adolescents' sexual victimization online. 
Identifying risky online behaviors and offline victimization related to OSV can help researchers and practitioners further understand female adolescents' online victimizations in the context of offline and online dynamics. abstract_id: PUBMED:26718543 Mechanisms That Link Parenting Practices to Adolescents' Risky Sexual Behavior: A Test of Six Competing Theories. Risky sexual behavior, particularly among adolescents, continues to be a major source of concern. In order to develop effective education and prevention programs, there is a need for research that identifies the antecedents of such behavior. This study investigated the mediators that link parenting experiences during early adolescence to subsequent risky sexual behaviors among a diverse sample of African American youth (N = 629, 55 % female). While there is ample evidence that parenting practices (e.g., supportive parenting, harsh parenting, parental management) are antecedent to risky sexual behavior, few studies have examined whether one approach to parenting is more strongly related to risky sex than others. Using a developmental approach, the current study focused on factors associated with six theories of risky sexual behavior. While past research has provided support for all of the theories, few studies have assessed the relative contribution of each while controlling for the processes proposed by the others. The current study addresses these gaps in the literature and reports results separately by gender. Longitudinal analyses using structural equation modeling revealed that the mediating mechanisms associated with social learning and attachment theories were significantly related to the risky sexual behavior of males and females. Additionally, there was support for social control and self-control theories only for females and for life history theory only for males. We did not find support for problem behavior theory, a perspective that dominates the risky sex literature, after controlling for the factors associated with the other theories. Finally, supportive parenting emerged as the parenting behavior most influential with regard to adolescents' risky sexual behavior. These results provide insight regarding efficacious approaches to education and preventative programs designed to reduce risky sexual behaviors among adolescents. abstract_id: PUBMED:37555002 The influence of depressive symptoms and school-going status on risky behaviors: a pooled analysis among adolescents in six sub-Saharan African countries. Background: Evidence from sub-Saharan Africa (SSA) regarding risky behaviors among adolescents remains scarce, despite the large population (approximately 249 million out of 1.2 billion globally in 2019) of adolescents in the region. We aimed to examine the potential influence of depressive symptoms and school-going status on risky behaviors among adolescents in six SSA countries. Methods: We used individual cross-sectional data from adolescents aged 10-19 based in eight communities across six SSA countries, participating in the ARISE Network Adolescent Health Study (N = 7,661). Outcomes of interest were cigarette or tobacco use, alcohol use, other substance use, getting into a physical fight, no condom use during last sexual intercourse, and suicidal behavior. 
We examined the proportion of adolescents reporting these behaviors, and examined potential effects of depressive symptoms [tertiles of 6-item Kutcher Adolescent Depression Scale (KADS-6) score] and school-going status on these behaviors using mixed-effects Poisson regression models. We also assessed effect modification of associations by sex, age, and school-going status. Results: The proportion of adolescents reporting risky behaviors varied, from 2.2% for suicidal behaviors to 26.2% for getting into a physical fight. Being in the higher tertiles of KADS-6 score was associated with increased risk of almost all risky behaviors [adjusted risk ratio (RR) for highest KADS-6 tertile for alcohol use: 1.70, 95% confidence interval (95% CI): 1.48-1.95, p < 0.001; for physical fight: 1.52, 95% CI: 1.36-1.70, p < 0.001; for suicidal behavior: 7.07, 95% CI: 2.69-18.57, p < 0.001]. Being in school was associated with reduced risk of substance use (RR for alcohol use: 0.73, 95% CI: 0.53-1.00, p = 0.047), and not using a condom (RR: 0.81, 95% CI: 0.66-0.99, p = 0.040). There was evidence of modification of the effect of school-going status on risky behaviors by age and sex. Conclusion: Our findings reinforce the need for a greater focus on risky behaviors among adolescents in SSA. Addressing depressive symptoms among adolescents, facilitating school attendance and using schools as platforms to improve health may help reduce risky behaviors in this population. Further research is also required to better assess the potential bidirectionality of associations. abstract_id: PUBMED:34306799 Is Delinquency Associated With Subsequent Victimization by Community Violence in Adolescents? A Test of the Risky Behavior Model in a Primarily African American Sample. Objective: Victimization is common in adolescence and is associated with negative outcomes, including school failure, and poor emotional, behavioral, and physical health. A deeper understanding of the risk of victimization can inform prevention and intervention efforts. This study tests the risky behavior model in adolescents, examining prospective associations between mean levels of and changes in delinquency and risk for victimization over four annual data collections. Method: Low-income adolescent (53.6% female; Mage = 12.13 years, SD = 1.62 years; 91.9% African American) and maternal caregiver dyads (N = 358) residing in urban neighborhoods in the mid-Atlantic region of the United States that had moderate-to-high levels of violence and/or poverty completed separate annual home interviews for 4 years. Maternal caregivers reported on adolescents' delinquent behavior; adolescents reported on their victimization by community violence experiences. Results: Using a latent difference score model, results supported the risky behavior model for the first 2 years, but not the final data collection period. That is, levels of and changes in delinquent behavior were associated with more victimization by community violence at the subsequent time point for the first 2 study years. In contrast, there was no evidence for the opposite, specifically that victimization by community violence predicted delinquency. Conclusion: Knowing that both levels of delinquency and increases in delinquency place youth at heightened risk for victimization by community violence provides impetus to intervene. Screening for increases in delinquency among youth may be one way to target youth at high risk for victimization by community violence for fast-tracked intervention.
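The adjusted risk ratios quoted above come from mixed-effects Poisson models. As an illustrative stand-in, the sketch below fits a GEE Poisson model with exchangeable clustering by community, which likewise yields covariate-adjusted risk ratios for a binary outcome; note the substitution of GEE for a random-effects model, and that the file name and all column names are assumptions rather than the ARISE study's actual variables.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("arise_adolescents.csv")  # assumed: one row per adolescent

model = smf.gee(
    "physical_fight ~ C(kads6_tertile) + age + C(sex) + C(in_school)",
    groups="community",                      # accounts for clustering by site
    data=df,
    family=sm.families.Poisson(),            # log link: exp(coef) is a risk ratio
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()

rr = np.exp(model.params)                    # adjusted risk ratios
ci = np.exp(model.conf_int())                # 95% confidence intervals
print(pd.concat([rr, ci], axis=1))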
abstract_id: PUBMED:29642938 Social network correlates of risky sexual behavior among adolescents in Bahir Dar and Mecha Districts, North West Ethiopia: an institution-based study. Background: Behaviors established during adolescence such as risky sexual behaviors have negative effects on future health and well-being. Extant literature indicated that individual attributes such as peer pressure and substance use have impacts on healthy development of young peoples' sexual behavior. The patterns of relationships (social network structure) and the social network content (members' norm regarding sexual practice) established by adolescents' network on adolescents' risky sexual behaviors are not well investigated. Methods: This cross-sectional study assessed the roles of social networks on sexual behavior of high school adolescents in Bahir Dar and Mecha district, North West Ethiopia. Data were collected from 806 high school adolescents using a pretested anonymously self administered questionnaire. Hierarchical logistic regression model was used for analysis. Results: The results indicated that more than 13% had risky sexual behavior. Taking social networks into account improved the explanation of risky sexual behavior over individual attributes. Adolescents embedded within increasing sexual practice approving norm (AOR 1.61; 95%CI: 1.04 - 2.50), increasing network tie strength (AOR 1.12; 95% CI: 1.06 - 1.19), and homogeneous networks (AOR 1.58; 95% CI: .98 - 2.55) were more likely to had risky sexual behavior. Engaging within increasing number of sexuality discussion networks was found protective of risky sexual behavior (AOR .84; 95% CI: .72 - .97). Conclusion: Social networks better predict adolescent's risky sexual behavior than individual attributes. The findings indicated the circumstances or contexts that social networks exert risks or protective effects on adolescents' sexual behavior. Programs designed to reduce school adolescents' sexual risk behavior should consider their patterns of social relationships. abstract_id: PUBMED:31714886 Prevalence and personal predictors of risky sexual behaviour among in-school adolescents in the Ikenne Local Government Area, Ogun State, Nigeria. Risky sexual behaviour increases the vulnerability of an adolescents to reproductive health problems like sexually transmitted infections (STIs), unintended pregnancy and abortion. This study therefore investigated the prevalence and personal predictors of risky sexual behaviour among in-school adolescents in the Ikenne Local Government Area, of Ogun State, Nigeria. The study employed a descriptive cross-sectional design. A multi-stage sampling technique was used to select 716 participants for the study. A validated semi-structured questionnaire with a Cronbach's alpha score of 0.78 was used to collect data. Frequency and logistic regression analysis were conducted to give statistical responses to the research question and hypotheses using SPSS version 23. The mean age of the participants was 15.2 ± 1.4 years and 57.3% of the respondents were female. Above a quarter (35.5%) of the respondents were in senior secondary one (SS1) in high school. More than half (53.1%) of the respondents had a good level of knowledge about risky sexual behaviour and 54% of the respondents had a fair perception of it. The respondents' attitudes towards risky sexual behaviour was moderately high (61.2%). The respondents had moderately high (63.7%) self-esteem. The prevalence of risky sexual behaviour was 19.2%. 
The personal predictors of risky sexual behaviour are age [odds ratio (OR) = 3.21; p < 0.05]; gender [OR = 1.86; confidence interval (CI): 1.26-2.69; p < 0.05]; perception (OR = 2.58; CI: 1.55-4.30; p < 0.05); attitude (OR = 4.58; CI: 1.61-13.05; p < 0.05); and self-esteem (OR = 7.39; p < 0.05). In conclusion, the respondents' risky sexual behaviours are predicted by age, gender, attitude, perception and self-esteem. The study therefore recommends that educational materials about the negative effects of risky sexual behaviours should be included in the curriculum of secondary schools. abstract_id: PUBMED:28608773 Predictors of Risky Behavior and Offending for Adolescents With Mild Intellectual Disability. Adolescents with intellectual disability (ID) engage in risky behavior and offending. However, little is known about the impact school-related predictors have on engagement in risky behaviors for adolescents with ID. This study analyzed secondary data from the National Longitudinal Transition Study-2 (NLTS2) to determine levels of engagement in risky behaviors and offending for adolescents with mild and moderate/severe ID. School-related predictors of engagement for adolescents with mild ID were also explored. Results indicated adolescents with mild ID engage in risky behaviors and offending at significantly higher rates as compared to adolescents with moderate/severe ID. Participation in a social skills or life skills class was a significant predictor of less engagement in risky behaviors for individuals with mild ID. Answer: Yes, government policies can help adolescents avoid risky behavior. A study examining the influence of state policies on adolescent behavior in the United States found that certain policies, such as taxes on beer, restrictions on the location of cigarette vending machines, and the placement of family planning clinics, had significant deterrent effects on teenage risk behavior. The study used a structural equation model (SEM) to account for a shared latent risk-taking propensity among different risky behaviors and found that estimating simultaneous equations for all three risk behaviors was statistically more efficient than estimating each behavior separately. The results indicated that state policies could influence adolescents' behavior, suggesting that government interventions can be effective in reducing risky behaviors among adolescents (PUBMED:15533529).
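Several of the abstracts above report adjusted odds ratios from logistic regression. As a minimal sketch of how such ORs are typically obtained, assuming a hypothetical survey extract whose column names (risky_sexual_behaviour, age, gender, perception, attitude, self_esteem) are illustrative rather than taken from the cited studies:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("in_school_adolescents.csv")  # assumed survey extract

logit = smf.logit(
    "risky_sexual_behaviour ~ age + C(gender) + perception + attitude + self_esteem",
    data=df,
).fit()

odds_ratios = np.exp(logit.params)  # adjusted OR per predictor
or_ci = np.exp(logit.conf_int())    # 95% confidence intervals
print(pd.concat([odds_ratios, or_ci], axis=1))

Exponentiating the logit coefficients and their confidence limits gives exactly the OR (CI) pairs quoted above.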
Instruction: Are intravitreal bevacizumab and ranibizumab effective in a rat model of choroidal neovascularization? Abstracts: abstract_id: PUBMED:18781316 Are intravitreal bevacizumab and ranibizumab effective in a rat model of choroidal neovascularization? Background: Vascular endothelial growth factor (VEGF) is an important stimulator of choroidal neovascularization (CNV). Bevacizumab (Avastin), ranibizumab (Lucentis) and pegaptanib sodium (Macugen) are anti-VEGF medications that have been used in the treatment of CNV. The purpose of our study is to evaluate the efficacy and safety of intravitreal injections of bevacizumab, ranibizumab and pegaptanib sodium in the treatment of CNV in a rat model. Methods: Multiple CNV lesions were induced by laser photocoagulation of the retina in Brown-Norway rats. After 3 weeks, 17 rats were divided into three groups and received intravitreal injections of bevacizumab, ranibizumab or pegaptanib sodium in different dosages. The lesions were evaluated by fluorescein angiography 1, 7, 14, and 28 days later to assess the efficacy of these medications. Results: Different doses of bevacizumab did not show any effect on stopping the leakage on fluorescein angiography on days 1, 7, 14, and 28. Ranibizumab and pegaptanib sodium did not stop the leakage of CNV either. No angiographic or histopathologic toxicity was observed. Conclusions: These three anti-VEGF agents did not show any therapeutic effect on stopping CNV leakage in rats. Previous experiments with ranibizumab in monkeys resulted in a significant decrease in leakage of CNV. The difference may be due to the fact that both ranibizumab and bevacizumab are humanized and species-specific. There are several studies evaluating the effect of bevacizumab in non-primates. Since bevacizumab is humanized, the results of studies on non-primates may not be similar to humans and non-human primates. abstract_id: PUBMED:22791965 Treatment of peripheral exudative hemorrhagic chorioretinopathy by intravitreal injections of ranibizumab. Peripheral exudative hemorrhagic chorioretinopathy (PEHCR) is a rare disorder that sometimes causes sudden subretinal and/or vitreous hemorrhage. Choroidal neovascularization is involved in the pathogenesis, but the etiology is unknown. Treatments with photocoagulation, cryopexy, and intravitreal bevacizumab injection have been reported. However, the therapeutic effect of intravitreal injection with ranibizumab for PEHCR is unclear. A 70-year-old woman visited our department because of sudden loss of superior visual field in her left eye. She had a history of surgical removal of hematoma due to subretinal hemorrhage associated with age-related macular degeneration 5 years ago. Peripheral subretinal hemorrhage was observed in the left eye, and fluorescein and indocyanine green angiography revealed choroidal neovascularization in the subretinal hemorrhagic region. PEHCR was diagnosed. Considering her past history, intravitreal ranibizumab injection was used for treatment. After three injections in the left eye, subretinal hemorrhage and choroidal neovascularization resolved completely. No recurrence was observed during 1 year of follow-up. This case demonstrates that intravitreal injection of ranibizumab is an effective treatment for PEHCR with subretinal hemorrhage. abstract_id: PUBMED:21708088 Efficacy of intravitreal bevacizumab after unresponsive treatment with intravitreal ranibizumab. 
Objective: To evaluate visual outcomes of eyes with choroidal neovascular membrane secondary to age-related macular degeneration that were initially treated with intravitreal ranibizumab then switched to intravitreal bevacizumab due to treatment failure. Design: Retrospective chart review. Participants: Fifty eyes of 50 patients presenting to the Barnes Retina Institute. Methods: Patients unresponsive to treatment with intravitreal ranibizumab were switched to intravitreal bevacizumab. Main outcome measures included number of intravitreal injections, visual acuity (VA), and resolution of leakage. Mean follow-up was 6 months after the final intravitreal bevacizumab injection. On average, each patient received 3.5 ranibizumab injections and 2.5 bevacizumab injections. Each patient received an average of 6 injections. Results: Resolution of leakage on fluorescein angiography and optical coherence tomography was achieved in 44 eyes (88%). Initial VA ranged from 20/30 to counting fingers (CF) (median VA 20/125). Final VA ranged from 20/20 to CF (median VA 20/100). Change in VA varied from loss of 2 lines to gain of 4 lines, but overall, remained stable (average gain 0.3 lines). Eighteen eyes (36%) had a final VA of ≥ 20/50 and 18 eyes (36%) had a final VA of ≤20/200. Conclusions: Treatment with intravitreal bevacizumab may be effective, as measured by visual and anatomic criteria, in patients who are unresponsive to treatment with intravitreal ranibizumab. abstract_id: PUBMED:25761547 Rescue therapy with intravitreal aflibercept for choroidal neovascularization secondary to choroidal osteoma non-responder to intravitreal bevacizumab and ranibizumab. To investigate the effect of aflibercept in a rare case of choroidal neovascularization (CNV) secondary to choroidal osteoma (CO) and refractory to ranibizumab and bevacizumab. A 45-year-old male with CO-related CNV in his left eye received prior two intravitreal 1.25 mg bevacizumab injections and three intravitreal 0.5 mg ranibizumab injections without visual and anatomic improvement. Best-corrected visual acuity assessment, ophthalmic examination, fundus photography, and optical coherence tomography (OCT) were performed. Switching to intravitreal injection of 2.0 mg aflibercept was performed. After three loading doses of intravitreal aflibercept, visual acuity of the left eye improved from 20/50 to 20/32. Resolution of the persistent subfoveal fluid and reduction of retinal hemorrhage were confirmed according to ophthalmoscopy and OCT findings. No serious adverse events were observed. The treatment effect persisted during a 10-month follow-up period. In choroidal osteoma, switching to intravitreal aflibercept injection may be an effective therapeutic option for treatment of CNV refractory to ranibizumab and bevacizumab. abstract_id: PUBMED:26903722 Experience of intravitreal injections in a tertiary Hospital in Oman. Aim: To find out statistical data regarding intravitreal injections in an outpatient department setup at a tertiary center in Oman. Design: Retrospective chart review. Methods: Data collection of patients who underwent intravitreal injections from November 2009 to May 2013 at Sultan Qaboos University Hospital. Results: Throughout a period of 42 months, a total of 711 intravitreal injections were performed. That included 214 patients (275 eyes). Around one-third of the eyes received two injections or more. 
The injected agents were bevacizumab (59.8%), ranibizumab (32.3%), and triamcinolone (7.5%); very few patients with endophthalmitis received intravitreal antibiotics and antifungal agents. The three most common indications for the injection therapy were diabetic macular edema (50.9%), choroidal neovascularization (24.3%), and retinal vein occlusive diseases (11.5%). Serious adverse events were rare, occurring as ocular (0.9% per patient) and systemic (3.3% per patient) events. Forty-two eyes received intravitreal triamcinolone, and 24% of them developed intraocular hypertension that required only medical treatment. Conclusion: Different intravitreal agents are currently used to treat many ocular diseases. Currently, therapy with intravitreal agents is very popular, and it carries promising outcomes in terms of efficacy and safety. abstract_id: PUBMED:31771544 Intravitreal anti-VEGF treatment for choroidal neovascularization secondary to traumatic choroidal rupture. Background: So far only single cases with short follow-up have been reported on the use of intravitreal anti-VEGF for traumatic choroidal neovascularizations (CNV). This paper reports a large case series of patients with CNV secondary to choroidal rupture after ocular trauma receiving intravitreal anti-VEGF (vascular endothelial growth factor) injections. Methods: Fifty-four patients with unilateral choroidal rupture after ocular trauma diagnosed between 2000 and 2016 were retrospectively evaluated. Eleven patients with CNV secondary to choroidal rupture were identified. Five eyes with traumatic secondary CNV were treated with anti-VEGF and were systematically analysed. The other 4 patients with inactive CNV underwent watchful observation. Results: Four men and one woman with a mean age of 29 years (SD 12.4; range 19-45) had intravitreal anti-VEGF therapy for traumatic CNV. Another 4 patients with a mean age of 37 years (SD 6.6; range 31-46) presented with inactive CNV and did not receive specific treatment. In all 9 cases the mean interval between the ocular trauma and the diagnosis of CNV was 5.7 months (SD 4.75; range 2-12). In the treatment group, 4.2 injections per eye (SD 3.2; range 1-8) were given on average. Four eyes were treated with bevacizumab and one eye with ranibizumab. Regression of CNV was noted in all eyes. In 4 eyes visual acuity (VA) improved; one eye kept stable visual acuity. Conclusions: Here, we present the largest case series to date of traumatic CNV membranes treated with anti-VEGF injections, with a mean follow-up period of 5 years. Intravitreal anti-VEGF therapy seems to be safe and effective for secondary CNV after choroidal rupture. Compared to exudative age-related macular degeneration, fewer injections are needed to control the disease. Trial Registration: Retrospective registration with the local ethics committee on 21 March 2019. The trial registration number is 19-1368-104. abstract_id: PUBMED:28424992 Intravitreal anti-VEGF treatment for choroidal neovascularization secondary to punctate inner choroidopathy. Purpose: To assess the outcome of patients with choroidal neovascularization (CNV) secondary to punctate inner choroidopathy (PIC) receiving intravitreal anti-VEGF (vascular endothelial growth factor) injections. Methods: Sixteen eyes of 16 patients diagnosed with CNV secondary to PIC were retrospectively assessed. Results: Eleven women and five men with a mean age of 35 years (SD 11, range 16-56 years) received intravitreal anti-VEGF for PIC-related CNV.
On average, 3.5 injections (SD 2.7, range 1-9) were given per eye. Thirteen eyes were treated with bevacizumab, two eyes with ranibizumab, and one eye received both substances. The mean follow-up was 15 months (SD 11, range 6-40 months). BCVA improved in eight eyes (mean Δ +2.8 lines), remained stable in four eyes, and decreased in four eyes (mean Δ -4.3 lines). Conclusions: CNV development is a frequent complication of PIC. Intravitreal anti-VEGF therapy seems to be safe and effective for PIC-related CNV. abstract_id: PUBMED:22922846 Intravitreal ranibizumab versus bevacizumab for treatment of myopic choroidal neovascularization. Purpose: To compare intravitreal bevacizumab (IVB) and intravitreal ranibizumab (IVR) in the treatment of subfoveal choroidal neovascularization associated with pathologic myopia. Methods: Fifty-five patients fulfilling inclusion and exclusion criteria were randomized either to IVB or to IVR. After the first injection, re-treatments were performed on a pro re nata basis in monthly examinations over an 18-month follow-up. Primary outcome measures were the change in mean best-corrected visual acuity and the proportion of eyes improving in best-corrected visual acuity by >1 and >3 lines at the 18-month examination. Results: Forty-eight eyes received the treatment and were subsequently included in the analysis. At the 18-month examination, significant improvements of 1.7 lines and 1.8 lines compared with baseline were noted in the IVR and IVB subgroups, respectively. The difference in the final mean best-corrected visual acuity between the groups was not significant. A 3-line gain or higher was noted in 30% of eyes in the IVR subgroup and 44% of eyes in the IVB subgroup. Although both groups attained a significant improvement in central macular thickness, the IVR subgroup achieved a faster central macular thickness reduction. A significantly lower number of injections was administered in the IVR subgroup (2.5) compared with the IVB subgroup (4.7; P < 0.001). Conclusion: Intravitreal ranibizumab and IVB are effective in the treatment of subfoveal myopic choroidal neovascularization. Intravitreal ranibizumab achieved greater efficacy than IVB in terms of the mean number of injections administered. abstract_id: PUBMED:21817958 Intravitreal anti-vascular endothelial growth factor therapy for choroidal neovascularization secondary to ocular histoplasmosis syndrome. Background: Intravitreal anti-vascular endothelial growth factor (anti-VEGF) therapy is beneficial in treating choroidal neovascularization from age-related macular degeneration, but few long-term studies have shown its efficacy in choroidal neovascularization from ocular histoplasmosis syndrome. Intravitreal anti-VEGF therapy may be effective in cases of choroidal neovascularization due to ocular histoplasmosis syndrome. Methods: A retrospective chart review of 54 eyes treated with intravitreal anti-VEGF therapy for choroidal neovascularization in ocular histoplasmosis syndrome with >1 year of follow-up after initiation of anti-VEGF treatment was performed. Previous treatment and demographic information were recorded. Visual acuity was recorded for each injection treatment and at the last follow-up visit. The anti-VEGF agent was recorded for each injection treatment. Results: Mean visual acuity improved from 20/53 to 20/26 over an average of 26.8 months.
Either bevacizumab or ranibizumab was administered, with an average of 4.5 injections per patient per year of follow-up. Vision loss was seen in only three eyes, with loss limited to a single line of vision. Patients experienced no serious complications from treatment. Conclusion: Long-term intravitreal anti-VEGF therapy with bevacizumab or ranibizumab is beneficial in the treatment of choroidal neovascularization in ocular histoplasmosis syndrome. abstract_id: PUBMED:23581613 Intravitreal anti-vascular endothelial growth factor therapy for choroidal neovascularization due to Sorsby macular dystrophy. Purpose: To report the first case of intravitreal bevacizumab and ranibizumab used to treat choroidal neovascularization secondary to Sorsby macular dystrophy. Case: A 57-year-old male with metamorphopsia, color vision deficits, and an ocular family history of Sorsby macular dystrophy was found to have a choroidal neovascular membrane (CNVM) in his left eye. He was initially treated with intravitreal bevacizumab and had visual acuity improvement and resolution of the subretinal fluid on OCT. After 8 injections, he developed presumed mild inflammation secondary to intravitreal bevacizumab and was switched to combination intravitreal bevacizumab/dexamethasone in his left eye, which consistently demonstrated efficacy in stabilizing his vision and the CNVM without producing intraocular inflammation. The right eye later developed the CNVM and he was started on intravitreal bevacizumab in this eye as well. After 8 injections in the right eye, he experienced a similar inflammatory reaction following intravitreal bevacizumab injections and was switched to combination intravitreal bevacizumab/dexamethasone in the right eye as well. Subsequently, he was switched to intravitreal ranibizumab in the left eye alone, which continued to stabilize his vision and OCT findings and did not cause the inflammatory reaction he had previously experienced with bevacizumab. After 5 ranibizumab injections, he experienced none of the inflammatory response he appeared to have with bevacizumab, but chose to switch back to combination intravitreal bevacizumab and dexamethasone for financial reasons. Initially, in his clinical course, he experienced consistent visual acuity improvements with intravitreal anti-vascular endothelial growth factor therapy and continues to enjoy functional vision nearly 7 years after his initial symptoms. Conclusions: Intravitreal bevacizumab and ranibizumab demonstrated efficacy in this case in the treatment of CNVM associated with Sorsby macular dystrophy. Answer: Intravitreal bevacizumab and ranibizumab did not show any therapeutic effect on stopping choroidal neovascularization (CNV) leakage in a rat model. In a study where multiple CNV lesions were induced by laser photocoagulation of the retina in Brown-Norway rats, different doses of bevacizumab, ranibizumab, and pegaptanib sodium were administered. The lesions were evaluated by fluorescein angiography at various time points to assess the efficacy of these medications. The results indicated that none of the anti-VEGF agents, including bevacizumab and ranibizumab, stopped the leakage of CNV in rats. No angiographic or histopathologic toxicity was observed. The lack of efficacy in rats was suggested to be due to the fact that both ranibizumab and bevacizumab are humanized and species-specific, which may explain the difference in results compared to previous experiments with ranibizumab in monkeys that showed a significant decrease in CNV leakage (PUBMED:18781316).
Instruction: Is the cortical thickness index a valid parameter to assess bone mineral density in geriatric patients with hip fractures? Abstracts: abstract_id: PUBMED:25801811 Is the cortical thickness index a valid parameter to assess bone mineral density in geriatric patients with hip fractures? Introduction: Reduced bone quality is a common problem during surgical fixation of geriatric hip fractures. The cortical thickness index (CTI) was proposed to assess the bone mineral density (BMD) of the proximal femur on the basis of plain X-rays. The purpose of this study was to evaluate the inter- and intraobserver reliability of the CTI and to investigate the correlation between CTI and BMD in geriatric patients. Methods: 60 patients (20 pertrochanteric fractures, 20 femoral neck fractures, 20 without fractures) were included. All patients had ap and lateral hip X-rays and measurement of BMD by Dual Energy X-ray Absorptiometry at different areas of the hip. The ap and lateral CTI was measured twice by four blinded observers and the correlation between mean CTI and BMD was calculated. Results: Mean ap CTI was 0.52 and mean lateral CTI was 0.45. Inter- and intraobserver reliability was good for ap CTI (ICC 0.71; 0.79) and lateral CTI (ICC 0.65; 0.69). A significant correlation between CTI and overall BMD was found in patients without fractures (r = 0.74; r = 0.67). No significant correlation between CTI and overall BMD was found in patients with proximal femoral fractures. Conclusion: The CTI has sufficient reliability for use in daily practice. It showed significant correlation with BMD in patients without hip fractures. In patients with proximal femoral fractures, no correlation between CTI and BMD was found. We do not recommend the CTI as a parameter to assess the BMD of the proximal femur in geriatric patients with hip fractures. abstract_id: PUBMED:27558243 Hip fractures in the elderly: The role of cortical bone. Introduction: Osteoporosis is characterised by poor bone quality arising from alterations to trabecular bone. However, recent studies have also described an important role of alterations to cortical bone in the physiopathology of osteoporosis. Although dual-energy X-ray absorptiometry (DXA) is a valid method to assess bone mineral density (BMD), real bone fragility in the presence of comorbidities cannot be evaluated with this method. The aim of this study was to evaluate if cortical thickness could be a good parameter to detect bone fragility in patients with hip fracture, independent of BMD. Methods: A retrospective study was conducted on 100 patients with hip fragility fractures. The cortical index was calculated on the fractured femur (femoral cortical index [FCI]) and, when possible, on the proximal humerus (humeral cortical index [HCI]). All patients underwent densitometric evaluation by DXA. Results: The average value of FCI was 0.43 and of HCI was 0.25. Low values of FCI were found in 21 patients with normal or osteopenic values of BMD, while low values of HCI were found in three patients with non-osteoporotic values of BMD. Discussion And Conclusion: Cortical thinning measured from an X-ray of the femur identifies 21% additional fracture cases over those identified by a T-score < -2.5 (57%). FCI could be a useful tool to evaluate bone fragility and to predict fracture risk even in patients with normal and osteopenic BMD. abstract_id: PUBMED:25541355 Independent measurement of femoral cortical thickness and cortical bone density using clinical CT.
The local structure of the proximal femoral cortex is of interest since both fracture risk, and the effects of various interventions aimed at reducing that risk, are associated with cortical properties focused in particular regions rather than dispersed over the whole bone. Much of the femoral cortex is less than 3 mm thick, appearing so blurred in clinical CT that its actual density is not apparent in the data, and neither thresholding nor full-width half-maximum techniques are capable of determining its width. Our previous work on cortical bone mapping showed how to produce more accurate estimates of cortical thickness by assuming a fixed value of the cortical density for each hip. However, although cortical density varies much less over the proximal femur than thickness, what little variation there is leads to errors in thickness measurement. In this paper, we develop the cortical bone mapping technique by exploiting local estimates of imaging blur to correct the global density estimate, thus providing a local density estimate as well as more accurate estimates of thickness. We also consider measurement of cortical mass surface density and the density of trabecular bone immediately adjacent to the cortex. Performance is assessed with ex vivo clinical QCT scans of proximal femurs, with true values derived from high-resolution HRpQCT scans of the same bones. We demonstrate superior estimation of thickness compared with alternative techniques (accuracy 0.12 ± 0.39 mm for cortices in the range 1-3 mm), and that local cortical density estimation is feasible for densities >800 mg/cm³. abstract_id: PUBMED:26226860 Femoral cortical index: an indicator of poor bone quality in patients with hip fracture. Background: Osteoporosis is a common disease in the elderly, characterized by poor bone quality as a result of alterations affecting trabecular bone. However, recent studies have also described an important role of alterations of cortical bone in the physiopathology of osteoporosis. Although dual-energy X-ray absorptiometry (DXA) is a valid method to assess bone mineral density, real bone fragility in the presence of comorbidities cannot be evaluated with it. The number of hip fractures is rising, especially in people over 85 years old. Aims: The aim was to evaluate an alternative method that can indicate fracture risk, independent of bone mineral density (BMD). The femoral cortical index (FCI) assesses cortical bone stock using femur X-rays. Methods: A retrospective study was conducted on 152 patients with hip fragility fractures. FCI was calculated on the fractured femur and on the opposite side. The presence of comorbidities, osteoporosis risk factors, vitamin D levels, and BMD were analyzed for each patient. Results: Average values of FCI were 0.42 for fractured femurs and 0.48 on the opposite side, a statistically significant difference (p = 0.002). Patients with severe hypovitaminosis D had a lower FCI compared to those with moderate deficiency (0.41 vs. 0.46, p < 0.011). 42 patients (27.6%) with osteopenic or normal BMD presented low values of FCI. Discussion And Conclusion: A significant correlation among low values of FCI, comorbidities, severe hypovitaminosis D, and BMD was found in patients with hip fractures. FCI could be a useful tool to evaluate bone fragility and to predict fracture risk even in patients with normal or osteopenic BMD.
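For readers unfamiliar with the index used throughout these abstracts: the CTI/FCI is conventionally computed as the fraction of the femoral shaft diameter occupied by cortical bone on a plain radiograph, i.e. (outer diameter - medullary canal diameter) / outer diameter, with both callipers placed at the same level of the proximal diaphysis. The helper below sketches that ratio under this assumed convention; the example numbers are illustrative, not values from the cited studies.

def cortical_index(outer_diameter_mm: float, canal_diameter_mm: float) -> float:
    """CTI = (outer shaft diameter - medullary canal diameter) / outer diameter."""
    if not 0 < canal_diameter_mm < outer_diameter_mm:
        raise ValueError("canal diameter must be positive and smaller than the shaft")
    return (outer_diameter_mm - canal_diameter_mm) / outer_diameter_mm

# Example: a 30 mm shaft with a 17 mm canal gives a CTI of about 0.43,
# in the range reported above for fractured femora.
print(round(cortical_index(30.0, 17.0), 2))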
abstract_id: PUBMED:36684322 Prediction of osteoporosis from proximal femoral cortical bone thickness and Hounsfield unit value with clinical significance. Background: Utilizing dual-energy x-ray absorptiometry (DXA) to assess bone mineral density (BMD) is not routine in many clinical scenarios, leading to missed diagnoses of osteoporosis. The objective of this study is to obtain effective parameters from hip computed tomography (CT) to screen patients with osteoporosis and predict their clinical outcomes. Methods: A total of 375 patients with hip CT scans for intertrochanteric fracture were included. Among them, 56 patients had both hip CT scans and DXA data and were designated as the training group. The cortical bone thickness (CTh) and Hounsfield unit (HU) values were extracted from 31 regions of interest (ROIs) of the proximal femur. In the training group, the correlations between these parameters and BMD were investigated, and their diagnostic efficiency for osteoporosis was assessed. Finally, 375 patients were divided into osteoporotic and nonosteoporotic groups based on the optimal cut-off values, and the clinical difference between subgroups was evaluated. Results: The CTh value of ROI 21 and the HU value of ROI 14 were moderately correlated with the hip BMD [r = 0.475 and 0.445 (p < 0.001), respectively]. The best diagnostic effect could be obtained by defining osteoporosis as a CTh value < 3.19 mm in ROI 21 or an HU value < 424.97 in ROI 14, with accuracies of 0.821 and 0.883, sensitivities of 84% and 76%, and specificities of 71% and 87%, respectively. The clinical outcome of the nonosteoporotic group was better than that of the osteoporotic group regardless of the division criteria. Conclusion: The CTh and HU values of specific cortex sites in the proximal femur were positively correlated with BMD measured by DXA at the hip. Thresholds for osteoporosis based on CTh and HU values could be utilized to screen for osteoporosis and predict clinical outcomes. abstract_id: PUBMED:21084915 Texture analysis, bone mineral density, and cortical thickness of the proximal femur: fracture risk prediction. Objective: The objectives of this study were to analyze bone quality in multidetector computed tomographic images of the femur using bone mineral density (BMD), cortical thickness, and texture algorithms in differentiating osteoporotic fracture and control subjects, and to differentiate fracture types. Methods: Femoral head, trochanteric, intertrochanteric, and upper and lower neck regions were segmented (fracture, n = 30; control, n = 10). Cortical thickness, BMD, and texture analysis were obtained using co-occurrence matrices, Minkowski dimension, and the functional and scaling index methods. Results: Bone mineral density and cortical thickness performed best in the neck region, and texture measures performed best in the trochanter. Only cortical thickness and texture measures differentiated femoral neck and intertrochanteric fractures. Conclusions: This study demonstrates that differentiation of osteoporotic fracture subjects and controls is achieved with texture measures, cortical thickness, and BMD; however, performance is region specific. abstract_id: PUBMED:28377953 Study of DXA-derived lateral-medial cortical bone thickness in assessing hip fracture risk. The currently available clinical tools have limited accuracy in predicting hip fracture risk in individuals.
We investigated the possibility of using normalized cortical bone thickness (NCBT) estimated from the patient's hip DXA (dual energy X-ray absorptiometry) as an alternative predictor of hip fracture risk. A hip fracture risk index (HFRI) derived from a subject-specific DXA-based finite element model was used as a guideline in constructing the mathematical expression of NCBT. We hypothesized that if NCBT has stronger correlations with HFRI than single risk factors such as areal BMD (aBMD), then NCBT can be a better predictor. The hypothesis was studied using 210 clinical cases, including 60 hip fracture cases, obtained from the Manitoba Bone Mineral Density Database. The results showed that, in general, HFRI has much stronger correlations with NCBT than with any of the single risk factors; the strongest correlation was observed at the superior side of the narrowest femoral neck with r2 = 0.81 (p < 0.001), which is much higher than the correlation with femoral aBMD, r2 = 0.50 (p < 0.001). The capability of aBMD, NCBT, and HFRI in discriminating the hip fracture cases from the non-fracture ones, expressed as the area under the curve with 95% confidence interval, AUC (95% CI), is respectively 0.627 (0.593-0.657), 0.714 (0.644-0.784) and 0.839 (0.787-0.892). The short-term repeatability of aBMD, NCBT, and HFRI, measured by the coefficient of variation (CV, %), was found to be in the range of (0.64-1.22), (1.93-3.41), and (3.10-4.16), respectively. We thus concluded that NCBT is potentially a better predictor of hip fracture risk. abstract_id: PUBMED:38398294 Cortical Thickness Index and Canal Calcar Ratio: A Comparison of Proximal Femoral Fractures and Non-Fractured Femora in Octogenarians to Centenarians. Background: The cortical thickness index (CTI) is a measure of bone quality and it correlates with the risk of proximal femoral fractures. The purpose of this study was to investigate the CTI in femoral neck fractures, trochanteric fractures and non-fractured femora in geriatric patients and to determine whether there is a correlation between the CTI and the presence of a fracture. Methods: One hundred and fifty patients (fifty femoral neck fractures (FNFx), fifty trochanteric fractures (TFx) and fifty non-fractured femora (NFx)) with a mean age of 91 (range 80-104) years were included. Hip radiographs (antero-posterior (ap), lateral) were evaluated retrospectively. Measurements on the proximal femoral inner and outer cortices, including the CTI and Dorr's canal calcar ratio (CCR), were assessed for inter-observer reliability (ICC), differences between fracture types, and correlations between parameters. Results: The mean ap CTI on the affected side was 0.43, 0.45 and 0.55 for FNFx, TFx and NFx, respectively. There was a significant difference in the ap CTI and CCR between the injured and healthy sides for both fracture cohorts (p < 0.001). Patients with FNFx or TFx had significantly lower CTI on both sides compared to the NFx group (p < 0.05). There was no difference for CTI (p = 0.527) or CCR (p = 0.291) when comparing both sides in the NFx group. The mean inter-observer reliability was good to excellent (ICC 0.88). Conclusions: In proximal femoral fractures, the CTI and CCR are reduced compared with those in non-fractured femora. Both parameters are reliable and show a good correlation in geriatric patients. Therefore, especially for geriatric patients, the CTI and CCR may help to predict fracture risk and to counsel patients in daily practice.
abstract_id: PUBMED:30637307 Lower Bone Mineral Density is Associated with Intertrochanteric Hip Fracture. Background: A better understanding of how bone mineral density and vitamin D levels are associated with femoral neck and intertrochanteric hip fractures may help inform healthcare providers. We asked: 1) In patients age ≥ 55 years, is there a difference in quantitative ultrasound of the heel (QUS) t-score between patients with fractures of the femoral neck and those with fractures of the intertrochanteric region, accounting for other factors? 2) In patients age ≥ 55 years, is there a difference in vitamin D level between those with fractures of the femoral neck and those with fractures of the intertrochanteric region, accounting for other factors? 3) Is there an association between vitamin D level and QUS t-score? Methods: In this retrospective cohort study, 1,030 patients were identified using CPT codes for fixation of hip fractures between December 2010 and September 2013. Patients ≥ 55 years of age who underwent operative management for a hip fracture following a fall from standing height were included. Three orthopaedic surgeons categorized fracture type using patient radiographs. Upon hospital admission, QUS t-scores and vitamin D levels were determined. Descriptive statistics, bivariate analyses and multivariable regression were performed. Results: Accounting for potential confounders, patients with lower QUS t-scores were more likely to have intertrochanteric femur fractures than femoral neck fractures. In a bivariate analysis, there was no association between vitamin D level and either fracture type. There was no association between vitamin D level and bone mineral density. Conclusion: Patients with lower bone density who fracture their hips are more likely to fracture in the intertrochanteric region than the femoral neck, but vitamin D levels are unrelated. Awareness of this association emphasizes the importance of bone mineral density screening to assist with intertrochanteric hip fracture prevention. Level Of Evidence: III. abstract_id: PUBMED:28407298 Spatial Differences in the Distribution of Bone Between Femoral Neck and Trochanteric Fractures. There is little knowledge about the spatial distribution differences in volumetric bone mineral density and cortical bone structure at the proximal femur between femoral neck fractures and trochanteric fractures. In this case-control study, a total of 93 women with fragility hip fractures, 72 with femoral neck fractures (mean ± SD age: 70.6 ± 12.7 years) and 21 with trochanteric fractures (75.6 ± 9.3 years), and 50 control subjects (63.7 ± 7.0 years) were included for the comparisons. Differences in the spatial distributions of volumetric bone mineral density, cortical bone thickness, cortical volumetric bone mineral density, and volumetric bone mineral density in a layer adjacent to the endosteal surface were investigated using voxel-based morphometry (VBM) and surface-based statistical parametric mapping (SPM). We compared these spatial distributions between controls and both types of fracture, and between the two types of fracture. Using VBM, we found spatially heterogeneous volumetric bone mineral density differences between control subjects and subjects with hip fracture that varied by fracture type. Interestingly, femoral neck fracture subjects, but not subjects with trochanteric fracture, showed significantly lower volumetric bone mineral density in the superior aspect of the femoral neck compared with controls.
Using surface-based SPM, we found that compared with controls, both fracture types showed thinner cortices in regions in agreement with the type of fracture. Most outcomes of cortical and endocortical volumetric bone mineral density comparisons were consistent with VBM results. Our results suggest: 1) that the spatial distribution of trabecular volumetric bone mineral density might play a significant role in hip fracture; 2) that focal cortical bone thinning might be more relevant in femoral neck fractures; and 3) that areas of reduced cortical and endocortical volumetric bone mineral density might be more relevant for trochanteric fractures in Chinese women. © 2017 American Society for Bone and Mineral Research. Answer: The cortical thickness index (CTI) has been investigated as a parameter to assess bone mineral density (BMD) in geriatric patients with hip fractures. However, the validity of CTI as a measure for BMD in this patient population is not consistent across studies. One study found that while the CTI has sufficient reliability for use in daily practice and showed significant correlation with BMD in patients without hip fractures, no significant correlation between CTI and overall BMD was found in patients with proximal femoral fractures. Therefore, the authors do not recommend CTI as a parameter to assess BMD of the proximal femur in geriatric patients with hip fractures (PUBMED:25801811). Another study suggested that cortical thinning measured from X-ray of the femur identifies additional fracture cases over that identified by a T-score <-2.5, indicating that the femoral cortical index (FCI) could be a useful tool to evaluate bone fragility and to predict fracture risk even in patients with normal and osteopenic BMD (PUBMED:27558243). Further research indicated that the CTI and canal calcar ratio (CCR) are reduced in proximal femoral fractures compared with non-fractured femora in geriatric patients, suggesting that both parameters may help to predict fracture risk in this patient population (PUBMED:38398294). However, another study found that patients with lower bone density who fracture their hips are more likely to fracture in the intertrochanteric region than the femoral neck, but vitamin D levels are unrelated, emphasizing the importance of bone mineral density screening to assist with intertrochanteric hip fracture prevention (PUBMED:30637307). In summary, while some studies suggest that CTI may have a role in assessing bone fragility and predicting fracture risk, its correlation with BMD in geriatric patients with hip fractures is not consistently supported. Therefore, CTI may not be a universally valid parameter for assessing BMD in this patient group, and additional factors such as bone density screening and consideration of fracture type may be important.
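To make the index arithmetic discussed in this entry concrete, here is a minimal illustrative sketch in Python. It assumes the commonly cited radiographic definition of a cortical index (outer diaphyseal width minus medullary canal width, divided by outer width); the abstracts above do not spell out their exact measurement protocols, and all cohort numbers below are invented for illustration, not taken from the cited studies.

```python
# Illustrative sketch only: cortical index per the common radiographic
# definition, with made-up measurements (not data from the cited studies).
import numpy as np
from scipy.stats import pearsonr

def cortical_thickness_index(outer_width_mm, canal_width_mm):
    """CTI = (outer cortical width - medullary canal width) / outer width."""
    return (outer_width_mm - canal_width_mm) / outer_width_mm

# Hypothetical paired measurements for a small cohort.
outer = np.array([32.0, 30.5, 33.2, 29.8, 31.4, 34.0])  # periosteal width, mm
canal = np.array([18.1, 17.9, 15.6, 19.2, 16.8, 14.9])  # endosteal width, mm
bmd   = np.array([0.71, 0.68, 0.84, 0.62, 0.77, 0.90])  # DXA areal BMD, g/cm^2

cti = cortical_thickness_index(outer, canal)
r, p = pearsonr(cti, bmd)  # the kind of r-value reported in PUBMED:25801811
print("CTI:", np.round(cti, 2))
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```

A correlation computed this way should be validated per population: as the entry notes, the same index that tracks BMD in non-fractured femora may lose that association once a proximal femoral fracture is present.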
Instruction: Can MRI predict which patients are most likely to benefit from percutaneous positioning of volume-adjustable balloon devices? Abstracts: abstract_id: PUBMED:16601387 Can MRI predict which patients are most likely to benefit from percutaneous positioning of volume-adjustable balloon devices? Purpose: To assess whether magnetic resonance imaging (MRI) is useful in predicting which patients affected by stress urinary incontinence (SUI) will benefit from a new anti-incontinence therapy named adjustable continence therapy (ACT). Methods: We prospectively evaluated a group of 25 female patients affected by SUI and treated with ACT. Before and after treatment all patients were clinically assessed by physical examination, urodynamic evaluation and pad test. All patients had an MR examination before and 3 and 12 months after ACT surgery to compare the position of the bladder neck in relation to the pubococcygeal line (PCL). Results: 21/25 (84%) patients improved; 16 (64%) of these patients were dry and 5 (20%) significantly improved. Before treatment, the mean PCL distance was significantly different (p < 0.01) between the responsive and the non-responsive groups. Conclusions: MRI provides an effective radiological method to predict the efficacy of the ACT. abstract_id: PUBMED:27441179 The Reverse Thomas Position for Thoracolumbar Fracture Height Restoration: Relative Contribution of Patient Positioning in Percutaneous Balloon Kyphoplasty for Acute Vertebral Compressions. Background: Standard positioning for percutaneous balloon kyphoplasty requires placing a patient prone with supports under the iliac crests and upper thorax. The authors believe that hip hyperextension maximises pelvic anteversion, creating anterior longitudinal ligamentotaxis and thus facilitating restoration of vertebral height. Methods: Radiographic imaging including pre-operative, post-positioning, post balloon tamp inflation and post-operative lateral radiographs was analysed for anterior and posterior column height, wedge angle of the affected vertebra and 3-level Cobb angle in patients with recent fractures of T11-L1. Fracture dimensions of the index vertebra were expressed as a percentage of the analogous dimension of the referent vertebra. Results: From a total of 149 patients, a full imaging sequence was available for 21 cases of vertebral compression fractures. The described positioning technique created a mean anterior column height increase from 68.3% to 75.3% with positioning (p = 0.15), increasing to 82.3% post balloon inflation. Average Cobb and wedge angle improvements of 4.7° (p = 0.004) and 3.6° (p = 0.002) from positioning alone were also recorded. Conclusion: The Reverse Thomas Position is a safe and effective technique for augmenting thoracolumbar fracture height restoration in percutaneous balloon kyphoplasty. abstract_id: PUBMED:25740967 Clinical and economic effectiveness of percutaneous ventricular assist devices for high-risk patients undergoing percutaneous coronary intervention. Background: Comparative effectiveness research (CER) is taking a more prominent role in formalizing hospital treatment protocols and health-care coverage policies by having health-care providers consider the impact of new devices on costs and outcomes. CER balances the need for innovation with fiscal responsibility and evidence-based care.
This study compared the clinical and economic impact of percutaneous ventricular assist devices (pVADs) with intraaortic balloon pumps for high-risk patients undergoing percutaneous coronary intervention (PCI). Methods: This study conducted a review of all comparative randomized controlled trials of pVADs (Impella and TandemHeart) vs IABP for patients undergoing high-risk percutaneous coronary intervention (PCI). A retrospective analysis of the 2010 and 2011 Medicare MEDPAR data files was also performed to compare procedural costs and hospital length of stay (LOS). Readmission rates between the devices were also studied. Results: Based on available trials, there is no significant clinical benefit with pVADs compared to IABP. Use of pVADs is associated with an increased length of intensive care unit stay and a longer total LOS. The incremental budget impact of pVADs was $33,957,839 for the United States hospital system (2010-2011). Conclusions: pVADs are not associated with improved clinical outcomes, reduced hospital length of stay, or reduced readmission rates. Management of high-risk PCI and cardiogenic shock patients with IABP is more cost effective than routine use of pVADs. Use of IABP as initial therapy in high-risk PCI and cardiogenic shock patients may result in annual savings of up to $2.5 billion in incremental costs to the hospital system. abstract_id: PUBMED:37954908 A Comparison of Adjustable Positioning and Free Positioning After Pars Plana Vitrectomy for Rhegmatogenous Retinal Detachment: A Prospective Randomized Controlled Study. Purpose: To compare the effectiveness and safety of adjustable and free postoperative positioning after pars plana vitrectomy (PPV) for rhegmatogenous retinal detachment (RRD). Design: Prospective, randomized controlled study. Methods: A total of 94 eyes with RRD were enrolled from April 2020 to April 2023 and monitored postoperatively for at least 3 months. All patients underwent PPV combined with silicone oil injection or gas tamponade and were randomly divided postoperatively into two groups: an adjustable positioning group and a free positioning group. The success of the outcome was based on the retinal reattachment rate, best corrected visual acuity (BCVA), postoperative complications, and ocular biometric parameters such as anterior chamber depth (ACD) and lens thickness (LT). Results: The initial retinal reattachment rate was 97.9% in the adjustable positioning group and 95.7% in the free positioning group, with no statistically significant difference between the two groups. Similarly, no statistical difference was observed between the two groups in the final BCVA, which was significantly improved compared to the preoperative BCVA. The comparison of the 1-month postoperative ACD and LT with the preoperative values showed no statistically significant differences in the two groups. The rates of complications were not statistically different in the two groups. Conclusion: After treating RRD using PPV, neither the adjustable nor the free postoperative positioning affected the retinal reattachment rate or the incidence of complications. Therefore, our study showed that it is safe and effective to adopt free positioning postoperatively, which may provide more options for patients with RRD undergoing PPV. abstract_id: PUBMED:35252235 A Comparison of Face-Down Positioning and Adjustable Positioning After Pars Plana Vitrectomy for Macular Hole Retinal Detachment in High Myopia.
Purpose: To compare the anatomical and functional outcomes of macular hole retinal detachment (MHRD) in high myopia after pars plana vitrectomy (PPV) with face-down positioning and adjustable positioning. Methods: Fifty-three eyes from 53 patients with MHRD were analyzed in this study. All patients received PPV with silicone oil for tamponade and were then subdivided into 2 groups: 28 were included in a face-down positioning group and 25 were included in the adjustable positioning group. Patients were followed up for at least 6 months. The main outcome was the rate of anatomical macular hole (MH) closure and retinal reattachment. Secondary outcome measures were the best-corrected visual acuity and postoperative complications. Results: There was no significant difference in the rate of MH closure (53.6 vs. 72.0%, p = 0.167) and retinal reattachment (100 vs. 96%, p = 0.472) between the face-down group and adjustable group. Compared with the mean preoperative best-corrected visual acuity (BCVA), the mean postoperative BCVA at the 6-month follow-up improved significantly in both groups (p = 0, both). But there was no significant difference in the mean postoperative BCVA (p = 0.102) and mean BCVA improvement (p = 0.554) at 6 months after surgery between the two groups. There was no significant difference in the incidence of high intraocular pressure (IOP) after surgery between the two groups (53.6 vs. 44%, p = 0.487). There were no other complications that occurred during the follow-up. Conclusion: Adjustable positioning after PPV with silicone oil tamponade for MHRD repair is effective and safe. Face-down positioning does not seem to be necessary for all patients with MHRD. abstract_id: PUBMED:29090379 Response Rates with the Spatz3 Adjustable Balloon. Background: Intragastric balloons (IGBs) have demonstrated efficacy; however, the percentage of "responders" (> 25% estimated weight loss (EWL) or > 10% total body weight loss (TBWL), as suggested by the FDA) has been less reported. The Spatz3 adjustable intragastric balloon (AIGB) extends implantation to 1 year, decreases balloon volume for intolerance, and increases volume for diminishing effect. Aim: The aim of this study is to determine the efficacy/responder rate of the Spatz3 AIGB. Methods: Implantations of Spatz3 in 165 consecutive patients (pts) in 2 centers were retrospectively reviewed. Mean BMI was 35.7, mean weight (wt) was 99.1 kg, and mean balloon volume was 495 ml (400-600 ml). Balloon volume adjustments were offered for intolerance and for wt loss plateau. Results: In total, 165 pts were implanted, yielding mean wt loss of 16.3 kg, 16.4% TBWL, and 67.4% EWL. Response (> 25% EWL; > 10% TBWL) was achieved in 146/165 (88.5%) of patients. Response rates differed for 136 pts with BMI < 40 (91.2%) and 29 pts with BMI > 40 (69%). Down adjustments in 20 patients (mean - 150 ml) allowed 16/20 (80%) to continue IGB therapy. Up adjustments in 64 patients (mean 5.4 months; mean + 260 ml) yielded additional mean wt loss of 5.7 kg. One gastric perforation (0.6%) occurred in a patient who experienced abdominal pain for 2 weeks. Five patients with small ulcers did not require balloon extraction. Conclusions: (1) Within the limitations of a retrospective review, the Spatz3 balloon appears to be an effective wt loss balloon with better response rates in BMI < 40. (2) Up adjustments yielded a mean 5.7 kg extra wt loss. (3) Down adjustments alleviated early intolerance in 80% of patients.
(4) These two adjustment functions may be instrumental in yielding a responder rate of 88.5%. abstract_id: PUBMED:32619112 Efficacy of and risk factors for percutaneous balloon compression for trigeminal neuralgia in elderly patients. Objective: To investigate the efficacy and safety of percutaneous balloon compression (PBC) for the treatment of trigeminal neuralgia in elderly patients. Methods: We retrospectively analysed data of 105 elderly patients with primary trigeminal neuralgia who were over 70 years of age and underwent percutaneous balloon compression using anatomic positioning and imaging guidance from January 2019 to November 2019. Results: The immediate cure rate of pain in this group of patients was 97.1% (Barrow Neurological Institute (BNI) pain scores: class I and II; numbness score: class II). Postoperative keratitis was reported in 1 patient, masticatory muscle weakness and muscle atrophy in 1 patient, herpes labialis in 8 patients and lacunar infarction in 2 patients. Facial numbness and decreased sensation occurred in patients with significant pain relief. No serious complications were reported. There was no statistically significant difference in efficacy between the short compression and long compression time groups. Conclusion: PBC is a safe and effective approach to treat trigeminal neuralgia. abstract_id: PUBMED:37674171 Impact of MRI on target volume definition in head and neck cancer patients. Background: Target volume definition for curative radiochemotherapy in head and neck cancer is crucial since the predominant recurrence pattern is local. Additional diagnostic imaging like MRI is increasingly used, yet it is usually hampered by different patient positioning compared to radiotherapy. In this study, we investigated the impact of diagnostic MRI in treatment position for target volume delineation. Methods: We prospectively analyzed patients who were suitable and agreed to undergo an MRI in treatment position with immobilization devices prior to radiotherapy planning from 2017 to 2019. Target volume delineation for the primary tumor was first performed using all available information except for the MRI and subsequently with additional consideration of the co-registered MRI. The derived volumes were compared by subjective visual judgment and by quantitative mathematical methods. Results: Sixteen patients were included and underwent the planning CT, MRI and subsequent definitive radiochemotherapy. In 69% of the patients, there were visually relevant changes to the gross tumor volume (GTV) by use of the MRI. In 44%, the GTV_MRI would not have been covered completely by the planning target volume (PTV) of the CT-only contour. Yet, median Hausdorff and DSI values did not reflect these differences. The 3-year local control rate was 94%. Conclusions: Adding a diagnostic MRI in RT treatment position is feasible and results in relevant changes in target volumes in the majority of patients. abstract_id: PUBMED:25045075 Review of MRI positioning devices for guiding focused ultrasound systems. Background: This article contains a review of positioning devices that are currently used in the area of magnetic resonance imaging (MRI) guided focused ultrasound surgery (MRgFUS). Methods: The paper includes an extensive review of literature published since the first prototype system was invented in 1991. Results: The technology has grown into a fast developing area with application to any organ accessible to ultrasound.
The initial design operated using hydraulic principles, while the latest technology incorporates piezoelectric motors. Although there were initial fears regarding MRI safety, in recent years the deployment of MR-safe positioning devices in FUS has become routine. Many of these positioning devices are now undergoing testing in clinical trials. Conclusion: Existing MRgFUS systems have been utilized mostly in oncology (fibroids, brain, liver, kidney, bone, pancreas, eye, thyroid, and prostate). It is anticipated that, in the near future, there will be a positioning device for every organ that is accessible by focused ultrasound. abstract_id: PUBMED:35904015 Diffusion of Percutaneous Ventricular Assist Devices in US Markets. Background: Percutaneous ventricular assist devices (PVADs) have been replacing intra-aortic balloon pumps for hemodynamic support during percutaneous coronary intervention (PCI), even though data supporting a benefit for hard clinical end points remain limited. We evaluated the diffusion of PVADs across US markets and examined the association of market utilization of PVADs with mortality and cost. Methods: Using the 2013 to 2019 Medicare data, we identified all patients aged ≥65 years who underwent PCI with either a PVAD or intra-aortic balloon pump. We used hospital referral regions to define regional health care markets and categorized them into quartiles based on the proportional use of PVADs during PCI. Multilevel models were constructed to determine the association of patient, hospital, and market factors with utilization of PVADs and the association of PVAD utilization with 30-day mortality and cost. Results: A total of 79 176 patients underwent PCI with either an intra-aortic balloon pump (47 514 [60.0%]) or a PVAD (31 662 [40.0%]). The proportion of PCI procedures with a PVAD increased over time (17% in 2013 to 57% in 2019; P for trend, <0.001), such that PVADs overtook intra-aortic balloon pumps for hemodynamic support during PCI in 2018. There was a large variation in PVAD utilization across markets (range, 0%-85%), which remained unchanged after adjustment for patient characteristics (median odds ratio, 2.05 [95% CI, 1.91-2.17]). The presence of acute myocardial infarction, cardiogenic shock, and emergent status was associated with 45% to 50% lower odds of PVAD use, suggesting that PVADs were less likely to be used in the sickest patients. Greater utilization of PVADs at the market level was not associated with lower mortality but was associated with higher cost. Conclusions: Although utilization of PVADs for PCI continues to increase, there is large variation in PVAD utilization across markets. Greater market utilization of PVADs was not associated with lower mortality but was associated with higher cost. Answer: Yes, MRI can predict which patients are most likely to benefit from percutaneous positioning of volume-adjustable balloon devices. A study assessing the use of magnetic resonance imaging (MRI) to predict the efficacy of adjustable continence therapy (ACT) for patients affected by stress urinary incontinence (SUI) found that MRI is an effective radiological method for this purpose. The study involved a prospective evaluation of 25 female patients treated with ACT, where all patients underwent an MRI examination before and 3 and 12 months after ACT surgery.
The position of the bladder neck in relation to the pubococcygeal line (PCL) was compared, and the results showed that the mean PCL distance before treatment was significantly different between the responsive and the non-responsive groups. This suggests that MRI can be used to predict which patients will benefit from the ACT, with 84% of patients in the study showing improvement after treatment (PUBMED:16601387).
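To illustrate the kind of analysis behind such a prediction, below is a minimal sketch, assuming invented bladder-neck-to-PCL distances rather than the study's data; the direction of the effect (responders having smaller distances) is likewise an assumption, since the abstract reports only that the group means differed significantly (p < 0.01).

```python
# Illustrative sketch only: group comparison and a simple discrimination check
# for a single MRI-derived predictor. All numbers are invented.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score

# Hypothetical pre-treatment bladder-neck-to-PCL distances (mm).
responders     = np.array([12.1, 10.8, 13.5, 11.9, 12.7, 10.2])
non_responders = np.array([18.4, 17.2, 19.1, 16.8])

t, p = ttest_ind(responders, non_responders)
print(f"t = {t:.2f}, p = {p:.4f}")

# How well does the distance alone separate responders from non-responders?
y_true  = np.r_[np.ones_like(responders), np.zeros_like(non_responders)]
y_score = -np.r_[responders, non_responders]  # assumed: smaller distance -> responder
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
```

With a measurement like this in hand, a clinic could set a provisional distance cut-off for counselling patients, though any such threshold would need prospective validation.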
Instruction: Does delayed gastric emptying shorten the H pylori eradication period? Abstracts: abstract_id: PUBMED:17072954 Does delayed gastric emptying shorten the H pylori eradication period? A double blind clinical trial. Aim: To evaluate the gastric emptying inhibitory effects of sugar and levodopa on the H pylori eradication period. Methods: A total of 139 consecutive patients were randomized into 6 groups. Participants with peptic ulcer disease or non-ulcer dyspepsia not responding to other medications, who were H pylori-positive by rapid urease test (RUT) or histology, were included. All groups were pretreated with omeprazole for 2 d and then treated with a quadruple therapy regimen (omeprazole, bismuth, tetracycline and metronidazole); all drugs were given twice daily. Groups 1 and 2 were treated for 3 d, groups 3, 4 and 5 for 7 d, and group 6 for 14 d. Groups 1 to 4 received sugar in the form of 10% sucrose syrup. Levodopa was prescribed for groups 1 and 3. Patients in groups 2 and 4 were given placebo for levodopa and groups 5 and 6 received placebos for both sugar and levodopa. Upper endoscopy and biopsies were carried out before treatment and two months after treatment. Eradication of H pylori was assessed by RUT and histology 8 wk later. Results: Thirty patients were excluded. Per-protocol analysis showed successful eradication in 53% in group 1, 56% in group 2, 58% in group 3, 33.3% in group 4, 28% in group 5, and 53% in group 6. Eradication rate, patient compliance and satisfaction were not significantly different between the groups. Conclusion: It seems that adding sugar or levodopa or both to anti-H pylori eradication regimens may lead to a shorter duration of treatment. abstract_id: PUBMED:29564060 Comparison of Helicobacter pylori eradication regimens in patients with end stage renal disease. Aim: The aim of this study was to compare Helicobacter pylori (HP) eradication regimens in patients with end stage renal disease. Background: In patients undergoing hemodialysis, the pathologic changes seen in the stomach may be the result of high serum levels of gastrin, delayed gastric emptying or HP infection. Methods: Our study was a randomized clinical trial in which 120 patients with ESRD (patients undergoing hemodialysis) and confirmed HP infection were divided into four groups receiving 2-week eradication regimens: Group I: LCA (lansoprazole 30 mg BD, clarithromycin 250 mg BD, amoxicillin 500 mg BD), Group II: LCM (lansoprazole 30 mg BD, clarithromycin 250 mg BD, metronidazole 500 mg BD), Group III: LCAM (lansoprazole 30 mg BD, clarithromycin 250 mg BD, amoxicillin 500 mg BD, metronidazole 500 mg BD) and Group IV: Sequential (lansoprazole 30 mg BD for two weeks; first week: amoxicillin 500 mg BD and second week: clarithromycin 250 mg BD, metronidazole 500 mg BD). Six weeks after treatment, a urea breath test (UBT) was performed for all patients. Results: The mean age of patients was 43.1±11.2 years. 55.8% of patients were male. The success rates of HP eradication in the 4 groups were 76.7%, 70%, 90% and 90%, respectively. HP eradication rates were not statistically different among the regimens (p=0.11). There were no significant differences among the groups regarding demographic and anthropometric variables. Conclusion: The results showed there was no significant difference between the success rates of HP eradication regimens for ESRD patients.
Among the regimens achieving the 90% eradication rate, the sequential regimen is the best choice, since it requires fewer medications and carries a lower risk of side effects and drug interactions. abstract_id: PUBMED:18220610 The role of H. pylori infection in diabetes. Helicobacter pylori [H. pylori], one of the most common chronic infections worldwide, is the main etiologic agent of gastritis, peptic ulcer and gastric cancer. Patients with diabetes mellitus are often affected by chronic infections. Many studies have evaluated the prevalence of H. pylori infection in diabetic patients and the possible role of this condition in their metabolic control. Some studies found a higher prevalence of the infection in diabetic patients and reduced glycaemic control, while others did not support any correlation between metabolic control and H. pylori infection. There are only a few studies on the eradication rate of H. pylori in diabetic patients. Most of these papers concluded that standard antibiotic therapy allows a significantly lower H. pylori eradication rate than is observed in control groups matched for sex and age. Changes in the microvasculature of the stomach with a possible reduction of antibiotic absorption, the presence of gastroparesis and the frequent use of antibiotics for recurrent bacterial infections with the development of resistant strains could be some of the mechanisms underlying this phenomenon. A quadruple therapy may be used as the second-line approach with a good eradication rate, even if an antibiotic selected according to a specific H. pylori antibiogram is considered the gold standard in these patients. As regards the gastrointestinal symptoms of H. pylori-infected individuals, many studies showed that they are as frequent in patients with type 1 diabetes as in the general population. The incidence of H. pylori recurrence after 12 months of follow-up is significantly higher in type 1 diabetic subjects when compared to controls. Reduced lymphocyte activity, neutrophil dysfunction with failure of chemotaxis and a possible reservoir of H. pylori in dental plaque may explain the higher rate of re-infection in these patients. abstract_id: PUBMED:9732919 Reversal of fundic atrophy after eradication of Helicobacter pylori. Objectives: We sought to evaluate the effect of Helicobacter pylori eradication in patients with fundic atrophic gastritis. Methods: Acid secretion, gastric emptying, and histology were evaluated in 20 patients with fundic atrophic gastritis and H. pylori infection. After investigation, 10 patients (Group 1) received an eradicating treatment and 10 (Group 2) did not receive any treatment. One year later, the baseline investigations were repeated. Subsequently, patients in Group 2 received the same treatment given to patients in Group 1 and were reevaluated 12 months later. A further follow-up was performed in both groups 36 months after the treatment. Results: At 1-yr follow-up, all the patients in Group 1 were H. pylori negative whereas all the patients in Group 2 were still infected. In Group 1, there was a significant improvement of both fundic atrophy and acid secretion, compared with baseline (p < 0.01). In Group 2, no substantial modification of either histological or functional parameters was observed at the first follow-up; conversely, a significant (p < 0.01) improvement of fundic atrophy and acid secretion was detected in these patients 12 months after eradication of the bacterium.
The histological pattern remained unchanged at 36 months of follow-up in both groups. Gastric emptying remained, on average, unaffected by the treatment; however, three patients with delayed gastric emptying at entry had normal gastric emptying after eradication of H. pylori. Conclusions: Our data suggest that mucosal atrophy can be reduced or even reversed by the eradication of H. pylori, and this is associated with a recovery of gastric function. abstract_id: PUBMED:8574747 Influence of Helicobacter pylori infection and the effects of its eradication on gastric emptying in non-ulcerative dyspepsia. Aim: The aim of the present study was to clarify the effects of Helicobacter pylori infection and its eradication on gastric emptying. Subjects And Methods: Out of a total of 52 patients with non-ulcerative dyspepsia, 34 H. pylori-positive patients were enrolled. Antimicrobial drugs for the eradication of H. pylori were administered to 19 out of the 34 H. pylori-positive patients. Gastric emptying was evaluated according to the acetaminophen method. Inflammatory changes and intracellular periodic acid-Schiff-positive substances in the antral mucosa were examined in biopsy specimens. Results: Although gastric emptying was significantly prolonged in the patients with non-ulcerative dyspepsia compared with the control group (P < 0.01), there was no difference in gastric emptying between H. pylori-positive and -negative patients, with all patients showing significantly slower gastric emptying than the control group. The H. pylori eradication rate was 58% (11 out of 19) and gastric emptying improved significantly in seven patients whose infection was eradicated and whose dyspeptic symptoms disappeared. The ammonia concentration in gastric juice, inflammatory changes in the gastric mucosa and the index of periodic acid-Schiff-positive substances improved significantly when H. pylori was successfully eradicated compared with patients in whom eradication was unsuccessful. As gut hormones may affect gastroduodenal motility associated with H. pylori infection, we also studied the levels of serum gastrin and cholecystokinin. In the patients whose infection was eradicated, serum gastrin decreased significantly, but the cholecystokinin level did not change significantly, although there was a non-significant trend for cholecystokinin to increase. Conclusions: These results suggest that delayed gastric emptying is partly associated with H. pylori infection and that the infection may contribute to the development of non-ulcerative dyspepsia. abstract_id: PUBMED:10540044 The effect of Helicobacter pylori eradication therapy on gastric antral myoelectrical activity and gastric emptying in patients with non-ulcer dyspepsia. Background: Dysmotility of the gastroduodenal region and delayed gastric emptying have been considered to play roles in non-ulcer dyspepsia (NUD). Helicobacter pylori-induced inflammation of the gastric mucosa may affect gastric motility. Aim: To evaluate the effects of H. pylori eradication therapy on gastrointestinal motility and symptoms in NUD patients. Methods: Forty-six NUD patients were examined for gastric emptying, antral myoelectrical activity, H. pylori infection, and symptom scores. In H. pylori-positive NUD patients, gastric emptying, antral myoelectrical activity, and symptom scores were also analysed 2 months after cure of H. pylori infection. Results: Sixty-seven per cent of NUD patients were H. pylori-positive.
Both abnormal gastric emptying and antral myoelectrical activity were observed in NUD patients. H. pylori-positive NUD patients were divided into three groups according to their gastric emptying: the delayed group, the normal group, and the rapid group. In the delayed and rapid gastric emptying groups, the emptying and symptom scores were improved significantly by eradication. There was no improvement in symptom scores in the normal gastric emptying NUD group by the eradication therapy. Conclusions: Disturbed gastric emptying and antral myoelectrical activity play roles in NUD. H. pylori-induced disturbed gastric emptying may cause some NUD symptoms. Gastric emptying and symptom scores are improved by H. pylori eradication therapy in NUD patients with disturbed gastric emptying; H. pylori eradication therapy is effective in H. pylori-positive NUD patients with disturbed gastric emptying. abstract_id: PUBMED:10571604 The effect of Helicobacter pylori eradication therapy on gastric antral myoelectrical activity and gastric emptying in patients with non-ulcer dyspepsia. Background: Dysmotility of the gastroduodenal region and delayed gastric emptying have been considered to play roles in non-ulcer dyspepsia. In addition, it has been reported that Helicobacter pylori-induced inflammation of the gastric mucosa may affect gastric motility. Aim: To evaluate the effects of H. pylori eradication therapy on gastrointestinal motility and symptoms in non-ulcer dyspepsia patients. Methods: A total of 46 non-ulcer dyspepsia patients were examined for gastric emptying, antral myoelectrical activity, H. pylori infection, and symptom scores. In H. pylori-positive non-ulcer dyspepsia patients, gastric emptying, antral myoelectrical activity, and symptom scores were also analysed 2 months after being cured of H. pylori infection. Results: A total of 67.4% of the non-ulcer dyspepsia patients were H. pylori-positive. Both abnormal gastric emptying and antral myoelectrical activity were observed in non-ulcer dyspepsia patients. H. pylori-positive non-ulcer dyspepsia patients were divided into three groups according to their gastric emptying: the delayed gastric emptying group, the normal gastric emptying group, and the rapid gastric emptying group. In the delayed and rapid gastric emptying groups, the gastric emptying and symptom scores were improved significantly by the eradication therapy. However, there was no improvement in symptom scores in the normal gastric emptying non-ulcer dyspepsia group by the eradication therapy. Conclusions: Disturbed gastric emptying and antral myoelectrical activity play roles in non-ulcer dyspepsia. Helicobacter pylori infection, inducing disturbed gastric emptying, may cause some non-ulcer dyspepsia symptoms. Gastric emptying and symptom scores are improved by H. pylori eradication therapy in non-ulcer dyspepsia patients with disturbed gastric emptying. H. pylori eradication therapy is effective in H. pylori-positive non-ulcer dyspepsia patients with disturbed gastric emptying. abstract_id: PUBMED:29133755 Analysis of the Relationship between Helicobacter pylori Infection and Diabetic Gastroparesis. Background: Whether Helicobacter pylori infection is associated with diabetic gastroparesis (DGP) is unclear. This study aimed to investigate the potential correlation between H. pylori infection and DGP.
Methods: In this study, 163 patients with type 2 diabetes mellitus and 175 nondiabetic patients who were treated in our department were divided into DGP, simple diabetes, non-DGP (NDG), and normal groups based on their conditions. The H. pylori infection rate in each group was calculated. H. pylori eradication therapy was performed for patients with H. pylori infection in each group. The eradication rates were compared between the groups, and the improvements in gastroparesis-associated symptoms were compared before and after treatment in patients with DGP. Results: The H. pylori infection rate was 74.6% in the DGP group, which was significantly higher than that in the simple diabetes (51.1%, P < 0.01), NDG (57.7%, P < 0.05), and normal groups (48.0%, P < 0.01). With increased disease course, the incidence of DGP and the H. pylori infection rate gradually increased (P < 0.05). In the DGP group, the incidences of upper abdominal pain and distention, early satiety, and anorexia were 75.5%, 66.0%, and 67.9%, respectively, before eradication treatment; and 43.4%, 35.8%, and 39.6%, respectively, after eradication treatment, and the difference was statistically significant (P < 0.01). In patients with DGP with successful H. pylori eradication, the number of barium strips discharged after eradication was 5.9 ± 1.0, which was significantly larger than that before treatment (4.1 ± 0.7, P < 0.01). In addition, the number of barium strips discharged was significantly larger in patients with DGP with successful H. pylori eradication than those with failed H. pylori eradication (P < 0.01). Conclusions: DGP development might be associated with H. pylori infection. H. pylori eradication can effectively improve dyspepsia-associated symptoms and delayed gastric emptying in patients with DGP. abstract_id: PUBMED:19129375 Role of gut-brain axis in persistent abnormal feeding behavior in mice following eradication of Helicobacter pylori infection. Bacterial infection can trigger the development of functional GI disease. Here, we investigate the role of the gut-brain axis in gastric dysfunction during and after chronic H. pylori infection. Control and chronically H. pylori-infected Balb/c mice were studied before and 2 mo after bacterial eradication. Gastric motility and emptying were investigated using videofluoroscopy image analysis. Gastric mechanical viscerosensitivity was assessed by cardioautonomic responses to distension. Feeding patterns were recorded by a computer-assisted system. Plasma leptin, ghrelin, and CCK levels were measured using ELISA. IL-1beta, TNF-alpha, proopiomelanocortin (POMC), and neuropeptide Y mRNAs were assessed by in situ hybridizations on frozen brain sections. Gastric inflammation was assessed by histology and immunohistochemistry. As shown previously, H. pylori-infected mice ate more frequently than controls but consumed less food per bout, maintaining normal body weight. Abnormal feeding behavior was accompanied by elevated plasma ghrelin and postprandial CCK, higher TNF-alpha (median eminence), and lower POMC (arcuate nucleus) mRNA. Infected mice displayed delayed gastric emptying and visceral hypersensitivity. Eradication therapy normalized gastric emptying and improved gastric sensitivity but had no effect on eating behavior. This was accompanied by persistently increased TNF-alpha in the brain and gastric CD3(+) T-cell counts. In conclusion, chronic H.
pylori infection in mice alters gastric emptying and mechanosensitivity, which improve after bacterial eradication. A feeding pattern reminiscent of early satiety persists after H. pylori eradication and is accompanied by increased TNF-alpha in the brain. The results support a role for altered gut-brain pathways in the maintenance of postinfective gut dysfunction. abstract_id: PUBMED:12144568 Review article: Helicobacter pylori infection and gastric outlet obstruction - prevalence of the infection and role of antimicrobial treatment. The prevalence of Helicobacter pylori infection in peptic ulcer disease complicated by gastric outlet obstruction seems to be, overall, lower than that reported in non-complicated ulcer disease, with a mean value of 69%. However, H. pylori infection rates in various studies range from 33% to 91%, suggesting that differences in variables, such as the number and type of diagnostic methods used or the frequency of non-steroidal anti-inflammatory drug intake, may be responsible for the low prevalence reported in some studies. The resolution of gastric outlet obstruction after the eradication of H. pylori has been demonstrated by several studies. It seems that the beneficial effect of H. pylori eradication on gastric outlet obstruction is observed early, just a few weeks after the administration of antimicrobial treatment. Furthermore, this favourable effect seems to remain during long-term follow-up. Nevertheless, gastric outlet obstruction does not always resolve after H. pylori eradication treatment and an explanation for the failures is not completely clear, non-steroidal anti-inflammatory drug intake perhaps playing a major role in these cases. Treatment should start pharmacologically with the eradication of H. pylori even when stenosis is considered to be fibrotic, or when there is some gastric stasis. In summary, H. pylori eradication therapy should be considered as the first step in the treatment of duodenal or pyloric H. pylori-positive stenosis, whereas dilation or surgery should be reserved for patients who do not respond to such medical therapy. Answer: The available abstracts do not provide direct evidence that delayed gastric emptying shortens the H. pylori eradication period. Instead, they discuss various aspects of H. pylori infection, its treatment, and its relationship with gastric emptying. For instance, one study aimed to evaluate the gastric emptying inhibitory effects of sugar and levodopa on the H. pylori eradication period but concluded that adding sugar or levodopa or both to anti-H. pylori eradication regimens may lead to a shorter duration of treatment, although the results were not statistically significant (PUBMED:17072954). This suggests that the study did not find a clear link between delayed gastric emptying and a shortened eradication period. Another study examined the effects of H. pylori eradication on gastric emptying in non-ulcer dyspepsia patients and found that gastric emptying improved significantly in patients whose infection was eradicated and whose dyspeptic symptoms disappeared (PUBMED:8574747). However, this does not directly address whether delayed gastric emptying itself shortens the eradication period. Similarly, other studies investigated the effects of H. pylori eradication on gastric motility and symptoms in non-ulcer dyspepsia patients, with some showing improvement in gastric emptying and symptoms after eradication therapy (PUBMED:10571604, PUBMED:19129375).
One study specifically noted that gastric emptying and symptom scores were improved by H. pylori eradication therapy in non-ulcer dyspepsia patients with disturbed gastric emptying (PUBMED:10540044). In the context of diabetic gastroparesis, one study found that H. pylori eradication can effectively improve dyspepsia-associated symptoms and delayed gastric emptying in patients with diabetic gastroparesis (PUBMED:29133755). However, this does not imply that delayed gastric emptying itself would shorten the eradication period. Overall, while there is evidence that H. pylori eradication can improve gastric emptying and related symptoms, the abstracts provided do not support the notion that delayed gastric emptying shortens the H. pylori eradication period.
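As a side note on the regimen comparison in PUBMED:29564060, the reported non-significant difference can be reproduced with a standard chi-square test, assuming the 120 patients were split evenly across the four arms (30 per arm, as the four-group design implies); the counts below are back-calculated from the reported rates and are therefore an approximation.

```python
# Sketch of the regimen comparison in PUBMED:29564060; counts back-calculated
# from the reported eradication rates (76.7%, 70%, 90%, 90%) assuming 30/arm.
from scipy.stats import chi2_contingency

table = [
    [23, 7],  # LCA:        76.7% eradication
    [21, 9],  # LCM:        70%
    [27, 3],  # LCAM:       90%
    [27, 3],  # Sequential: 90%
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")  # p ~ 0.11, as reported
```

That the test comes out non-significant despite a 20-percentage-point spread between arms is largely a sample-size effect, which is worth keeping in mind when reading the conclusion that the four regimens performed equally.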
Instruction: The oxidation states of DJ-1 dictate the cell fate in response to oxidative stress triggered by 4-hpr: autophagy or apoptosis? Abstracts: abstract_id: PUBMED:26645690 Trypanosome Lytic Factor-1 Initiates Oxidation-stimulated Osmotic Lysis of Trypanosoma brucei brucei. Human innate immunity against the veterinary pathogen Trypanosoma brucei brucei is conferred by trypanosome lytic factors (TLFs), against which human-infective T. brucei gambiense and T. brucei rhodesiense have evolved resistance. TLF-1 is a subclass of high density lipoprotein particles defined by two primate-specific apolipoproteins: the ion channel-forming toxin ApoL1 (apolipoprotein L1) and the hemoglobin (Hb) scavenger Hpr (haptoglobin-related protein). The role of oxidative stress in the TLF-1 lytic mechanism has been controversial. Here we show that oxidative processes are involved in TLF-1 killing of T. brucei brucei. The lipophilic antioxidant N,N'-diphenyl-p-phenylenediamine protected TLF-1-treated T. brucei brucei from lysis. Conversely, lysis of TLF-1-treated T. brucei brucei was increased by the addition of peroxides or thiol-conjugating agents. Previously, the Hpr-Hb complex was postulated to be a source of free radicals during TLF-1 lysis. However, we found that the iron-containing heme of the Hpr-Hb complex was not involved in TLF-1 lysis. Furthermore, neither high concentrations of transferrin nor knock-out of cytosolic lipid peroxidases prevented TLF-1 lysis. Instead, purified ApoL1 was sufficient to induce lysis, and ApoL1 lysis was inhibited by the antioxidant DPPD. Swelling of TLF-1-treated T. brucei brucei was reminiscent of swelling under hypotonic stress. Moreover, TLF-1-treated T. brucei brucei became rapidly susceptible to hypotonic lysis. T. brucei brucei cells exposed to peroxides or thiol-binding agents were also sensitized to hypotonic lysis in the absence of TLF-1. We postulate that ApoL1 initiates osmotic stress at the plasma membrane, which sensitizes T. brucei brucei to oxidation-stimulated osmotic lysis. abstract_id: PUBMED:1907636 Regulation of the oxidative stress response by the hpr gene in Bacillus subtilis. Bacillus subtilis mutants with null mutations in the spo0A gene are resistant to oxidative stress during the exponential phase of growth. This resistance phenotype can be suppressed by mutations in the abrB gene or in the hpr gene. Both of these gene products are negative regulatory proteins which are over-produced in a spo0A strain, and the over-production of the hpr gene product results from over-production of the abrB gene product. The results suggested that the resistance to oxidative stress in a spo0A strain is due to the lack of a protein directly controlled by the hpr negative regulator. Other mutations in the spo0A gene conferring resistance to ethanol stress (eth) or suppressors of sporulation phenotypes (sof) had no effect on the sensitivity to oxidative stress of strains bearing them. abstract_id: PUBMED:32385381 Label-free plasma proteomics identifies haptoglobin-related protein as candidate marker of idiopathic pulmonary fibrosis and dysregulation of complement and oxidative pathways. Idiopathic pulmonary fibrosis (IPF) is a lung parenchymal disease of unknown cause usually occurring in older adults. It is a chronic and progressive condition with a poor prognosis, and diagnosis is largely clinical. Currently, there exist few biomarkers that can predict patient outcome or response to therapies.
Given this lack of markers, the need for novel markers for the detection and monitoring of IPF is paramount. We have performed label-free plasma proteomics of thirty-six individuals, 17 of whom had confirmed IPF. Proteomics data were analyzed by volcano plot, hierarchical clustering, partial least squares discriminant analysis (PLS-DA) and Ingenuity pathway analysis. The overlap of univariate and multivariate statistical analyses identified haptoglobin-related protein as a possible marker of IPF when compared to control samples (area under the curve 0.851, ROC analysis). LXR/RXR activation and complement activation pathways were enriched in t-test-significant proteins, and oxidative regulators, complement proteins and protease inhibitors were enriched in PLS-DA-significant proteins. Our pilot study points towards aberrations in complement activation and oxidative damage in IPF patients and provides haptoglobin-related protein as a new candidate biomarker of IPF. abstract_id: PUBMED:30712426 Urine Haptoglobin and Haptoglobin-Related Protein Predict Response to Spironolactone in Patients With Resistant Hypertension. Resistant hypertension prevalence is progressively increasing, and prolonged exposure to suboptimal blood pressure control results in higher cardiovascular risk and end-organ damage. Among various antihypertensive agents, spironolactone seems the most effective choice to treat resistant hypertension once triple therapy including a diuretic fails. However, success in blood pressure control is not guaranteed, adverse effects are not negligible, and no clinical tools are available to predict a patient's response. Complementary to our previous study of resistant hypertension metabolism, here we investigated urinary proteome changes with potential capacity to predict response to spironolactone. Twenty-nine resistant hypertensives were included. A prospective study was conducted and basal urine was collected before spironolactone administration. Patients were classified as responders or nonresponders in terms of blood pressure control. Protein quantitation was performed by liquid chromatography-mass spectrometry; ELISA and targeted mass spectrometry analysis were performed for confirmation. Among 3310 identified proteins, HP (haptoglobin) and HPR (haptoglobin-related protein) showed the most significant variations, with increased levels in nonresponders compared with responders before drug administration (variation rate, 5.98 and 7.83, respectively). Protein-coordinated responses were also evaluated by functional enrichment analysis, finding oxidative stress, chronic inflammatory response, blood coagulation, complement activation, and regulation of focal adhesions as physiopathological mechanisms in resistant hypertension. In conclusion, protein changes able to predict patients' response to spironolactone in basal urine were here identified for the first time. These data, once further confirmed, will support clinical decisions on patients' management while contributing to optimizing the rate of control of resistant hypertensives with spironolactone. abstract_id: PUBMED:26206798 Genetic variants and cell-free hemoglobin processing in sickle cell nephropathy. Intravascular hemolysis and hemoglobinuria are associated with sickle cell nephropathy. ApoL1 is involved in cell-free hemoglobin scavenging through association with haptoglobin-related protein. APOL1 G1/G2 variants are the strongest genetic predictors of kidney disease in the general African-American population.
A single report associated APOL1 G1/G2 with sickle cell nephropathy. In 221 patients with sickle cell disease at the University of Illinois at Chicago, we replicated the finding of an association of APOL1 G1/G2 with proteinuria, specifically with urine albumin concentration (β=1.1, P=0.003), observed an even stronger association with hemoglobinuria (OR=2.5, P=4.3×10(-6)), and also replicated the finding of an association with hemoglobinuria in 487 patients from the Walk-Treatment of Pulmonary Hypertension and Sickle cell Disease with Sildenafil Therapy study (OR=2.6, P=0.003). In 25 University of Illinois sickle cell disease patients, concentrations of urine kidney injury molecule-1 correlated with urine cell-free hemoglobin concentrations (r=0.59, P=0.002). Exposing human proximal tubular cells to increasing cell-free hemoglobin led to increasing concentrations of supernatant kidney injury molecule-1 (P=0.01), reduced viability (P=0.01) and induction of HMOX1 and SOD2. HMOX1 rs743811 associated with chronic kidney disease stage (OR=3.0, P=0.0001) in the University of Illinois cohort and end-stage renal disease (OR=10.0, P=0.0003) in the Walk-Treatment of Pulmonary Hypertension and Sickle cell Disease with Sildenafil Therapy cohort. Longer HMOX1 GT-tandem repeats (>25) were associated with lower estimated glomerular filtration rate in the University of Illinois cohort (P=0.01). Our findings point to an association of APOL1 G1/G2 with kidney disease in sickle cell disease, possibly through increased risk of hemoglobinuria, and associations of HMOX1 variants with kidney disease, possibly through reduced protection of the kidney from hemoglobin-mediated toxicity. abstract_id: PUBMED:31140594 The effects of hydroxycarbamide on the plasma proteome of children with sickle cell anaemia. We investigated changes in the plasma proteome of children with sickle cell anaemia (SCA) associated with hydroxycarbamide (HC) use, to further characterize the actions of HC. Fifty-one children with SCA consented to take part in this study. Eighteen were taking HC at a median dose of 22 mg/kg, and 33 were not on HC. Plasma was analysed using an unbiased proteomic approach and a panel of 92 neurological biomarkers. HC was associated with increased haemoglobin (Hb) (89·8 vs. 81·4 g/l, P = 0·007) and HbF (6·7 vs. 15·3%, P < 0·001). Seventeen proteins were decreased on HC compared to controls by a factor of <0·77, and six proteins showed a >1·3-fold increase in concentration. HC use was associated with reduced haemolysis (lower α, β, δ globin chains, haptoglobin-related protein, complement C9; higher haemopexin), reduced inflammation (lower α-1-acid glycoprotein, CD5 antigen-like protein, ceruloplasmin, factor XII, immunoglobulins, cysteine-rich secretory protein 3, vitamin D-binding protein) and decreased activation of coagulation (lower factor XII, carboxypeptidase B2, platelet basic protein). There was a significant correlation between the increase in HbF% on HC and haemopexin levels (r = 0·603, P = 0·023). This study demonstrated three ways in which HC may be beneficial in SCA, and identified novel proteins that may be useful to monitor therapeutic response. abstract_id: PUBMED:9673399 Cell cycle inhibition in human BE-13 T cell leukemia cells by haptoglobin-related (HPR) antisense cDNA. We have recently cloned and sequenced a human haptoglobin-related cDNA. Hpr expression was found in various tumor cell lines.
To determine whether the haptoglobin-related protein (hpr) affects the growth of an established T-cell leukemia cell line, an Hpr antisense expression vector that specifically reduces hpr production was constructed. The vector was transfected into BE-13 cells, an established T-cell leukemia cell line in which Hpr is expressed. Three stable clones were isolated in which hpr protein expression was reduced. These established cell lines proliferated more slowly than vector-transfected cells in proportion to Hpr antisense mRNA expression and the reduction in hpr protein production. Following a BrdU pulse, flow cytometric analysis was performed to estimate the fraction of cells in S phase. Hpr antisense-transfected cells contained fewer cells in S phase compared to vector-transfected cells. Also, in soft agar, cells expressing the antisense cDNA insert formed on average at least 7-fold fewer colonies than cells transfected with the vector alone. The data suggest that Hpr inhibitors might be of therapeutic value for T-cell leukemia. abstract_id: PUBMED:14769473 Hpr (ScoC) and the phosphorelay couple cell cycle and sporulation in Bacillus subtilis. Bacillus subtilis sporulation is a developmental process that culminates in the formation of a highly resistant and persistent endospore. Inhibiting DNA synthesis prior to the completion of the final round of DNA replication blocks sporulation at an early stage. Conditions that prevent compartmentalization of gene expression, i.e. inhibition of asymmetric septum formation or chromosome partitioning, also block sporulation at an early stage. Multiple mechanisms including a RecA-dependent, a RecA-independent, and the soj-spo0J operon have been implicated in signal transduction, connecting DNA replication and chromosome partitioning to the onset of sporulation in B. subtilis. We suggest that a single mechanism involving Hpr (ScoC) and Sda couples cell cycle signaling to sporulation initiation. We show that transcription of phosphorelay sensory chain genes is adversely affected by post-exponential perturbation of the cell cycle. DNA replication arrest by chemical treatments, such as hydroxyphenylazouracil, hydroxyurea, nalidixic acid, and through genetic means using dnaA1ts and dnaB19ts temperature-sensitive mutants caused substantial down-regulation of spo0F and kinA expression and elevated the expression of spo0A and spo0H (sigH). Despite the elevation in spo0A expression, Spo0A~P-dependent sinI expression was substantially down-regulated, indicating that in vivo Spo0A~P levels may be diminished. Similar alterations in gene expression patterns were observed in an ftsA279ts mutant background, indicating that cytokinesis and sporulation may also be coupled by a similar mechanism. A loss-of-function mutation in hpr (scoC) restored sporulation in a dnaA1ts mutant, blocked the DNA replication arrest induction of spo0A expression and restored expression of spo0F, kinA and sinI. Moreover, hpr expression was up-regulated in response to DNA replication arrest. The increase in hpr expression required Sda. These results suggest a role for Hpr (ScoC) in mediating the coupling of cell cycle events to the onset of sporulation. abstract_id: PUBMED:18424265 Identification of tumor antigens that elicit a humoral immune response in breast cancer patients' sera by serological proteome analysis (SERPA).
Background: In this study we applied a serological proteomics-based approach (SERPA) to identify tumor antigens that commonly induce a humoral immune response in patients with infiltrating ductal breast carcinomas. Methods: Sera obtained at the time of diagnosis from 40 patients with invasive breast cancer and 42 healthy controls were screened individually for the presence of IgG antibodies to MCF-7 cell line proteins. Immunoreactive proteins were isolated and subsequently identified by MALDI-TOF mass spectrometry. Results: We identified 26 proteins that reacted with antibodies in the sera from breast cancer patients. Among these antigens, antibodies occurred at a significantly higher frequency against the molecular chaperone HSP60, the tumor suppressor prohibitin, beta-tubulin, the haptoglobin-related protein and peroxiredoxin-2. Immunoreactivity to hnRNPK, Mn-SOD and F1-ATPase was also clearly detected in the patient group, but only scarcely in control sera. By contrast, two other antigens identified as cytokeratins 8 and 18, as well as F1-actin, were found to elicit humoral immune responses in both control and breast cancer patients' sera. Conclusions: The immunoproteomic approach implemented here offers a powerful tool for determining novel tumor antigens that elicit a humoral immune response in patients with invasive breast cancer. These antigens and/or their related circulating antibodies may display clinical usefulness as potential diagnostic markers and provide a means for a better understanding of the molecular mechanisms underlying breast cancer development. abstract_id: PUBMED:7716520 Killing of trypanosomes by the human haptoglobin-related protein. African trypanosomes cause disease in humans and animals. Trypanosoma brucei brucei affects cattle but not humans because of its sensitivity to a subclass of human high density lipoproteins (HDLs) called trypanosome lytic factor (TLF). TLF contains two apolipoproteins that are sufficient to cause lysis of T. b. brucei in vitro. These proteins were identified as the human haptoglobin-related protein and paraoxonase-arylesterase. An antibody to haptoglobin inhibited TLF activity. TLF was shown to exhibit peroxidase activity and to be inhibited by catalase. These results suggest that TLF kills trypanosomes by oxidative damage initiated by its peroxidase activity. Answer: The provided abstracts do not contain information directly related to DJ-1 or its role in determining cell fate in response to oxidative stress triggered by 4-hydroxyphenylretinamide (4-HPR). DJ-1 is a protein associated with various cellular processes, including response to oxidative stress, and it can influence whether a cell undergoes autophagy or apoptosis. However, the abstracts focus on different aspects of haptoglobin-related proteins (HPR), their role in the immune response, disease biomarkers, and their involvement in the lysis of Trypanosoma brucei brucei, among other topics. For instance, abstract PUBMED:26645690 discusses the role of Trypanosome Lytic Factor-1 (TLF-1), which includes haptoglobin-related protein (Hpr), in the lysis of Trypanosoma brucei brucei through oxidative processes. It does not mention DJ-1 or the specific effects of 4-HPR on autophagy or apoptosis.
Similarly, other abstracts such as PUBMED:1907636, PUBMED:32385381, PUBMED:30712426, PUBMED:26206798, PUBMED:31140594, PUBMED:9673399, PUBMED:14769473, PUBMED:18424265, and PUBMED:7716520 discuss various aspects of haptoglobin-related proteins and their interactions or effects in different contexts, such as Bacillus subtilis, idiopathic pulmonary fibrosis, resistant hypertension, sickle cell nephropathy, the effects of hydroxycarbamide on sickle cell anaemia, cell cycle inhibition in T cell leukemia, the coupling of cell cycle and sporulation in Bacillus subtilis, identification of tumor antigens in breast cancer, and the killing of trypanosomes by the human haptoglobin-related protein. None of these abstracts provide information on DJ-1 or its oxidation states in relation to cell fate decisions under oxidative stress conditions induced by 4-HPR. To answer the question about DJ-1 and its role in autophagy or apoptosis in response to 4-HPR-induced oxidative stress, one would need to consult research specifically focused on DJ-1 and its function in oxidative stress response pathways.
Instruction: Are there any sensitive and specific sex steroid markers for polycystic ovary syndrome? Abstracts: abstract_id: PUBMED:20016048 Are there any sensitive and specific sex steroid markers for polycystic ovary syndrome? Context: Despite the high prevalence of hyperandrogenemia, the principal biochemical abnormality in women with polycystic ovary syndrome (PCOS), a definitive endocrine marker for PCOS has so far not been identified. Objective: To identify a tentative diagnostic marker for PCOS, we compared serum levels of sex steroids, their precursors, and main metabolites in women with PCOS and controls. Design And Methods: In this cross-sectional study of 74 women with PCOS and 31 controls, we used gas and liquid chromatography/mass spectrometry to analyze serum sex steroid precursors, estrogens, androgens, and glucuronidated androgen metabolites; performed immunoassays of SHBG, LH, and FSH; and calculated the LH/FSH ratio. Results: Androgens and estrogens, sex steroid precursors, and glucuronidated androgen metabolites were higher in women with PCOS than in controls. In multivariate logistic regression analyses, estrone and free testosterone were independently associated with PCOS. The odds ratios per SD increase were 24.2 for estrone [95% confidence interval (CI), 4.0-144.7] and 12.8 for free testosterone (95% CI, 3.1-53.4). In receiver operating characteristic analyses, the area under the curve was 0.93 for estrone (95% CI, 0.88-0.98) and 0.91 for free testosterone (95% CI, 0.86-0.97), indicating high sensitivity and specificity. Conclusion: Women with PCOS have elevated levels of sex steroid precursors, estrogens, androgens, and glucuronidated androgen metabolites as measured with a specific and sensitive mass spectrometry-based technique. The combination of elevated estrone (>50 pg/ml) and free testosterone (>3.3 pg/ml) appeared to discriminate with high sensitivity and specificity between women with and without PCOS. abstract_id: PUBMED:35885010 Sex Steroid Receptors in Polycystic Ovary Syndrome and Endometriosis: Insights from Laboratory Studies to Clinical Trials. Polycystic ovary syndrome (PCOS) and endometriosis are reproductive disorders that may cause infertility. The pathology of both diseases has been suggested to be associated with sex steroid hormone receptors, including oestrogen receptors (ER), progesterone receptors (PRs) and androgen receptors (ARs). Therefore, with this review, we aim to provide an update on the available knowledge of these receptors and how their interactions contribute to the pathogenesis of PCOS and endometriosis. One of the main PCOS-related medical conditions is abnormal folliculogenesis, which is associated with the downregulation of ER and AR expression in the ovaries. In addition, metabolic disorders in PCOS are caused by dysregulation of sex steroid hormone receptor expression. Furthermore, endometriosis is related to the upregulation of ER and the downregulation of PR expression. These receptors may serve as therapeutic targets for the treatment of PCOS-related disorders and endometriosis, considering their pathophysiological roles. Receptor agonists may be applied to increase the expression of a specific receptor and treat endometriosis or metabolic disorders. In contrast, receptor antagonists function to reduce receptor expression and can be used to treat endometriosis and induce ovulation.
Understanding the pathological roles of sex steroid receptors in PCOS and endometriosis is crucial for developing potential therapeutic strategies to treat infertility in both conditions. Therefore, research should be continued to fill the knowledge gap regarding the subject. abstract_id: PUBMED:30503354 Sex, Microbes, and Polycystic Ovary Syndrome. Recent studies have shown that sex and sex steroids influence the composition of the gut microbiome. These studies also indicate that steroid regulation of the gut microbiome may play a role in pathological situations of hormonal excess, such as PCOS. Indeed, studies demonstrated that PCOS is associated with decreased alpha diversity and changes in specific Bacteroidetes and Firmicutes, previously associated with metabolic dysregulation. These studies suggest that androgens may regulate the gut microbiome in females and that hyperandrogenism may be linked with a gut 'dysbiosis' in PCOS. Future mechanistic studies will be required to elucidate how sex steroids regulate the composition and function of the gut microbial community and what the consequences of this regulation are for the host. abstract_id: PUBMED:31016162 Utility of a Commercially Available Blood Steroid Profile in Endocrine Practice. Background: A blood steroid profile has recently become available on a commercial basis in India. In this study, we report our initial experience with the use of this steroid profile in the evaluation of disorders of sex development (DSD) and suspected cases of congenital adrenal hyperplasia (CAH) and discuss the potential scenarios in endocrine practice that may benefit from this steroid profile. Materials And Methods: The study included six subjects. Patient 1 was a 46, XX girl who presented with peripubertal virilization, patient 2 was a girl who presented with normal pubertal development, secondary amenorrhea, and virilization, and patient 3 was a girl who presented with primary amenorrhea and virilization. These three patients were suspected to have CAH but had non-diagnostic serum 17 OH-progesterone levels. Patients 4 and 5 were 46, XY individuals reared as girls who presented with primary amenorrhea alone and primary amenorrhea and virilization, respectively, and the sixth subject was a healthy volunteer. All subjects were evaluated with a blood steroid profile by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Results: Patients 1 and 2 were diagnosed with 11 β-hydroxylase deficiency by using the steroid profile. Patient 3 was suspected to have CAH, but the steroid profile excluded the diagnosis and helped to confirm the diagnosis as polycystic ovary syndrome. In patients 4 and 5, although the steroid profile ruled out the possibility of steroidogenesis defects, it did not help to reach a specific diagnosis. Conclusion: The blood steroid profile used in this study is most useful for the diagnosis of 11 β-hydroxylase deficiency. The utility of this test is limited in the evaluation of 46, XY patients with under-virilization. abstract_id: PUBMED:6769612 Abnormal sex steroid secretion and binding in massively obese women. We have measured the plasma concentrations of sex steroids and sex hormone-binding globulin (SHBG) in twenty-three massively obese women and ten age-matched lean female volunteers.
In the obese women, increased plasma testosterone (obese 3.2 +/- 0.5 nmol/l, controls 1.7 +/- 0.5 nmol/l, P < 0.3) and androstenedione concentrations (obese 9.7 +/- 1.2 nmol/l, controls 4.4 +/- 0.6 nmol/l, P < 0.01), an increased ratio of oestrone:oestradiol (obese 2.4 +/- 0.4, controls 1.0 +/- 0.1, P < 0.1) and decreased SHBG levels (obese 30 +/- 4 nmol/l, controls 60 +/- 8 nmol/l, P < 0.001) were found. Obesity differed from the polycystic ovary syndrome (in which a similar pattern of changes of sex steroid concentrations and binding is seen) in that it was associated with normal increases in serum luteinizing hormone (LH) and follicle-stimulating hormone (FSH) levels in response to the administration of LHRH. We conclude that the common occurrence of menstrual abnormalities in obesity results from abnormal secretion and binding of sex steroids. In addition, the unaltered secretion of LH and FSH in the presence of such changes is evidence for a disorder of hypothalamic function. abstract_id: PUBMED:21795737 Sex steroid hormones and reproductive disorders: impact on women's health. The role of sex steroid hormones in reproductive function in women is well established. However, in the last two decades it has been shown that receptors for estrogens, progesterone and androgens are expressed in non-reproductive tissues/organs (bone, brain, cardiovascular system), where they play a role in their function. Therefore, it is critical to evaluate the impact of sex steroid hormones in the pathophysiology of some diseases (osteoporosis, Alzheimer's disease, atherosclerosis). In particular, women with primary ovarian insufficiency, polycystic ovary syndrome, endometriosis and climacteric syndrome may have more health problems, and therefore hormonal treatment may be crucial for these women. abstract_id: PUBMED:32326591 Review: Sex-Specific Aspects in the Bariatric Treatment of Severely Obese Women. This systematic literature review aims to point out sex-specific special features that are important in the bariatric treatment of women suffering from severe obesity. A systematic literature search was carried out according to Cochrane and Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) guidelines. After the literature selection, the following categories were determined: sexuality and sexual function; contraception; fertility; sex hormones and polycystic ovary syndrome; menopause and osteoporosis; pregnancy and breastfeeding; pelvic floor disorders and urinary incontinence; female-specific cancer; and metabolism, outcome, and quality of life. For each category, the current status of research is illuminated and implications for bariatric treatment are determined. A summary that includes key messages is given for each subsection. An overall result of this paper is an understanding that sex-specific risks that follow or result from bariatric surgery should be considered more in aftercare. In order to increase the evidence, further research focusing on sex-specific differences in the outcome of bariatric surgery and promising treatment approaches to female-specific diseases is needed. Nevertheless, bariatric surgery shows good potential in the treatment of sex-specific aspects for severely obese women that goes far beyond mere weight loss and reduction of metabolic risks. abstract_id: PUBMED:32668269 Emerging roles for noncoding RNAs in female sex steroids and reproductive disease.
The "central dogma" of molecular biology, that is, that DNA blueprints encode messenger RNAs which are destined for translation into protein, has been challenged in recent decades. In actuality, a significant portion of the genome encodes transcripts that are transcribed into functional RNA. These noncoding RNAs (ncRNAs), which are not transcribed into protein, play critical roles in a wide variety of biological processes. A growing body of evidence derived from mouse models and human data demonstrates that ncRNAs are dysregulated in various reproductive pathologies, and that their expression is essential for female gametogenesis and fertility. Yet in many instances it is unclear how dysregulation of ncRNA expression leads to a disease process. In this review, we highlight new observations regarding the roles of ncRNAs in the pathogenesis of disordered female steroid hormone production and disease, with an emphasis on long noncoding RNAs (lncRNAs) and microRNAs (miRNAs). We will focus our discussion in the context of three ovarian disorders which are characterized in part by altered steroid hormone biology - diminished ovarian reserve, premature ovarian insufficiency, and polycystic ovary syndrome. We will also discuss the limitations and challenges faced in studying noncoding RNAs and sex steroid hormone production. An enhanced understanding of the role of ncRNAs in sex hormone regulatory networks is essential in order to advance the development of potential diagnostic markers and therapeutic targets for diseases, including those in reproductive health. Our deepened understanding of ncRNAs has the potential to uncover new applications and therapies; however, in many cases, the next steps will involve distinguishing critical ncRNAs from those which are merely changing in response to a particular disease state, or which are altogether unrelated to disease pathophysiology. abstract_id: PUBMED:28948821 Sex differences in the effect of prenatal testosterone exposure on steroid hormone production in adult rats. Maternal hyperandrogenism during pregnancy might have metabolic and endocrine consequences on the offspring as shown for the polycystic ovary syndrome. Despite numerous experiments, the impact of prenatal hyperandrogenic environment on postnatal sex steroid milieu is not yet clear. In this study, we investigated the effect of prenatal testosterone excess on postnatal concentrations of luteinizing hormone, corticosterone and steroid hormones including testosterone, pregnenolone, progesterone, estradiol and 7beta-hydroxy-epiandrosterone in the offspring of both sexes. Pregnant rats were injected daily with either testosterone propionate or vehicle from gestational day 14 until parturition. The hormones were evaluated in plasma of the adult offspring. As expected, females had lower testosterone and higher pregnenolone, progesterone and estradiol in comparison to males. In addition, corticosterone was higher in females than in males, and it was further elevated by prenatal testosterone treatment. In males, prenatal testosterone exposure resulted in higher 7beta-hydroxy-epiandrosterone in comparison to control group. None of the other analyzed hormones were affected by prenatal testosterone. In conclusion, our results did not show major effects on sex hormone production or luteinizing hormone release in adult rats resulting from testosterone excess during their fetal development. However, maternal hyperandrogenism seems to partially affect steroid biosynthesis in sex-specific manner. 
abstract_id: PUBMED:38380176 Update on sex-specific risk factors in cardiovascular disease. Cardiovascular disease (CVD) is the leading cause of death worldwide and accounts for roughly 1 in 5 deaths in the United States. Women in particular face significant disparities in their cardiovascular care when compared to men, both in the diagnosis and treatment of CVD. Sex differences exist in the prevalence and effect of cardiovascular risk factors. For example, women with a history of traditional cardiovascular risk factors including hypertension, tobacco use, and diabetes carry a higher risk of major cardiovascular events and mortality when compared to men. These discrepancies in terms of the relative risk of CVD when traditional risk factors are present appear to explain some, but not all, of the observed differences among men and women. Sex-specific cardiovascular disease research, from identification and risk stratification to treatment, has received increasing recognition in recent years, highlighting the currently underestimated association between CVD and a woman's obstetric and reproductive history. In this comprehensive review, sex-specific risk factors unique to women, including adverse pregnancy outcomes (APO), such as hypertensive disorders of pregnancy (HDP), gestational diabetes mellitus, preterm delivery, and newborn size for gestational age, as well as premature menarche, menopause and vasomotor symptoms, polycystic ovarian syndrome (PCOS), and infertility, will be discussed in full detail along with their association with CVD risk. Additional entities including spontaneous coronary artery dissection (SCAD), coronary microvascular disease (CMD), systemic autoimmune disorders, and mental and behavioral health will also be discussed in terms of their prevalence among women and their association with CVD. In this comprehensive review, we will also provide clinicians with a guide to address current knowledge gaps, including implementation of a sex-specific patient questionnaire to allow for appropriate risk assessment, stratification, and prevention of CVD in women. Answer: Yes, there are sensitive and specific sex steroid markers for polycystic ovary syndrome (PCOS). A study comparing serum levels of sex steroids, their precursors, and main metabolites in women with PCOS and controls found that androgens, estrogens, sex steroid precursors, and glucuronidated androgen metabolites were higher in women with PCOS. In particular, estrone and free testosterone were independently associated with PCOS, with high odds ratios per standard deviation increase. The study also reported that the combination of elevated estrone (>50 pg/ml) and free testosterone (>3.3 pg/ml) could discriminate with high sensitivity and specificity between women with and without PCOS (PUBMED:20016048).
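Worked example (editorial addition, not part of the source abstracts): the discrimination rule reported in PUBMED:20016048 reduces to a two-threshold classifier, flagging PCOS when estrone exceeds 50 pg/ml and free testosterone exceeds 3.3 pg/ml. A minimal Python sketch applying that rule is shown below; only the two cutoffs come from the abstract, while the sample measurements, and the sensitivity and specificity they yield, are invented purely for illustration.

    # Two-marker rule from PUBMED:20016048; sample data are hypothetical.
    ESTRONE_CUTOFF = 50.0   # pg/ml
    FREE_T_CUTOFF = 3.3     # pg/ml

    def flags_pcos(estrone_pg_ml, free_t_pg_ml):
        """Return True when both markers exceed their cutoffs."""
        return estrone_pg_ml > ESTRONE_CUTOFF and free_t_pg_ml > FREE_T_CUTOFF

    # (estrone, free testosterone, true PCOS status) -- invented values
    samples = [
        (72.0, 4.1, True), (55.3, 3.8, True), (48.9, 3.5, True),
        (31.2, 1.9, False), (58.0, 2.4, False), (42.7, 3.6, False),
    ]

    tp = sum(1 for e, t, pcos in samples if pcos and flags_pcos(e, t))
    fn = sum(1 for e, t, pcos in samples if pcos and not flags_pcos(e, t))
    tn = sum(1 for e, t, pcos in samples if not pcos and not flags_pcos(e, t))
    fp = sum(1 for e, t, pcos in samples if not pcos and flags_pcos(e, t))

    print(f"sensitivity = {tp / (tp + fn):.2f}")  # fraction of PCOS cases flagged
    print(f"specificity = {tn / (tn + fp):.2f}")  # fraction of controls not flagged

On these six toy records the script prints a sensitivity of 0.67 and a specificity of 1.00; the high discrimination reported in the abstract (areas under the curve of 0.93 and 0.91) refers to the actual study cohort, not to this illustration.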
Instruction: Varicella complications: is it time to consider a routine varicella vaccination? Abstracts: abstract_id: PUBMED:33326320 Acceptance of varicella vaccination. Varicella is a common vaccine-preventable disease that usually presents in children as a mild infection; however, severe complications also occur. The burden of varicella is significant in terms of incidence, complication and hospitalization rates, and the economic burden of disease. Despite the evidence of overall positive effects of varicella vaccination, there are great differences in the implementation of varicella vaccination and in the uptake of the vaccine from country to country. Improving acceptance of varicella vaccination at the national and at the individual level would decrease the burden of the disease on the health of children and on health-care resources. In studies examining the determinants of parental acceptance of varicella vaccination, questions specific to varicella vaccination were highlighted. Addressing these issues with open, evidence-based communication is important to improve and maintain the trust of the public in varicella vaccination. abstract_id: PUBMED:25087675 Varicella hospitalizations in Los Angeles during the varicella vaccination era, 2003-2011: are they preventable? Characteristics of varicella-related hospitalizations in the mature varicella vaccination era, including the proportion vaccinated and the severity of disease, are not well described. We present the vaccination status, severity and reasons for hospitalization of the hospitalized varicella cases reported to the Los Angeles County Health Department from 2003 to 2011, the period which includes the last 4 years of the mature one-dose program and the first 5 years after introduction of the routine two-dose program. A total of 158 hospitalized varicella cases were reported overall, of which 52.5% were potentially preventable and eligible for vaccination, 41.8% were not eligible for vaccination, and 5.7% were vaccinated. Most hospitalizations (72.2%) occurred among healthy persons, 54.4% occurred among persons ≥20 years of age, and 3.8% of hospitalizations resulted in death. Our data suggest that as many as half of the hospitalized varicella cases, including half of the deaths, may have been preventable given that they occurred in persons who were eligible for vaccination. More complete implementation of the routine varicella vaccination program could further reduce the disease burden of severe varicella. abstract_id: PUBMED:38480101 UK healthcare professionals' attitudes towards the introduction of varicella vaccine into the routine childhood vaccination schedule and their preferences for administration. Background: Varicella (chickenpox) is a highly contagious disease caused by the varicella-zoster virus. Although typically mild, varicella can cause complications leading to severe illness and even death. Safe and effective varicella vaccines are available. The Joint Committee on Vaccination and Immunisation has reviewed the evidence and recommended the introduction of varicella vaccine into the UK's routine childhood immunisation schedule. Objectives: To explore UK healthcare professionals' (HCPs) knowledge and attitudes towards varicella vaccination, its introduction to the UK routine childhood immunisation schedule, and their preferences for how it should be delivered.
Design: We conducted an online cross-sectional survey exploring HCPs' attitudes towards varicella, the varicella vaccine, and their preferences for delivery of the vaccine between August and September 2022, prior to the recommendation that varicella vaccine should be introduced. Participants: 91 HCPs working in the UK (81% nurses/health visitors, 9% doctors, 10% researchers/other; mean age 48.7 years). Results: All respondents agreed or strongly agreed that vaccines are important for a child's health. However, only 58% agreed or strongly agreed that chickenpox was a disease serious enough to warrant vaccination. Gaps in knowledge about varicella were revealed: 21.0% of respondents disagreed or were unsure that chickenpox can cause serious complications, while 41.8% were unsure or did not believe chickenpox was serious enough to vaccinate against. After receiving some basic information about chickenpox and the vaccine, almost half of the HCPs (47.3%) in our survey would prefer to administer the varicella vaccine combined with MMR. Conclusions: Given the positive influence of HCPs on parents' decisions to vaccinate their children, it is important to understand HCPs' views regarding the introduction of varicella vaccine into the routine schedule. Our findings highlighted areas for training and HCPs' preferences, which will have implications for policy and practice when the vaccine is introduced. abstract_id: PUBMED:27769206 Coverage, efficacy or dosing interval: which factor predominantly influences the impact of routine childhood vaccination for the prevention of varicella? A model-based study for Italy. Background: Varicella is a highly infectious disease with a significant public health and economic burden, which can be prevented with routine childhood varicella vaccination. Vaccination strategies differ by country. Some factors are known to play an important role (number of doses, coverage, dosing interval, efficacy and catch-up programmes); however, their relative impact on the reduction of varicella in the population remains unclear. This paper aims to help policy makers prioritise the critical factors to achieve the most successful vaccination programme with the available budget. Methods: Scenarios assessed the impact of different vaccination strategies on the reduction of varicella disease in the population. A dynamic transmission model was used and adapted to fit Italian demographics and population mixing patterns. Inputs included coverage, number of doses, dosing intervals, first-dose efficacy and availability of catch-up programmes, based on strategies currently used or likely to be used in different countries. The time horizon was 30 years. Results: Both one- and two-dose routine varicella vaccination strategies prevented a comparable number of varicella cases with complications, but two doses provided broader protection due to prevention of a higher number of milder varicella cases. A catch-up programme in susceptible adolescents aged 10-14 years old reduced varicella cases by 27-43% in older children, in whom disease is often more severe than in younger children. Coverage, for all strategies, sustained at high levels achieved the largest reduction in varicella. In general, a 20% increase in coverage resulted in a further 27-31% reduction in varicella cases. When high coverage is reached, dosing interval and first-dose vaccine efficacy have a relatively lower impact on disease prevention in the population.
Compared to the long (11 years) dosing interval, the short (5 months) and medium (5 years) interval schedules reduced varicella cases by a further 5-13% and 2-5%, respectively. Similarly, a 10% increase in first-dose efficacy (from 65% to 75% efficacy) prevented 2-5% more varicella cases, suggesting it is the least influential factor when considering routine varicella vaccination. Conclusions: Vaccination strategies can be implemented differently in each country depending on its needs, infrastructure and healthcare budget. However, ensuring high coverage remains the critical success factor for significant prevention of varicella when introducing varicella vaccination in the national immunisation programme. abstract_id: PUBMED:24183712 Long-term clinical studies of varicella vaccine at a regional hospital in Japan and proposal for a varicella vaccination program. In 1974, a live varicella vaccine (Oka strain) was developed in Japan for the prevention of varicella. It has been commercially available since 1987 for the voluntary vaccination program, in which children over the age of 1 year with no history of previous varicella infection receive a single dose. From before approval up to the present, we have been carrying out long-term studies in healthy children at a regional hospital to assess the immunogenicity, safety, and efficacy of the varicella vaccine. This vaccine is very safe, and serious adverse reactions have not been observed since the year 2000, when the formulation became gelatin-free. In the past three studies, seroconversion was detected in around 95% of subjects by the immune adherence hemagglutination (IAHA) test, and this high rate was considered to indicate good immunogenicity. Breakthrough varicella is observed in approximately 20-30% of children who receive a single dose of the vaccine, but most cases are mild. Although recent vaccination has generally been effective, the IAHA test has shown that immunogenicity is somewhat lower than was previously demonstrated. The sensitivity of the IAHA test has been shown to be adequate when compared with the neutralization test, so the current testing system is sufficient for the maintenance of immunity levels. An additional vaccination increased the IAHA antibody level in subjects who failed to seroconvert after a single-dose vaccination. According to another clinical study, additional varicella vaccination at 3-5 years after the initial vaccination achieved stronger immunogenicity. Because it is administered as part of the voluntary vaccination program, the varicella vaccination coverage rate has remained low in Japan, with no sign of a decrease in the number of varicella patients. We consider that implementation of a routine varicella vaccination program based on the Preventive Vaccination Law would be the most effective approach for improvement of the coverage rate. Along with this, introduction of a two-dose schedule would also be desirable. In addition to decreasing the prevalence of characteristic breakthrough varicella infection, the vaccination coverage rate would also be expected to improve with a two-dose schedule due to an increase in opportunities for vaccination. abstract_id: PUBMED:25721380 Cost-effectiveness of routine varicella vaccination using the measles, mumps, rubella and varicella vaccine in France: an economic analysis based on a dynamic transmission model for varicella and herpes zoster.
Purpose: Each year in France, varicella and zoster affect large numbers of children and adults, resulting in medical visits, hospitalizations for varicella- and zoster-related complications, and societal costs. Disease prevention by varicella vaccination is feasible, wherein a plausible option involves replacing the combined measles, mumps, and rubella (MMR) vaccine with the combined MMR and varicella (MMRV) vaccine. This study aimed to: (1) assess the cost-effectiveness of adding routine varicella vaccination through MMRV, using different vaccination strategies in France; and (2) address key uncertainties, such as the economic consequences of breakthrough varicella cases, the waning of vaccine-conferred protection, vaccination coverage, and indirect costs. Methods: Based on the outputs of a dynamic transmission model that used data on epidemiology and costs from France, a cost-effectiveness model was built. A conservative approach was taken regarding the impact of varicella vaccination on zoster incidence by assuming the validity of the hypothesis of an age-specific boosting of immunity against varicella. Findings: The model determined that routine MMRV vaccination is expected to be a cost-effective option, considering a cost-effectiveness threshold of €20,000 per quality-adjusted life-year saved; routine vaccination was cost-saving from the societal perspective. Results were driven by a large decrease in varicella incidence despite a temporary initial increase in the number of zoster cases due to the assumption of exogenous boosting. In the scenario analyses, despite moderate changes in assumptions about incidence and costs, varicella vaccination remained a cost-effective option for France. Implications: Routine vaccination with MMRV was associated with high gains in quality-adjusted life-years, substantial reduction in the occurrences of varicella- and zoster-related complications, and few deaths due to varicella. Routine MMRV vaccination is also expected to provide reductions in costs related to hospitalizations, medication use, and general-practitioner visits, as well as indirect costs, and it is expected to be a cost-effective intervention in France (GSK study identifier: HO-12-6924). abstract_id: PUBMED:24101763 Impact of a routine two-dose varicella vaccination program on varicella epidemiology. Objective: One-dose varicella vaccination for children was introduced in the United States in 1995. In 2006, a second dose was recommended to further decrease varicella disease and outbreaks. We describe the impact of the 2-dose vaccination program on varicella incidence, severity, and outbreaks in 2 varicella active surveillance areas. Methods: We examined varicella incidence rates and disease characteristics in Antelope Valley (AV), CA, and West Philadelphia, PA, and varicella outbreak characteristics in AV during 1995-2010. Results: In 2010, varicella incidence was 0.3 cases per 1000 population in AV and 0.1 cases per 1000 population in West Philadelphia: 76% and 67% declines, respectively, since 2006 and 98% declines in both sites since 1995; incidence declined in all age groups during 2006-2010. From 2006-2010, 61.7% of case patients in both surveillance areas had been vaccinated with 1 dose of varicella vaccine and 7.5% with 2 doses. Most vaccinated case patients had <50 lesions with no statistically significant differences among 1- and 2-dose cases (62.8% and 70.3%, respectively).
Varicella-related hospitalizations during 2006-2010 declined >40% compared with 2002-2005 and >85% compared with 1995-1998. Twelve varicella outbreaks occurred in AV during 2007-2010, compared with 47 during 2003-2006 and 236 during 1995-1998 (P < .01). Conclusions: Varicella incidence, hospitalizations, and outbreaks in 2 active surveillance areas declined substantially during the first 5 years of the 2-dose varicella vaccination program. Declines in incidence across all ages, including infants who are not eligible for varicella vaccination, and adults, in whom vaccination levels are low, provide evidence of the benefit of high levels of immunity in the population. abstract_id: PUBMED:25444825 Varicella vaccination coverage inverse correlation with varicella hospitalizations in Spain. Varicella vaccines available in Spain were marketed in 1998 and 2003 for non-routine use. Since 2006, some regions have included universal varicella vaccination in their regional routine vaccination programs at 15-18 months of age. Regions without universal vaccination in toddlers, but instead with the strategy of vaccinating susceptible adolescents, reached different varicella vaccination coverage through the private market. This study shows the correlation between severe varicella zoster virus infections requiring hospitalization and the varicella vaccination coverage by region. A total of 3009 hospital discharges related to varicella were reported in 2009-2010. The overall annual rate of hospitalization was 3.27 cases per 100,000. In children younger than 5 years old, the varicella hospitalization rate was 30.73 cases per 100,000. Varicella-related hospitalizations were significantly lower in the regions with universal vaccination. In those regions without universal vaccination at 15-18 months of age, those with higher coverage in the private market showed lower hospitalization rates. abstract_id: PUBMED:36457198 Global varicella vaccination programs. Varicella (chickenpox) is an infectious disease caused by the highly contagious varicella zoster virus, with a secondary attack rate greater than 90%. From this perspective, we aimed to establish the basis for a national varicella vaccine policy by reviewing vaccination programs and policies of countries that have introduced universal varicella vaccinations. As a result of the spread of varicella, an increasing number of countries are providing 2-dose vaccinations and universally expanding their use. In practice, the efficacy and effectiveness of vaccination differ among vaccines and vaccination programs. Optimized vaccination strategies based on each country's local epidemiology and health resources are required. Accordingly, it is necessary to evaluate the effectiveness of varicella vaccines in different settings. Given the short-term and fragmented vaccine effectiveness evaluation in Korea, it is necessary to evaluate its effectiveness at the national level and determine its schedule based on the evidence generated through these studies. abstract_id: PUBMED:36631357 Anti-varicella-zoster virus antibody titers and seroprotection status from before the first dose of varicella vaccination to before entering elementary school in one region in Japan. Objective: We aimed to examine changes in anti-varicella-zoster virus (VZV) antibody titers and seroprotection status from before the first dose of vaccination to before entering elementary school at 7 years of age in children who received the routine two-dose varicella vaccination.
Methods: Participants were 37 healthy children who received the routine two-dose varicella vaccination at our hospital. A total of five serum samples per child were collected immediately before and 4-6 weeks after each dose of the vaccination and in the year before entry to elementary school. We measured anti-VZV antibody titers by the immune adherence hemagglutination (IAHA) method and a glycoprotein-based enzyme-linked immunosorbent assay (gpELISA). A positive antibody titer and the seroprotection level were set as ≥2-fold and ≥16-fold, respectively, for IAHA antibody and as ≥50 units and ≥105 units, respectively, for gpELISA-IgG antibody. Results: The rates of IAHA antibody positivity in the five samples (in order of collection) were 0%, 65%, 38%, 100%, and 59%, and the rates of seroprotection were 0%, 43%, 8%, 100%, and 43%. The rates of gpELISA-IgG antibody positivity were 8%, 81%, 89%, 100%, and 100%, and the rates of seroprotection were 5%, 54%, 70%, 100%, and 89%. The mean IAHA antibody titer and mean gpELISA-IgG antibody titer before entering elementary school were both lower than the respective titers obtained after the second vaccination (both p < 0.01). Conclusions: Routine two-dose varicella vaccination leads to good antibody production, but titers of acquired antibodies decrease before children enter elementary school. Answer: The evidence from the provided abstracts suggests that it may indeed be time to consider a routine varicella vaccination. Varicella, commonly known as chickenpox, is a highly infectious disease that can lead to severe illness and even death, despite being typically mild in many cases (PUBMED:38480101). The burden of the disease is significant in terms of incidence, complications, hospitalization rates, and economic impact (PUBMED:33326320).
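Illustrative note (editorial addition): the seropositivity and seroprotection cutoffs quoted from PUBMED:36631357 above reduce to simple threshold checks, namely IAHA titers of ≥2-fold (positive) and ≥16-fold (seroprotective), and gpELISA-IgG levels of ≥50 units (positive) and ≥105 units (seroprotective). The Python sketch below encodes only that threshold logic; the example titers fed to it are hypothetical.

    # Threshold logic for the cutoffs quoted in PUBMED:36631357.
    # Only the four cutoffs come from the abstract; example inputs are invented.

    def iaha_status(titer_fold):
        """Classify an IAHA antibody titer (fold dilution)."""
        if titer_fold >= 16:
            return "seroprotected"
        if titer_fold >= 2:
            return "positive"
        return "negative"

    def gpelisa_status(igg_units):
        """Classify a gpELISA-IgG antibody level (units)."""
        if igg_units >= 105:
            return "seroprotected"
        if igg_units >= 50:
            return "positive"
        return "negative"

    for titer in (1, 4, 32):        # hypothetical IAHA titers
        print(f"IAHA {titer}-fold -> {iaha_status(titer)}")
    for units in (20, 80, 200):     # hypothetical gpELISA-IgG levels
        print(f"gpELISA {units} units -> {gpelisa_status(units)}")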
Globally, more countries are providing two-dose vaccinations and expanding their use universally, with the need for optimized vaccination strategies based on local epidemiology and health resources (PUBMED:36457198).
Instruction: Does maternal body mass index influence treatment effect in women with mild gestational diabetes? Abstracts: abstract_id: PUBMED:24839145 Does maternal body mass index influence treatment effect in women with mild gestational diabetes? Objective: The aim of the article is to determine whether maternal body mass index (BMI) influences the beneficial effects of diabetes treatment in women with gestational diabetes mellitus (GDM). Study Design: Secondary analysis of a multicenter randomized treatment trial of women with GDM. Outcomes of interest were elevated umbilical cord c-peptide levels (&gt; 90th percentile 1.77 ng/mL), large for gestational age (LGA) birth weight (&gt; 90th percentile), and neonatal fat mass (g). Women were grouped into five BMI categories adapted from the World Health Organization International Classification of normal, overweight, and obese adults. Outcomes were analyzed according to treatment group assignment. Results: A total of 958 women were enrolled (485 treated and 473 controls). Maternal BMI at enrollment was not related to umbilical cord c-peptide levels. However, treatment of women in the overweight, Class I, and Class II obese categories was associated with a reduction in both LGA birth weight and neonatal fat mass. Neither measure of excess fetal growth was reduced with treatment in normal weight (BMI &lt; 25 kg/m(2)) or Class III (BMI ≥ 40 kg/m(2)) obese women. Conclusion: There was a beneficial effect of treatment on fetal growth in women with mild GDM who were overweight or Class I and Class II obese. These effects were not apparent for normal weight and very obese women. abstract_id: PUBMED:2035574 The clinical utility of maternal body mass index in pregnancy. To describe maternal body mass index and to compare the use of maternal weight and body mass index for risk assessment at the initial prenatal visit, 6270 gravid women who were consecutively delivered of infants were studied. Body mass index increased with advancing maternal age, parity, and advancing gestational age and was significantly greater in black women than in nonblack women. Risks for the development of adverse outcome associated with maternal obesity, including development of gestational diabetes, preeclampsia, fetal macrosomia, and shoulder dystocia, were comparably predicted by either maternal weight or body mass index greater than 90th percentile. Maternal weight was as predictive of preeclampsia, macrosomia, and shoulder dystocia as was body mass index when these factors were analyzed as continuous variables, whereas increasing body mass index was more predictive of gestational diabetes. The prediction of factors associated with low maternal weights, small-for-gestational-age birth, prematurity, low birth weight, and perinatal death was equivalent for maternal weight and body mass index that was less than 10th percentile. This study indicates that in the initial risk assessment of outcomes related to maternal weight, the calculation of maternal body mass index offers no advantage over simply weighing the patient. This finding contrasts with results in nonpregnant women. abstract_id: PUBMED:36440200 Effect of maternal body mass index on the steroid profile in women with gestational diabetes mellitus. Objective: To explore the effect of maternal body mass index (BMI) on steroid hormone profiles in women with gestational diabetes mellitus (GDM) and those with normal glucose tolerance (NGT). 
Methods: We enrolled 79 women with NGT and 80 women with GDM who had a gestational age of 24-28 weeks. The participants were grouped according to their BMI. We quantified 11 steroid hormone profiles by liquid chromatography-tandem mass spectrometry and calculated the product-to-precursor ratios in the steroidogenic pathway. Results: Women with GDM and BMI < 25 kg/m2 showed higher concentrations of dehydroepiandrosterone (DHEA) (p<0.001), testosterone (T) (p=0.020), estrone (E1) (p=0.010) and estradiol (E2) (p=0.040) and a lower Matsuda index and HOMA-β than women with NGT and BMI < 25 kg/m2. In women with GDM, concentrations of E1 (p=0.006) and E2 (p=0.009) declined, accompanied by reduced E2/T (p=0.008) and E1/androstenedione (A4) (p=0.010) ratios, in the BMI > 25 kg/m2 group compared with the BMI < 25 kg/m2 group. The values of E2/T and E1/A4 were used to evaluate cytochrome P450 aromatase enzyme activity in the steroidogenic pathway. Both aromatase activities negatively correlated with maternal BMI and positively correlated with the Matsuda index in women with GDM. Conclusions: NGT women and normal-weight GDM women presented with different steroid hormone profiles. Steroidogenic pathway profiling of sex hormone synthesis showed a significant increase in the production of DHEA, T, E1, and E2 in normal-weight women with GDM. Additionally, the alteration of steroid hormone metabolism was related to maternal BMI in women with GDM, and overweight women with GDM showed reduced estrogen production and decreased insulin sensitivity compared with normal-weight women with GDM. abstract_id: PUBMED:21466515 Maternal and perinatal health outcomes by body mass index category. Aims: To determine the effect of increasing maternal body mass index (BMI) during pregnancy on maternal and infant health outcomes. Methods: The South Australian Pregnancy Outcome Unit's population database (2008) was accessed to determine pregnancy outcomes according to maternal BMI. Women with a normal BMI (18.5-24.9 kg/m(2)) formed a reference population, to which women in other BMI categories were compared utilising risk ratios and 95% confidence intervals. Results: Overweight and obese women had an increased risk of gestational diabetes, hypertension and iatrogenic preterm birth. Labour was more likely to be induced, and the risk of caesarean birth was increased. Infants were more likely to require resuscitation at birth and to have a birth weight in excess of 4 kg. The risk increased with increasing maternal BMI. Conclusions: There is a well-documented increased risk of maternal and perinatal health complications for women who are overweight or obese during pregnancy. abstract_id: PUBMED:23696430 The effect of maternal body mass index on perinatal outcomes in women with diabetes. Objective: To determine the effect of increasing maternal obesity, including superobesity (body mass index [BMI] ≥ 50 kg/m2), on perinatal outcomes in women with diabetes. Study Design: Retrospective cohort study of birth records for all live-born nonanomalous singleton infants ≥ 37 weeks' gestation born to Missouri residents with diabetes from 2000 to 2006. Women with either pregestational or gestational diabetes were included. Results: There were 14,595 births to women with diabetes meeting study criteria, including 7,082 women with a BMI > 30 kg/m2 (48.5%).
Compared with normal-weight women with diabetes, increasing BMI category, especially superobesity, was associated with a significantly increased risk for preeclampsia (adjusted relative risk [aRR] 3.6, 95% confidence interval [CI] 2.5, 5.2) and macrosomia (aRR 3.0, 95% CI 1.8, 5.40). The majority of nulliparous obese women with diabetes delivered via cesarean, including 50.5% of obese, 61.4% of morbidly obese, and 69.8% of superobese women. The incidence of primary elective cesarean among nulliparous women with diabetes increased significantly with increasing maternal BMI, with over 33% of morbidly obese and 39% of superobese women with diabetes delivering electively by cesarean. Conclusion: Increasing maternal obesity in women with diabetes is significantly associated with higher risks of perinatal complications, especially cesarean delivery. abstract_id: PUBMED:26376766 Associations between body mass index and maternal weight gain on the delivery of LGA infants in Chinese women with gestational diabetes mellitus. Background: Women with gestational diabetes mellitus (GDM) are at increased risk for maternal and fetal complications, including delivery of large for gestational age (LGA) infants. Maternal body mass index (BMI) and excessive weight gain during pregnancy are associated with delivery of LGA infants. However, whether maternal BMI and weight gain are associated with LGA infants in women with GDM is unclear. Basic Procedures: Data on 1049 pregnant women who developed GDM were collected from a university teaching hospital in China and retrospectively analyzed. Data included maternal BMI, weight gain, incidence of LGA and gestational week at diagnosis. Main Findings: The incidence of LGA infants was significantly associated with maternal BMI (p=0.0002) in women with GDM. The odds of delivery of an LGA infant for obese or overweight pregnant women were 3.8 and 2 times those of normal-weight pregnant women, respectively. The incidence of LGA infants was also significantly associated with maternal weight gain in women with GDM. The odds of delivery of an LGA infant for pregnant women with excessive weight gain were 3.3 times those of pregnant women with normal weight gain. The effect of weight gain was not significantly different across maternal BMI categories. Principal Conclusion: The incidence of delivery of LGA infants in Chinese women with GDM who were overweight or obese is higher than in Caucasian, Hispanic, and Asian-American women. The effects of maternal BMI and weight gain on the delivery of LGA infants by women with GDM are additive. abstract_id: PUBMED:31174246 Influence of Body Mass Index on Gestation and Delivery in Nulliparous Women: A Cohort Study. Aims: To assess the influence of obesity on pregnancy and delivery in pregnant nulliparous women. Methods: A cohort, longitudinal, retrospective study was conducted in Spain with 710 women, of whom 109 were obese (BMI > 30) and 601 were normoweight (BMI < 25). Consecutive nonrandom sampling was used. Variables: maternal age, BMI, gestational age, fetal position, start of labor, dilation and expulsion times, type of delivery and newborn weight and height. Results: The dilation time in obese women (309.81 ± 150.42 min) was longer than that in normoweight women (281.18 ± 136.90 min) (p = 0.05, Student's t-test). A higher fetal weight was more likely to lead to a longer dilation time (OR = 0.43, 95% CI 0.010-0.075, p < 0.001) and expulsion time (OR = 0.027, 95% CI 0.015-0.039, p < 0.001).
A higher maternal age was more likely to lead to a longer expulsion time (OR = 2.054, 95% CI 1.17-2.99, p < 0.001). Obese women were more likely to have gestational diabetes [relative risk (RR) = 3.612, 95% CI 2.102-6.207, p < 0.001], preeclampsia (RR = 5.514, 95% CI 1.128-26.96, p = 0.05), induced birth (RR = 1.26, 95% CI 1.06-1.50, p = 0.017) and cesarean section (RR = 2.16, 95% CI 1.11-4.20, p = 0.022) than normoweight women. Conclusion: Obesity is associated with increased complications during pregnancy and an increased incidence of cesarean section and induced birth, but it has no significant effect on the delivery time. abstract_id: PUBMED:36143886 Correlation between Maternal Weight Gain in Each Trimester and Fetal Growth According to Pre-Pregnancy Maternal Body Mass Index in Twin Pregnancies. Background and Objectives: This study aimed to determine the correlation between maternal weight gain in each trimester and fetal growth according to pre-pregnancy maternal body mass index in twin pregnancies. Materials and Methods: We conducted a retrospective review of the medical records of 500 twin pregnancies delivered at 28 weeks' gestation or greater at a single tertiary center between January 2011 and December 2020. We measured the height, pre-pregnant body weight, and maternal body weight of women with twin pregnancies and evaluated the relationship between the maternal weight gain at each trimester and fetal growth restriction according to pre-pregnancy body mass index. Results: The overweight pregnant women were older than the normal or underweight pregnant women, and the risk of gestational diabetes was higher. The underweight pregnant women were younger, and the incidence of preterm labor and short cervical length during pregnancy was higher in this group. In normal weight pregnant women, newborn babies' weight was heavier when their mothers gained weight, especially when they gained weight in the second trimester. Mothers' weight gain in the first trimester was not a significant factor in predicting fetal growth. The single most predictive factor for small neonates was weight gain during weeks 24-28 and 15-18, and the cutoff value was 6.2 kg (area under the curve 0.592, p < 0.001). Conclusions: In twin pregnancy, regardless of the pre-pregnant body mass index, maternal weight gain affected fetal growth. Furthermore, weight gain in the second trimester of pregnancy is considered a powerful indicator of fetal growth, especially in normal weight pregnancies. abstract_id: PUBMED:34556066 Maternal body mass index and country of birth in relation to the adverse outcomes of large for gestational age and gestational diabetes mellitus in a retrospective cohort of Australian pregnant women. Background: The prevalence of gestational diabetes mellitus in Australia has been rising in line with the increased incidence of maternal overweight and obesity. Women with gestational diabetes mellitus, high body mass index or both are at an elevated risk of birthing a large for gestational age infant. The aim was to explore the relationship of country of birth and maternal body mass index with large for gestational age and gestational diabetes mellitus. In addition, the study aimed to provide information for clinicians when making a risk assessment for large for gestational age babies. Method: A retrospective cohort study of 27,814 women residing in Australia but born in other countries, who gave birth to a singleton infant between 2008 and 2017, was undertaken.
Logistic regression analysis was used to examine the association between the aforementioned variables. Results: A significantly higher proportion of large for gestational age infants was born to overweight and obese women compared to those who were classified as underweight and healthy weight. Asian-born women residing in Australia, with a body mass index of ≥40 kg/m2, had an adjusted odds ratio of 9.926 (3.859-25.535) for birthing a large for gestational age infant. Conversely, Australian-born women with a body mass index of ≥40 kg/m2 had an adjusted odds ratio of 2.661 (2.256-3.139) for the same outcome. Women born in Australia were at high risk of birthing a large for gestational age infant in the presence of insulin-requiring gestational diabetes mellitus, but this risk was not significant for those with the diet-controlled type. Asian-born women did not present an elevated risk of birthing a large for gestational age infant in either the diet-controlled or the insulin-requiring gestational diabetes mellitus group. Conclusions: Women who are overweight or obese, and considering a pregnancy, are encouraged to seek culturally appropriate nutrition and weight management advice during the periconception period to reduce their risk of adverse outcomes. abstract_id: PUBMED:27454190 The associations of maternal body mass index with birthweight and placental weight. Does maternal diabetes matter? A population study of 106 191 pregnancies. Introduction: The aim of this study was to examine whether the associations of maternal body mass index with offspring birthweight and placental weight differ by maternal diabetes status. Material And Methods: We performed a population study of 106 191 singleton pregnancies by using data from the years 2009-2012 in the Medical Birth Registry of Norway. We estimated changes in birthweight and in placental weight (in grams) by maternal body mass index by linear regression analysis. Results: In pregnancies of women without diabetes, birthweight increased by 14.7 g (95% confidence interval 14.1-15.2) per unit increase in maternal body mass index, and the increase in placental weight was 4.2 g (95% confidence interval 4.0-4.4). In pregnancies of women with gestational diabetes, the corresponding figures were 11.8 g (95% confidence interval 8.3-15.4) and 2.9 g (95% confidence interval 1.7-4.0). In pregnancies of women with type 1 diabetes we found no significant changes in birthweight or in placental weight by maternal body mass index. Overall, mean birthweight was 513.9 g (95% confidence interval 475.6-552.1) higher in pregnancies involving type 1 diabetes than in pregnancies of women without diabetes. Mean placental weight was 102.1 g (95% confidence interval 89.3-114.9) higher. Also, in pregnancies of women with gestational diabetes, both birthweight and placental weight were higher than in women without diabetes (168.2 g and 46.5 g, respectively). Adjustments were made for maternal body mass index and gestational age at birth. Conclusions: Birthweight and placental weight increased with increasing maternal body mass index in pregnancies of women without diabetes and in pregnancies of women with gestational diabetes, but not in pregnancies of women with type 1 diabetes. Independent of body mass index, mean birthweight and mean placental weight were highest in pregnancies of women with type 1 diabetes. Answer: Yes, maternal body mass index (BMI) does influence the treatment effect in women with mild gestational diabetes (GDM).
A study found that treatment of women with mild GDM who were overweight or Class I and Class II obese was associated with a reduction in both large for gestational age (LGA) birth weight and neonatal fat mass. However, these beneficial effects of treatment on fetal growth were not apparent for normal weight or Class III obese women (BMI ≥ 40 kg/m^2) (PUBMED:24839145). Additionally, another study indicated that in women with GDM, the concentrations of certain steroid hormones and their ratios, which reflect the activity of the cytochrome P450 aromatase enzyme in the steroidogenic pathway, were affected by maternal BMI. Overweight women with GDM showed reduced estrogen production and decreased insulin sensitivity compared with GDM women with normal weight (PUBMED:36440200). Moreover, the effect of maternal BMI on the delivery of LGA infants in Chinese women with GDM was found to be significant, with overweight or obese women having a higher incidence of delivering LGA infants compared to normal weight women (PUBMED:26376766). In general, increasing maternal obesity, including superobesity, is significantly associated with higher risks of perinatal complications, especially cesarean delivery, in women with diabetes (PUBMED:23696430). Furthermore, maternal BMI is associated with an increased risk of gestational diabetes, hypertension, and iatrogenic preterm birth, among other complications (PUBMED:21466515). Overall, these findings suggest that maternal BMI is an important factor that can influence the treatment outcomes and risks associated with mild GDM, and it should be considered when managing and treating women with this condition.
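Several of the cohort studies above report adjusted odds ratios for LGA birth from logistic regression on maternal BMI category and weight gain. As a minimal, hypothetical sketch of that kind of analysis — the variable names and simulated data below are assumptions, not the studies' actual datasets — such a model could be fitted like this:

```python
# Hedged sketch: adjusted odds ratios for LGA birth by BMI category and
# excessive weight gain, via logistic regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "bmi_cat": rng.choice(["normal", "overweight", "obese"], size=n),
    "excess_gain": rng.integers(0, 2, size=n),  # 1 = excessive gestational weight gain
})
# Simulate higher LGA odds for higher BMI category and excessive weight gain
logit = (-2.0
         + df["bmi_cat"].map({"normal": 0.0, "overweight": 0.7, "obese": 1.3})
         + 1.2 * df["excess_gain"])
df["lga"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = smf.logit("lga ~ C(bmi_cat, Treatment('normal')) + excess_gain",
                  data=df).fit(disp=0)
print(np.exp(model.params))      # adjusted odds ratios vs the normal-weight reference
print(np.exp(model.conf_int()))  # 95% confidence intervals on the OR scale
```

Exponentiating the fitted coefficients yields the adjusted odds ratios and confidence intervals in the form the abstracts report (e.g., aOR 2.661, 95% CI 2.256-3.139).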
Instruction: Angioplasty balloon compliance: can in vivo size be predicted from in vitro pressure profile measurements? Abstracts: abstract_id: PUBMED:8723598 Angioplasty balloon compliance: can in vivo size be predicted from in vitro pressure profile measurements? Background And Hypothesis: This study was undertaken to determine whether the behavior of angioplasty balloons within coronary arteries may differ from that anticipated from data provided by the manufacturers. In particular, the in vitro pressure-diameter profiles may not truly represent in vivo sizes. Methods: Thus, we assessed the degree of correlation of in vitro with in vivo measurements obtained during routine angioplasty practice. In vivo size of 2.5 mm compliant (n = 8) and 3 mm semicompliant (n = 8) balloons was assessed using quantitative angiography for first, second, and third inflations. Results: In vivo size was less than expected from in vitro measurements. In general, balloon diameter increased with inflation pressures up to 8 atmospheres, and some degree of elastic recoil was evident with both balloon types after the last inflation. Conclusion: In vivo balloon size may not be accurately predicted from manufacturers' published data. Size is more likely to be affected by factors such as lesion characteristics and elasticity of the vessel wall than by balloon material compliance characteristics. abstract_id: PUBMED:8770494 Oscillating balloon angioplasty: does pressure oscillation reach the balloon? Oscillating pressure inflations may minimize trauma to the coronary artery during coronary angioplasty. We measured in vitro diameters of polyolefin copolymer compliant angioplasty balloons (as a surrogate for pressure) during pressure oscillation at the inflator to determine if pressure oscillations at the inflator were conducted to the balloon. Balloon diameter oscillation increased as cycle length and inflator pressure amplitude increased. Currently practiced oscillating inflation (cycle length 2-3 sec, amplitude 2-3 atm) did not effectively transmit oscillation to the balloon. Clinically feasible optimal balloon oscillation was achieved at a cycle length of 6 sec and pressure amplitude of about 3 atm. abstract_id: PUBMED:30534326 Iliac artery fibromuscular dysplasia successfully treated by balloon angioplasty guided by intravascular ultrasound and pressure wire measurements: A case report. A 71-year-old woman was admitted with a 6-month history of lower limb intermittent claudication. She had well-controlled hypertension and no other risk factor of atherosclerosis. Angiographic findings revealed the "string of beads" pattern in bilateral renal arteries and external iliac arteries. She was diagnosed with combined renal and iliac fibromuscular dysplasia (FMD) and underwent balloon angioplasty for bilateral external iliac arteries. Angiography did not accurately show the severity of stenosis and the location of intraluminal obstruction. In contrast, intravascular ultrasound (IVUS) with pressure gradient measurements using a wire clearly identified the primary site of stenosis and determined the treatment efficiency. In conclusion, FMD of the external iliac arteries was successfully treated by balloon angioplasty guided by IVUS and pressure wire measurements. <Learning objective: External iliac artery fibromuscular disease is relatively rare.
Angiography is effective for diagnosing this disease; however, angiography has limitations in terms of plaque characterization, measurement of vessel size, and determination of procedural end. In this study, a case of a 71-year-old woman with FMD of the external iliac arteries was successfully treated with balloon angioplasty guided by IVUS and pressure wire gradient measurements.>. abstract_id: PUBMED:17943350 Balloon angioplasty optimization: should we measure balloon volume as well as pressure? Purpose: To investigate the influence that measurement of balloon volume as a controlled variable in addition to balloon pressure has on the outcome of balloon angioplasty in an experimental model. Methods: One hundred and three segments of explanted normal porcine carotid arteries were obtained. Five were used as controls, and the remaining 98 were subjected to balloon angioplasty with simultaneous measurement of balloon volume and pressure. These arteries were randomized into two groups. In one group the endpoint of the angioplasty was determined by balloon pressure (pressure-limited group, PLG) and in the other group by balloon volume (volume-limited group, VLG). Pressure/volume curves for each procedure were constructed by continuous measurement of both parameters by a purpose-designed computer-controlled inflation device. The diameter of each arterial segment was measured by intravascular ultrasound (IVUS) and the ratio of the inflated balloon to arterial diameter calculated. Arterial appearances after angioplasty were recorded using IVUS. Results: The balloon volumes measured at the endpoint of angioplasty were significantly smaller in the PLG compared with the VLG (p < 0.001). Three types of pressure/volume curves were identified: A, B, and C. In the type A curves, IVUS identified fissures in 28% (17/60) and the examination was normal in 72% (43/60). In the type B curves, IVUS identified fissures in 44% (4/9), dissections in 22% (2/9), and the examination was normal in 33% (3/9). In the type C curves, IVUS identified fissures in 44% (4/9) and dissection in 56% (5/9) with no normal examinations. In undamaged arterial segments a very high correlation was achieved between balloon volume and the balloon/artery ratio (Pearson correlation = -0.979, R² = 0.957, p < 0.0001, n = 27). Conclusion: The measurement of pressure and volume during angioplasty enabled the construction of pressure/volume curves that showed deviations from the curves obtained in air. The balloon volume results, and significant deviation of the curve shape from the control curve shape, predicted vessel damage, which was confirmed by the IVUS appearance of the vessel after angioplasty. When pressure was used as the endpoint of balloon inflation the balloons were significantly underdilated compared with the manufacturer's nominal sizes. These data indicate that monitoring of pressure and volume during angioplasty may provide an alternative method of predicting vessel damage. abstract_id: PUBMED:34911460 Effect of coarctation of aorta anatomy and balloon profile on the outcome of balloon angioplasty in infantile coarctation. Objective: Coarctation of the aorta (CoA) is a relatively common cardiovascular disorder. The present study aimed to evaluate the effect of CoA anatomy and high- versus low-pressure balloons on the outcome of balloon angioplasty among neonates and infants. Methods: In this retrospective study, the neonates and infants undergoing balloon angioplasty at Namazi hospital were enrolled.
After balloon angioplasty, immediate post-procedural data were recorded. Moreover, midterm echocardiographic information was collected via electronic cardiac records of pediatric wards and clinical and echocardiographic data at least 12 months after balloon angioplasty. Finally, data were analyzed using SPSS-20. Results: In this study, 42 infants were included. The median age at the time of balloon angioplasty was 1.55 (range 0.1-12) months and 66.7% of the patients were male. The mean pressure gradient of coarctation was 38.49 ± 24.97 mmHg, which decreased to 7.61 ± 8.00 mmHg (P < 0.001). A high-pressure balloon was used in 27, and a low-pressure balloon was used in 15 patients. The CoA pressure gradient changed by 30.89 ± 18.06 mmHg in the high-pressure group and by 24.53 ± 20.79 mmHg in the low-pressure balloon group (P = 0.282). In the high-pressure balloon group, 14.81%, and in the low-pressure group, 33.33%, had recoarctation and needed a second balloon angioplasty (p < 0.021). Infants with discrete coarctation had a greater decrease in gradient and lower recoarctation. Conclusion: The recoarctation rate was lower in the high-pressure balloon group. Infants with discrete CoA had a better response to the balloon, with a greater decrease in gradient and a lower recoarctation rate. Therefore, the stenotic segment anatomy needs to be considered in the selection of treatment methods. abstract_id: PUBMED:6237808 Balloon dilatation angioplasty: nonsurgical management of coarctation of the aorta. Balloon dilatation angioplasty was successfully performed in five patients (ages 18 months to 17 years) with discrete aortic coarctation. The catheter size was No. 8F or 9F. Selection of balloon diameter was based on angiographic measurements of the aorta determined proximal and distal to the coarctation site. A 10 sec inflation-deflation cycle at 6 to 8 atmospheres (90 to 120 psi) was performed. The systolic pressure gradients across the coarctation before balloon dilatation angioplasty ranged from 35 to 70 mm Hg. Systolic pressure gradients after balloon dilatation angioplasty ranged from 0 to 10 mm Hg. All patients had normalized blood pressure immediately. Abnormal pulsed Doppler echocardiograms were observed in all patients before balloon dilatation angioplasty; four patients had normal echocardiograms after balloon dilatation angioplasty. No serious intraprocedural complications occurred. One patient required femoral artery thrombectomy 36 hr after balloon dilatation angioplasty. One to 6 months after balloon dilatation angioplasty, no patients had evidence of restenosis of coarctation. Early results suggest that balloon dilatation angioplasty may offer a safe and effective nonsurgical alternative for the treatment of discrete coarctation in older infants and children. Long-term follow-up for the incidence of restenosis and formation of aneurysms will ultimately determine the efficacy and safety of this procedure. abstract_id: PUBMED:24255092 Blood pressure normalization post-jugular venous balloon angioplasty. Objective: This study is the first in a series investigating the relationship between autonomic nervous system dysfunction and chronic cerebrospinal venous insufficiency in multiple sclerosis patients.
We screened patients for the combined presence of the narrowing of the internal jugular veins and symptoms of autonomic nervous system dysfunction (fatigue, cognitive dysfunction, sleeping disorders, headache, thermal intolerance, bowel/bladder dysfunction) and determined systolic and diastolic blood pressure responses to balloon angioplasty. Methods: The criteria for eligibility for balloon angioplasty intervention included ≥ 50% narrowing in one or both internal jugular veins, as determined by magnetic resonance venography, and ≥ 3 clinical symptoms of autonomic nervous system dysfunction. Blood pressure was measured at baseline and post-balloon angioplasty. Results: Among patients who were screened, 91% were identified as having internal jugular vein narrowing (with obstructing lesions) combined with the presence of three or more symptoms of autonomic nervous system dysfunction. Balloon angioplasty reduced the average systolic and diastolic blood pressure. However, blood pressure categorization showed a biphasic response to balloon angioplasty. The procedure increased blood pressure in multiple sclerosis patients who presented with baseline blood pressure within lower limits of normal ranges (systolic ≤ 105 mmHg, diastolic ≤ 70 mmHg) but decreased blood pressure in patients with baseline blood pressure above normal ranges (systolic ≥ 130 mmHg, diastolic ≥ 80 mmHg). In addition, gender differences in baseline blood pressure subcategories were observed. Discussion: The coexistence of internal jugular vein narrowing and symptoms of autonomic nervous system dysfunction suggests that the two phenomena may be related. Balloon angioplasty corrects blood pressure deviation in multiple sclerosis patients undergoing internal jugular vein dilation. Further studies should investigate the association between blood pressure deviation and internal jugular vein narrowing, and whether blood pressure normalization affects patients' clinical outcomes. abstract_id: PUBMED:2529053 Influence of inflation pressure and balloon size on the development of intimal hyperplasia after balloon angioplasty. A study in the atherosclerotic rabbit. To evaluate the effect of balloon size and inflation pressure on acute and subsequent outcome following balloon angioplasty (BA), 70 New Zealand White rabbits with bilateral femoral atherosclerosis were assigned to four groups: group 1, oversized balloon, low inflation pressure (n = 35 vessels; balloon size, 3.0 mm/inflation pressure, 5 atm); group 2, oversized balloon, high inflation pressure (n = 36; 3.0 mm/10 atm); group 3, appropriate size, low inflation pressure (n = 17; 2.5 mm/5 atm); and group 4, appropriate size balloon, high inflation pressure (n = 19; 2.5 mm/10 atm). Angiograms were obtained before, 10 minutes after, and 28 days after BA and read by two blinded observers using electronic calipers. The in vivo balloon-to-vessel ratio was measured for each group. There were eight non-BA controls. Rabbits were sacrificed either immediately (n = 34) or at 28 days after BA (n = 36), with the femoral vessels pressure perfused for histologic and morphometric analysis. The latter was performed at 28 days only. Absolute angiographic diameters increased in all groups immediately after BA (p < 0.01). Acute angiographic success, defined as greater than 20% increase in luminal diameter, was higher using high inflation pressure (group 2, 32/36 [89%] and group 4, 16/19 [84%] vs. group 1, 23/35 [66%] and group 3, 9/17 [53%]; p < 0.05).
A 3.0-mm balloon resulted in significant oversizing irrespective of inflation pressure (balloon-to-vessel ratio, 1.5 ± 0.1 vs. 1.1 ± 0.1 to 1, for the 2.5-mm balloon). Vessels exposed to high inflation pressure had a significantly higher incidence of mural thrombus, dissection (p < 0.01), and medial necrosis versus low pressure (p < 0.05). At 28 days, the rates of restenosis (defined as greater than 50% loss of initial gain) were 14/20 (70%), 11/16 (69%), 5/10 (50%), and 5/10 (50%) for groups 1 through 4 (p = NS; a trend in favor of the groups using an oversized balloon). There was an increase in the degree of intimal hyperplasia by morphometric analysis in all groups, being most marked in group 2 (oversized balloon and high inflation pressure, 1.7 ± 0.9 vs. 0.5 ± 0.2 mm for controls, p < 0.001). We reached two conclusions. First, all protocols resulted in a significant increase in luminal diameter immediately after angioplasty with the highest success rate in vessels subjected to high pressure dilatation. (ABSTRACT TRUNCATED AT 400 WORDS) abstract_id: PUBMED:6230907 Higher balloon dilatation pressure in coronary angioplasty. The advent of improved balloon catheters for percutaneous transluminal coronary angioplasty (PTCA) in 1981 extended the theoretic pressure range available for dilatation from 7 atm to 13 atm. The impact of higher dilatation pressure on results of PTCA was studied. The last 100 consecutive patients treated exclusively with the old balloon type (low-pressure group) were compared to the first 100 consecutive patients treated exclusively with the new balloon type (high-pressure group). There was no difference in age, sex, artery distribution, initial degree of stenosis, and initial pressure gradient between the two groups. The mean peak pressure applied was 7.0 ± 1.6 atm in the low-pressure group and 8.5 ± 2.1 atm in the high-pressure group (p < 0.001). The average balloon diameter used and the number and duration of balloon fillings were similar in both groups. Primary success, complications, and residual degree of stenosis were not different in the two groups. The residual pressure gradient, however, was significantly lower in the high-pressure group (11 ± 7 mm Hg) than in the low-pressure group (16 ± 10 mm Hg) (p < 0.01). This indicates a better immediate hemodynamic result without increased risk. It is concluded that it is safe to perform PTCA with the new balloon types allowing for higher pressures. The increment in average pressure used for dilatation, which occurred incidentally, improved the average hemodynamic outcome. This may influence recurrence rate and deserves further investigation by randomized trials. abstract_id: PUBMED:8833078 Flow rate-pressure drop relation in coronary angioplasty: catheter obstruction effect. Quantitative methods to measure the hemodynamic consequences of various endovascular interventions including balloon angioplasty are limited. Catheters measuring translesional pressure drops during balloon angioplasty procedures can cause flow blockage and thus inaccurate estimates of pre- and post-intervention flow rates. The purpose of this investigation was to examine the influence of the presence and size of an angioplasty catheter on measured mean pressure gradients across human coronary artery stenoses.
Analytical flow modeling and in vitro experimental evidence, coupled with angiographic data on the dimensions and shape of stenotic vessel segments before and after angioplasty, indicated significant flow blockage effects with the catheter present. Answer: The in vivo size of angioplasty balloons may not be accurately predicted from in vitro pressure profile measurements provided by manufacturers. A study found that in vivo balloon size was generally less than expected from in vitro measurements, suggesting that factors such as lesion characteristics and the elasticity of the vessel wall have a greater impact on balloon size than the compliance characteristics of the balloon material itself (PUBMED:8723598). Additionally, oscillating pressure inflations, which are thought to minimize trauma to the coronary artery during angioplasty, were found to not effectively transmit oscillation to the balloon under conditions typically used in practice. Optimal balloon oscillation was achieved with different cycle lengths and pressure amplitudes, indicating that in vivo conditions can differ significantly from in vitro expectations (PUBMED:8770494). Furthermore, the use of intravascular ultrasound (IVUS) and pressure wire measurements during balloon angioplasty has been shown to be effective in guiding the procedure, especially in cases where angiography alone does not accurately show the severity of stenosis or the location of intraluminal obstruction, such as in iliac artery fibromuscular dysplasia (PUBMED:30534326). This suggests that relying solely on in vitro pressure profiles for balloon sizing may not be sufficient for optimal treatment outcomes. In another study, the measurement of balloon volume in addition to pressure during angioplasty provided insights into the outcome of the procedure, with pressure/volume curves predicting vessel damage confirmed by IVUS. This indicates that monitoring both pressure and volume could be an alternative method for predicting vessel damage and ensuring proper balloon dilation (PUBMED:17943350). Overall, these findings suggest that in vivo balloon size and behavior during angioplasty cannot be accurately predicted solely based on in vitro pressure profile measurements, and that additional in vivo measurements and considerations are necessary for optimal angioplasty outcomes.
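The pressure/volume monitoring idea from PUBMED:17943350 lends itself to a simple computational check: compare each measured inflation sample against the baseline curve recorded in air and flag large deviations. The sketch below is a minimal, hypothetical illustration of that logic — the sample values and the 15% tolerance are assumptions, not figures from the study, and this is not the study's actual inflation-device software:

```python
# Hedged sketch: flagging deviation of a measured balloon pressure/volume
# curve from a baseline (in-air) curve, in the spirit of PUBMED:17943350.
import numpy as np

def pv_deviation_flags(pressure_atm, volume_ml,
                       base_pressure_atm, base_volume_ml, tol=0.15):
    """Return a boolean array marking samples whose volume deviates from the
    baseline curve by more than `tol` (fractional). `tol` is an assumed value."""
    expected = np.interp(pressure_atm, base_pressure_atm, base_volume_ml)
    deviation = np.abs(np.asarray(volume_ml) - expected) / expected
    return deviation > tol

# Hypothetical inflation samples from 2 to 8 atm
base_p = np.array([2.0, 4.0, 6.0, 8.0])
base_v = np.array([0.40, 0.55, 0.65, 0.72])   # ml, baseline in air
meas_p = np.array([2.0, 4.0, 6.0, 8.0])
meas_v = np.array([0.41, 0.57, 0.80, 0.95])   # ml, larger than expected

print(pv_deviation_flags(meas_p, meas_v, base_p, base_v))
# -> [False False  True  True]: the later samples deviate from the in-air
#    curve, which in the study's framework would raise suspicion of damage.
```

A balloon-to-vessel ratio check (inflated balloon diameter divided by IVUS-measured arterial diameter) could be layered on the same data to warn of the oversizing that the rabbit study above associates with intimal hyperplasia.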
Instruction: Does an advanced insulin education programme improve outcomes and health service use for people with Type 2 diabetes? Abstracts: abstract_id: PUBMED:20002481 Does an advanced insulin education programme improve outcomes and health service use for people with Type 2 diabetes? A 5-year follow-up of the Newcastle Empowerment course. Objective: To show that an advanced diabetes education programme delivers sustained benefits to people with diabetes prescribed insulin and healthcare providers over and above those provided by basic diabetes education. Methods: An historical cohort study of 68 people with Type 1 and 51 people with Type 2 diabetes on insulin who attended the 4-day Newcastle Empowerment programme in 2001 and 2002 compared with 71 people with Type 1 and 312 people with Type 2 diabetes who attended only the basic 4-day insulin education programme over the same period, followed until 2007. Primary outcome was all hospital admissions and emergency visits; secondary outcomes were the composite of first cardiac event or death and readmission for diabetes complications. Cox-proportional hazards regression was used to analyse Type 1 and Type 2 diabetes separately. Results: The empowerment programme significantly delayed time to first hospital admission/visit for patients with Type 2 diabetes; the hazard ratio (HR) of 0.41 (P = 0.01) translates into a delay of almost 3 years; this was partly driven by a significant reduction in cardiovascular events and mortality (HR = 0.24, P = 0.01). These effects were not seen for people with Type 1 diabetes. Conclusions: A one-time, advanced diabetes education programme teaching intensive insulin self-management with an empowerment style can lead to sustained improvement in patient outcomes and reduce use of hospital services for people with Type 2 diabetes on insulin. abstract_id: PUBMED:21715124 The effect of an education programme (MEDIAS 2 ICT) involving intensive insulin treatment for people with type 2 diabetes. Objective: In a randomized, multi-centre trial, the effect of an education programme (MEDIAS 2 ICT) involving intensive insulin treatment for people with type 2 diabetes was compared with an established education programme as an active comparator condition (ACC). Methods: We investigated whether MEDIAS 2 ICT was non-inferior to ACC in overall glycaemic control. Secondary outcomes were the diabetes-related distress, diabetes knowledge, quality of life, self-care behavior, lipids, blood pressure and weight. Results: 186 subjects were randomized. After a six month follow-up the mean HbA1c decrease was 0.37% (from 8.2±1.1% to 7.8±1.5%) in the ACC and 0.63% (from 8.5±1.5% to 7.9±1.2%) in MEDIAS 2 ICT. The mean difference between both groups was -0.26% (95% CI -0.63 to -0.14) in favor of MEDIAS 2 ICT. This result was within the predefined limit for non-inferiority. Diabetes-related distress was significantly more reduced in MEDIAS 2 ICT (-3.4±7.1) than in ACC (0.4±9.0; p=0.31). Conclusion: MEDIAS 2 ICT is as effective in lowering HbA1c as previously established education programmes, but showed superiority in reducing diabetes-related distress. Practical Implications: MEDIAS 2 ICT provides an alternative for education of people with type 2 diabetes treated by multiple injection therapy. abstract_id: PUBMED:27779772 The effectiveness of multimedia education for patients with type 2 diabetes mellitus. 
Aims: The aim of this study was to explore the effectiveness of two types of health education on improving knowledge concerning diabetes and insulin injection, insulin injection skills and self-efficacy, satisfaction with health education and glycated haemoglobin (HbA1c) and creatinine levels among patients with type 2 diabetes who began insulin therapy using a pen injector. Background: Insulin therapy is recommended to facilitate the regulation of plasma glucose; however, patients' acceptance of insulin therapy is generally low. Healthcare providers should help them improve their knowledge of diabetes and insulin injection, as well as their insulin injection skills. Design: A randomized repeated measures experimental study design. Methods: The experimental (n = 21) and control (n = 21) groups received multimedia and regular health education programmes, respectively, from October 2013 to August 2014. Four structured questionnaires were used and videotapes were applied to demonstrate injection skills. Results: Generalized estimating equations showed that the experimental group's scores were significantly higher than those of the control group for diabetes and insulin injection knowledge, insulin injection skills, self-efficacy in insulin injection and satisfaction with health education. On the other hand, an analysis of covariance revealed glycated haemoglobin (HbA1c) and creatinine levels did not differ significantly between the two groups. Conclusions: Implementation of a multimedia diabetes education programme could improve patients' diabetes and insulin injection knowledge, insulin injection skills, self-efficacy in insulin injection and satisfaction with health education. Healthcare providers should improve quality of patient care by providing multimedia diabetes health education. abstract_id: PUBMED:31066115 The effectiveness of a self-efficacy-focused structured education programme on adults with type 2 diabetes: A multicentre randomised controlled trial. Aims And Objectives: To evaluate the effectiveness of a self-efficacy-focused structured education programme on outcomes in adults with type 2 diabetes (T2DM) without insulin therapy. Background: Structured education regarding metabolic control in T2DM adults without insulin therapy has not always been effective, and this lack of effectiveness might be due to overlooking self-efficacy. Whether a self-efficacy-focused structured education programme could improve metabolic and psychosocial outcomes for T2DM adults more effectively remains unknown. Design: A multicentre parallel randomised controlled concealed label trial. Methods: The study was conducted in outpatients of four hospitals in China. A total of 265 T2DM adults without insulin therapy were randomly assigned to an intervention group of a self-efficacy-focused structured education programme (n = 133), or to a control group of routine education (n = 132). The differences in metabolic and psychosocial outcomes were investigated at baseline and at 3- and 6-month follow-ups. Results: The primary outcome of A1C and the secondary outcomes of weight, body mass index, waist circumference, diastolic pressure, self-efficacy, self-management behaviours and knowledge improved significantly in the intervention group compared with the control group at 6-month follow-up. The differences in A1C between groups for patients with a low educational background at 6-month follow-up were significant.
No significant differences were found in the other secondary outcomes of systolic pressure, the blood lipid profile and diabetes distress between groups at 6-month follow-up. Conclusions: This programme can improve glycaemic control, weight control, diastolic pressure, self-efficacy, self-management behaviours and diabetes knowledge for T2DM adults. Relevance To Clinical Practice: This self-efficacy-focused structured education programme is effective and can be incorporated into regular clinical care and led by trained staff (e.g. nurses), and it can be implemented in patients with low educational backgrounds. abstract_id: PUBMED:25657811 Resource use and outcomes associated with initiation of injectable therapies for patients with type 2 diabetes mellitus. Introduction: Management of type 2 diabetes mellitus (T2DM) often requires intervention with oral and injectable therapies. Across National Health Service (NHS) England, injectable therapies may be initiated in secondary, intermediate or primary care. We wished to understand resource utilization, pathways of care, clinical outcomes, and experience of patients with T2DM initiated on injectable therapies. Method: We conducted three service evaluations of initiation of injectable therapies (glucagon-like peptide-1 receptor agonists (GLP-1 RAs) or basal insulin) for T2DM in primary, secondary and intermediate care. Evaluations included retrospective review of medical records and service administration; prospective evaluation of NHS staff time on each episode of patient contact during a 3-month initiation period; patient-experience survey for those attending for initiation. Data from each evaluation were analysed separately and results stratified by therapy type. Results: A total of 133 patients were included across all settings; 54 were basal-insulin initiations. After initiation, the mean HbA1c level fell for both types of therapies, and weight increased for patients on basal insulin yet fell for patients on GLP-1 RA. The mean cost of staff time per patient per initiation was: £43.81 for GLP-1 RA in primary care; £243.49 for GLP-1 RA and £473.63 for basal insulin in intermediate care; £518.99 for GLP-1 RA and £571.11 for basal insulin in secondary care. Patient-reported questionnaires were completed by 20 patients, suggesting that patients found it easy to speak to the diabetes team, had opportunities to discuss concerns, and felt that these concerns were addressed adequately. Conclusion: All three services achieved a reduction in HbA1c level after initiation. Patterns of weight gain with basal insulin and weight loss with GLP-1 RA were as expected. Primary care was less resource-intensive and less costly, driven by lower staff costs and fewer clinic visits. abstract_id: PUBMED:32481316 Patient-perceived service needs and health care utilization in people with type 2 diabetes: A multicenter cross-sectional study. The aim of this study was to investigate service needs and health care utilization among people with type 2 diabetes, and further to identify the relationship between service needs and health care utilization. We used a self-reported questionnaire to collect data regarding demographic and diabetes characteristics, service needs toward self-management and follow-up care, and four types of health care utilization during the past year. Multiple linear regression and binary logistic regression were used to test the impacts of demographic and diabetes characteristics on service needs and health care utilization, respectively.
Spearman rank correlations were used to explore the correlation between service needs and health care utilization. We recruited 1796 participants with type 2 diabetes from 20 community health centers across 12 cities of Sichuan Province in China. Needs of self-management and follow-up had significant positive correlations with health care utilization. Participants rated nutrition as the most needed aspect of self-management (78.5%), and out-patient visit was the most popular type of follow-up (66.8%). Educational level and treatment modality were predictors of self-management needs. Low educational level (elementary school or below, β = 0.11, P = .008; middle school, β = 0.10, P = .015) and insulin treatment (β = 0.08, P = .007) were positive factors of self-management needs. Younger age (age < 45 years old, β = 0.07, P = .046), being employed (β = 0.14, P < .001), and underdeveloped region (β = 0.16, P < .001) were positive factors of follow-up care needs. Elementary educational level (OR: 0.53; CI: 0.30-0.96) and underdeveloped region (OR: 0.01; CI: 0.01-0.07) were protective factors of general practitioner visit; in contrast, those factors were risk factors of specialist visit (elementary educational level, OR: 1.69; CI: 1.13-2.5; underdeveloped region, OR: 2.93; CI: 2.06-4.16) and emergency room visit (elementary educational level, OR: 2.97; CI: 1.09, 8.08; underdeveloped region, OR: 6.83; CI: 2.37-14.65). The significant positive relationship between service needs and health care utilization demonstrated the role of service needs in influencing health care utilization. When self-management education is provided, age, educational level, employment status, treatment modality, and region should be considered to offer more appropriate education and to improve health care utilization. abstract_id: PUBMED:17584424 An evaluation of an insulin transfer programme delivered in a group setting. Aim: This study assesses the efficacy of a group education programme through the improvement of the patients' HbA1c status and their overall understanding of diabetes. Background: The transfer from tablets to insulin is a crucial time for patients with diabetes. The provision of diabetes education is essential, enabling patients to develop a knowledge base from which to manage their diabetes and consequently take control of their life. The growing numbers of referrals for insulin transfer meant that the traditional one-to-one education approach is unrealistic and a local diabetes team developed a group programme to meet the demand. Method: The study has a pre-post test design. Biomedical outcomes are: HbA1c and weight. Self-report outcomes are: diabetes knowledge, diabetes treatment satisfaction (DTSQ) and quality of life (EQ-5D). All outcomes were measured prior to the group education programme and then after the programme had been completed. HbA1c was also measured again at three months after the education programme had finished. A repeated measures ANOVA was used to analyse the main data. Results: Seventy-two patients were recruited into the study and at follow-up 77% (n = 65) remained in the study. The mean age of the participants was 64.5 (SD 12.01) years and the median time since being diagnosed with diabetes was 7.3 years (range two months to 20 years). There was an overall significant reduction in the HbA1c scores across the three time points (F(1.5) = 57.87, p < 0.0001).
Post hoc tests indicated significant reductions (p < 0.0001) from the pregroup mean at both the postgroup (9.84% vs. 8.54%) and three-month follow-up means (9.84% vs. 8.10%). The difference between times 2 and 3 (8.54% vs. 8.10%) was also significant (p < 0.0001). There were no significant changes in weight. After the programme, satisfaction with diabetes treatment was high and knowledge about diabetes had improved (t(54) = 7.46, p < 0.0001). There were significant improvements reported on the five health dimensions of the EQ-5D. Conclusion: These results indicate that a group education programme can be an effective method of helping patients transfer from tablets to insulin. Relevance To Clinical Practice: These group sessions have reduced waiting times for patients transferring from tablets to insulin and have proved an effective use of Diabetes Specialist Nurses' time. abstract_id: PUBMED:35019844 Use of Health Information Technology by Adults With Diabetes in the United States: Cross-sectional Analysis of National Health Interview Survey Data (2016-2018). Background: The use of health information technology (HIT) has been proposed to improve disease management in patients with type 2 diabetes mellitus. Objective: This study aims to report the prevalence of HIT use in adults with diabetes in the United States and examine the factors associated with HIT use. Methods: We analyzed data from 7999 adults who self-reported a diabetes diagnosis as collected by the National Health Interview Survey (2016-2018). All analyses were weighted to account for the complex survey design. Results: Overall, 41.2% of adults with diabetes reported looking up health information on the web, and 22.8% used eHealth services (defined as filling a prescription on the web, scheduling an appointment with a health care provider on the web, or communicating with a health care provider via email). In multivariable models, patients who were female (vs male: prevalence ratio [PR] 1.16, 95% CI 1.10-1.24), had higher education (above college vs less than high school: PR 3.61, 95% CI 3.01-4.33), had higher income (high income vs poor: PR 1.40, 95% CI 1.23-1.59), or had obesity (vs normal weight: PR 1.11, 95% CI 1.01-1.22) were more likely to search for health information on the web. Similar associations were observed among age, race and ethnicity, education, income, and the use of eHealth services. Patients on insulin were more likely to use eHealth services (on insulin vs no medication: PR 1.21, 95% CI 1.04-1.41). Conclusions: Among adults with diabetes, HIT use was lower in those who were older, were members of racial minority groups, had less formal education, or had lower household income. Health education interventions promoted through HIT should account for sociodemographic factors. abstract_id: PUBMED:28257159 The effect of an education programme (MEDIAS 2 BSC) of non-intensive insulin treatment regimens for people with Type 2 diabetes: a randomized, multi-centre trial. Aims: A self-management oriented education programme (MEDIAS 2 BSC) for people with Type 2 diabetes who are on a non-intensive insulin treatment regimen was developed. In a randomized, multi-centre trial, the effect of MEDIAS 2 BSC was compared with an established education programme that acted as a control group. Methods: The primary outcome was the impact of MEDIAS 2 BSC on glycaemic control.
Secondary outcomes included the incidence of severe hypoglycaemia, hypoglycaemia unawareness, diabetes-related distress, diabetes knowledge, quality of life and self-care behaviour. Results: In total, 182 participants were randomized to the control group or MEDIAS 2 BSC [median age 64.0 (interquartile range 58.0-68.5) vs. 63.5 (57.0-70.0) years; HbA1c 62.8 ± 12.7 mmol/mol vs. 63.7 ± 14.0 mmol/mol; 7.9% ± 1.2% vs. 8.0% ± 1.3%]. After a 6-month follow-up, there was a mean decrease in HbA1c of 3.5 mmol/mol (0.32%) in the control group and 6.7 mmol/mol (0.61%) in MEDIAS 2 BSC. After adjusting for baseline differences and study centre, the mean difference between the groups was -3.3 mmol/mol [95% confidence interval (CI) -5.90 to -0.54 mmol/mol] [-0.30% (95% CI -0.54 to -0.05)] in favour of MEDIAS 2 BSC (P = 0.018). There were no increases in severe hypoglycaemia or hypoglycaemia unawareness. The education programmes had no significant effects on psychosocial outcome variables. Conclusion: MEDIAS 2 BSC was more effective in lowering HbA1c than the control condition. MEDIAS 2 BSC is a safe educational tool that improves glycaemic control without increasing the risk for hypoglycaemia. (Clinical Trials Registry No. NCT 02748239). abstract_id: PUBMED:35000599 The design of an evaluation framework for diabetes self-management education and support programs delivered nationally. Background: The aim of this work was to develop a National Evaluation Framework to facilitate the standardization of delivery, quality, reporting, and evaluation of diabetes education and support programs delivered throughout Australia through the National Diabetes Services Scheme (NDSS). The NDSS is funded by the Australian Government, and provides access to diabetes information, education, support, and subsidized product across diverse settings in each state and territory of Australia through seven independent service providers. This article reports the approach undertaken to develop the Framework. Methods: A participatory approach was undertaken, focused on adopting nationally consistent outcomes and indicators, nominating objectives and measurement tools, specifying evaluation processes, and developing quality standards. Existing programs were classified based on related, overarching indicators, enabling the adoption of a tiered system of evaluation. Results: Two outcomes (i.e., improved clinical outcomes, reduced cost) and four indicators (i.e., improved knowledge and understanding, self-management, self-determination, psychosocial adjustment) were adopted from the Eigenmann and Colagiuri national consensus position statement for diabetes education. This allowed for the identification of objectives (i.e., improved empowerment, reduced distress, autonomy supportive program delivery, consumer satisfaction) and related measurement instruments. Programs were categorized as comprehensive, topic-specific, or basic education, with comprehensive programs allocated to receive the highest level of evaluation. Eight quality standards were developed, with existing programs tested against those standards. Based on the results of testing, two comprehensive (OzDAFNE for people with type 1 diabetes, DESMOND for people with type 2 diabetes), and eight topic-specific (CarbSmart, ShopSmart, MonitorSmart, FootSmart, MedSmart, Living with Insulin, Insulin Pump Workshop, Ready Set Go - Let's Move) structured diabetes self-management education and support programs were nominated for national delivery.
Conclusions: The National Evaluation Framework has facilitated consistency of program quality, delivery, and evaluation of programs delivered by multiple service providers across diverse contexts. The Framework could be applied by other service providers who facilitate multiple diabetes education and support programs and could be adapted for use in other chronic disease populations where education and support are indicated. Answer: Yes, an advanced insulin education programme can improve outcomes and health service use for people with Type 2 diabetes. The Newcastle Empowerment programme, which is an advanced diabetes education programme, was shown to significantly delay the time to first hospital admission or visit for patients with Type 2 diabetes. The hazard ratio of 0.41 indicates a delay of almost 3 years, driven partly by a significant reduction in cardiovascular events and mortality (HR = 0.24, P = 0.01). These effects were not observed in people with Type 1 diabetes, suggesting that the programme is particularly beneficial for those with Type 2 diabetes on insulin (PUBMED:20002481). Additionally, the MEDIAS 2 ICT education programme, which involves intensive insulin treatment for people with Type 2 diabetes, was found to be as effective in lowering HbA1c as previously established education programmes. It also showed superiority in reducing diabetes-related distress, indicating that it provides an alternative for education of people with Type 2 diabetes treated by multiple injection therapy (PUBMED:21715124). Moreover, a self-efficacy-focused structured education programme was found to improve glycaemic control, weight control, diastolic pressure, self-efficacy, self-management behaviours, and diabetes knowledge for adults with Type 2 diabetes not on insulin therapy. This suggests that such a programme can be effective and incorporated into regular clinical care (PUBMED:31066115). In summary, advanced insulin education programmes have been demonstrated to lead to sustained improvements in patient outcomes and reductions in the use of hospital services for people with Type 2 diabetes on insulin. These programmes can be an effective method of helping patients manage their diabetes more effectively, leading to better health outcomes and potentially reducing the burden on healthcare systems.
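The headline result above (PUBMED:20002481) is a hazard ratio from Cox proportional hazards regression for time to first hospital admission or visit. As a minimal, hypothetical sketch of that kind of analysis — the simulated data, variable names, and parameter values below are assumptions, not the study's — such a model could be fitted with the lifelines library:

```python
# Hedged sketch: Cox proportional hazards model for time to first hospital
# admission, in the style of the Newcastle Empowerment analysis. All data
# are simulated; column names are illustrative assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
advanced_edu = rng.integers(0, 2, size=n)      # 1 = advanced education programme
baseline_hazard = 0.15                         # events per person-year (assumed)

# Simulate exponential event times with a true HR of ~0.5 for the programme
times = rng.exponential(1.0 / (baseline_hazard * np.where(advanced_edu, 0.5, 1.0)))
observed = times < 6.0                         # administrative censoring at 6 years

df = pd.DataFrame({
    "years_to_admission": np.minimum(times, 6.0),
    "admitted": observed.astype(int),
    "advanced_edu": advanced_edu,
    "age": rng.normal(60, 10, size=n).round(), # an adjustment covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_admission", event_col="admitted")
print(cph.hazard_ratios_)  # advanced_edu should recover an HR near 0.5
```

A hazard ratio below 1 for the programme covariate corresponds to the delayed time-to-admission the study reports; the published HR of 0.41 was additionally adjusted for MI risk factors.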
Instruction: MRI of right atrial pseudomass: is it really a diagnostic problem? Abstracts: abstract_id: PUBMED:29152296 Catheter-related right atrial thrombus in sickle cell disease. Catheter-related right atrial thrombus (CRAT) can occur in patients with sickle cell disease, particularly if additional risk factors for thrombosis are present. Cardiac MRI may differentiate thrombi from other types of atrial masses. Treatment should include anticoagulation and the timing of catheter removal should balance the potential risk of embolization. abstract_id: PUBMED:8188905 MRI of right atrial pseudomass: is it really a diagnostic problem? Objective: To determine whether the high proportion of patients reported to have prominence of normal right atrial structures by MRI may lead to inappropriate diagnosis of intracardiac tumors. Materials And Methods: One hundred forty-nine subjects were examined by spin-echo MRI: patients with cardiac (n = 40), pericardial (n = 30), or thoracic aortic disease (n = 40) and mediastinal tumor (n = 15), and normal volunteers (n = 24). Imaging was reviewed to determine the frequency of a prominent crista terminalis/Chiari network and the likelihood of misdiagnosis of cardiac tumor. Results: Prominent intraatrial structures were seen in 59% of subjects, a single prominent nodule in 36%, an intraatrial strand in 13%, and both in 10%. In no case were these findings originally or on review thought to represent a pathological mass, or was it felt likely that they could reasonably be misinterpreted as such. Conclusion: Normal structures within the right atrium, such as the crista terminalis and Chiari network, may be seen more commonly with MRI than with other imaging modalities. An appreciation of the frequency with which these findings are seen should prevent inappropriate misdiagnosis of pathological masses when none is present. abstract_id: PUBMED:26820740 Assessment of left and right atrial 3D hemodynamics in patients with atrial fibrillation: a 4D flow MRI study. Atrial fibrillation (AF) is associated with embolic stroke due to thrombus formation in the left atrium (LA). Based on the relationship of atrial stasis to thromboembolism and the marked disparity in pulmonary versus systemic thromboembolism in AF, we tested the hypothesis that flow velocity distributions in the left (LA) versus right atrium (RA) in patients with AF would demonstrate increased stasis. Whole heart 4D flow MRI was performed in 62 AF patients (n = 33 in sinus rhythm during imaging, n = 29 with persistent AF) and 8 controls for the assessment of in vivo atrial 3D blood flow. 3D segmentation of the LA and RA geometry and normalized velocity histograms assessed atrial velocity distribution and stasis (% of atrial velocities <0.2 m/s). Atrial hemodynamics were similar for RA and LA and significantly correlated (mean velocity: r = 0.64; stasis: r = 0.55, p < 0.001). RA and LA mean and median velocities were lower in AF patients by 15-33% and stasis was elevated by 11-19% compared to controls. There was high inter-individual variability in LA/RA mean velocity ratio (range 0.5-1.8) and LA/RA stasis ratio (range 0.7-1.7). Patients with a history of AF and in sinus rhythm showed the most pronounced differences in atrial flow (reduced mean velocities, higher stasis in the LA). While there is no systematic difference in LA versus RA flow velocity profiles, high variability was noted.
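The stasis metric in the 4D flow study above reduces to a simple fraction of voxel velocity magnitudes below a threshold. The sketch below illustrates that computation; the velocity field is random stand-in data, not real MRI output, while the 0.2 m/s threshold is the one the abstract itself defines:

```python
# Sketch of the stasis metric from the 4D flow MRI abstract above: the
# fraction of atrial voxel velocity magnitudes below 0.2 m/s.
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical velocity magnitudes (m/s) for voxels inside a segmented atrium
velocity_magnitude = rng.rayleigh(scale=0.18, size=10_000)

STASIS_THRESHOLD = 0.2  # m/s, as defined in the abstract

stasis_fraction = np.mean(velocity_magnitude < STASIS_THRESHOLD)
mean_velocity = velocity_magnitude.mean()
print(f"mean velocity: {mean_velocity:.3f} m/s, stasis: {100 * stasis_fraction:.1f}%")
```

Computed per atrium, this yields the mean-velocity and stasis values that the study compares between the LA and RA and between AF patients and controls.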
Further delineation of patient-specific factors and/or regional atrial effects on the LA and RA flow velocity profiles, as well as other factors such as differences in procoagulant factors, may explain the more prevalent systemic versus pulmonary thromboembolism in patients with AF. abstract_id: PUBMED:31355065 Multimodality Imaging of a Right Atrial Cardiac Mass. Workup of a right atrial mass usually requires multimodality imaging and sometimes a biopsy to affirm histological diagnosis. We present a case of a 74-year-old woman with primary cutaneous melanoma (wildtype BRAF) of the right toe who was found to have a large heterogeneous mass in the right atrium on routine surveillance CT scan. She did not have any cardiac symptoms. Vital signs and physical examination were unremarkable. Cardiac magnetic resonance (CMR) imaging demonstrated a bilobed mass with an intramural component and a mobile blood pool component, with interposed thrombus. Three-dimensional transesophageal echocardiogram (3D-TEE) revealed the mass and its site of attachment on the lateral wall of the right atrium. Given the large size of the tumor and its potential for obstruction of tricuspid inflow, the right atrial mass was surgically resected. Pathology confirmed metastatic melanoma. The patient tolerated cardiac surgery well and was discharged shortly thereafter. In the present case, a large cardiac metastasis was discovered in the absence of clinically detectable disease elsewhere. CMR allowed a comprehensive evaluation of the location, extension, and tissue characterization of the cardiac mass. Transthoracic echocardiogram (TTE) and 3D-TEE allowed assessment of the hemodynamic consequences of this mass and aided in operative planning. abstract_id: PUBMED:20661323 A case of right atrial aneurysm incidentally found in old age. Right atrial aneurysm is a rare abnormality of unknown origin. Approximately half of patients with right atrial aneurysm show no symptoms. Right atrial aneurysm is usually detected by chance at any time between fetal and adult life and can be associated with atrial arrhythmia and systemic embolism. The diagnosis of right atrial aneurysm can be established with echocardiography, computed tomography (CT) or magnetic resonance imaging (MRI). Because of thromboembolic risk, aneurysmectomy is usually recommended. We review the case report of a 69-year-old woman with right atrial appendiceal aneurysm, whose diagnosis was established by echocardiography and CT angiography. abstract_id: PUBMED:31238756 A case series of right atrial mass in neonates: a diagnostic dilemma. The diagnosis of a right atrial mass in a neonate should be treated as an emergency. There are three major differential diagnoses for a right atrial mass: thrombus, infectious vegetation, and myxoma. Embolization of the mass can result in life-threatening complications and hence timely diagnosis and treatment is vital. This case series describes the clinical course, management, and outcome of four neonates who presented with a right atrial mass. abstract_id: PUBMED:37554666 Diagnosing the culprit behind a subtle case of concomitant right atrial myxoma and atrial fibrillation: A case report. Myxomas are rare tumors arising from the uncontrolled proliferation of mesenchymal cells. Among cardiac conditions, cardiac myxomas account for less than 0.1% of cases, with the majority found in the left atrium and only 8% in the right atrium.
Atrial myxomas present with various clinical manifestations, including constitutional symptoms, symptoms caused by blood flow obstruction, and tumor embolism. This case report describes a 50-year-old male patient presenting with syncope, fatigue, and dyspnea, who had a history of well-controlled hypertension and atrial fibrillation. Physical examination, further diagnostic workup, and echocardiography led to a provisional diagnosis of right atrial myxoma. The patient underwent a median sternotomy, and the tumor was surgically excised, resulting in both diagnostic and curative outcomes. Histological analysis confirmed the diagnosis of myxoma. This case report contributes valuable insights into the presentation, diagnostic challenges, and treatment of atrial myxoma. abstract_id: PUBMED:30404609 An unusual presentation of prominent crista terminalis mimicking a right atrial mass: a case report. Background: The crista terminalis is a variant of normal anatomy within the right atrium that may be misdiagnosed as an abnormal atrial mass on the standard views of a transthoracic echocardiogram. Case Presentation: We describe the incidental discovery of a right atrial mass-like structure in a 54-year-old Asian man without physical discomfort during an echocardiographic examination. These findings caused concern, as differential diagnoses such as right atrial myxoma or thrombus were considered, and further examination was arranged. Subsequent positron emission tomography/magnetic resonance imaging (PET/MRI) distinguished a true right atrial mass from a strip extending into the atrium consistent with a prominent crista terminalis. Conclusion: A better understanding of the complex anatomy and PET/MRI appearance of a prominent crista terminalis will minimize misdiagnosis of this structure, avoiding unnecessary anxiety and more invasive examinations. abstract_id: PUBMED:33282648 Right Atrial Myxoma with Coexistent Coronary Artery Disease - A Rare Combination. Atrial myxomas are the most common primary heart tumors. Two-dimensional echocardiography is the diagnostic procedure of choice. The majority of myxomas are located in the left atrium. Myxoma in the right atrium is an uncommon location. The co-occurrence of right atrial myxoma with atherosclerotic coronary artery disease (CAD) is uncommon. In our case, right atrial myxoma was associated with CAD, which makes it a unique case because very few cases of right atrial myxoma coexistent with CAD are described in the literature. abstract_id: PUBMED:26332794 Simultaneous Right and Left Atrial Appendage Thrombus in a Patient with Atrial Fibrillation: A Lesson to Remember. We report a case of simultaneous right and left atrial appendage thrombi in a 76-year-old man with atrial fibrillation. Although transesophageal echocardiography is considered a routine diagnostic procedure, complete and comprehensive visualization of both atrial appendages is of paramount importance, particularly with the introduction of new therapeutic options like left atrial appendage exclusion devices. We expect to raise awareness among clinicians of the prevalence and implications of bilateral atrial appendage thrombi in common clinical practice. Answer: The question of whether MRI of right atrial pseudomass presents a diagnostic problem is addressed in the abstract from PUBMED:8188905.
The study aimed to determine if the prominence of normal right atrial structures seen in MRI could lead to the inappropriate diagnosis of intracardiac tumors. The study involved 149 subjects with various cardiac-related conditions and normal volunteers. The results showed that prominent intraatrial structures, such as the crista terminalis and Chiari network, were observed in 59% of subjects. However, these findings were not originally or upon review thought to represent a pathological mass, nor was it likely that they could be misinterpreted as such. The conclusion drawn from this study is that an understanding of the frequency with which these normal structures are seen in MRI should prevent the misdiagnosis of pathological masses when none are present. Therefore, while MRI can visualize normal structures within the right atrium that may appear prominent, with proper knowledge and interpretation, these should not pose a significant diagnostic problem.
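One quantitative detail in this entry lends itself to a worked example: the 4D flow MRI study (PUBMED:26820740) defines atrial stasis as the fraction of atrial voxel velocities below 0.2 m/s, computed from velocity histograms inside the segmented atrium. The sketch below illustrates that computation on a synthetic velocity field; the grid size, spherical mask, and random velocities are illustrative assumptions, not the study's actual data or processing pipeline.

```python
import numpy as np

def atrial_stasis(velocity, mask, threshold=0.2):
    """Fraction of atrial voxels whose speed is below `threshold` (m/s)."""
    speed = np.linalg.norm(velocity[mask], axis=-1)  # per-voxel speed in m/s
    return float(np.mean(speed < threshold))

# Synthetic stand-in for one time frame of a 4D flow acquisition:
# a 32x32x32 grid of (vx, vy, vz) components and a spherical "atrium" mask.
rng = np.random.default_rng(0)
velocity = rng.normal(0.0, 0.15, size=(32, 32, 32, 3))
x, y, z = np.indices((32, 32, 32))
mask = (x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 < 12 ** 2

print(f"stasis fraction: {atrial_stasis(velocity, mask):.1%}")
```

Running the same computation on the segmented LA and RA separately would give the LA/RA stasis ratio whose inter-individual spread (0.7-1.7) the study reports.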
Instruction: Dose escalation using twice-daily radiotherapy for nasopharyngeal carcinoma: does heavier dosing result in a happier ending? Abstracts: abstract_id: PUBMED:12182970 Dose escalation using twice-daily radiotherapy for nasopharyngeal carcinoma: does heavier dosing result in a happier ending? Purpose: To present our experience using a twice-daily radiotherapy (RT) technique, including hyperfractionated and accelerated-hyperfractionated RT, on nasopharyngeal carcinoma (NPC) patients. The dose to the primary tumor was increased in the hope that local control could be increased without the cost of increased late complications. We analyzed acute and late complications and local control and compared the results with the results of NPC patients treated during the same period using conventional once-daily RT. Methods And Materials: Between October 1991 and July 1998, 222 histologically confirmed, Stage M0, previously unirradiated NPC patients completed RT at our hospital. Most patients had American Joint Committee on Cancer (AJCC) 1992 Stage III and IV disease. Among them, 88 received altered-fractionation twice-daily RT; 76 patients received hyperfractionated RT and 12 accelerated-hyperfractionated RT. The remaining 134 patients received a conventional once-daily regimen. Hyperfractionated RT was delivered using 120 cGy b.i.d. separated by 6-h intervals throughout the course. For the accelerated-hyperfractionated patients, 160 cGy b.i.d. was given, also at 6-h intervals. The median dose in the twice-daily group was 7810 cGy (range 6840-8200). In the once-daily regimen, RT was delivered using 180-200 cGy q.d. The median dose to the primary tumor was 7000 cGy (range 6560-8100) given over about 8 weeks. The median follow-up time was 70.5 and 72 months for the twice-daily and once-daily groups, respectively. Results: The incidence of acute toxicities was higher in the twice-daily group, with more severe mucositis and moist desquamation than in the once-daily group. Both groups had a similar incidence of late complications, except for 3 cases of temporal lobe necrosis in the twice-daily group, all in patients treated with 160 cGy. No difference was noted in recurrence-free local control between the two groups when the individual T stage was compared using AJCC 1992 or 1997 criteria (p = 0.51 and 0.59, respectively). The 5-year local control rate for T1-3 (AJCC 1997) was 93.2% for the twice-daily group and 86.4% for the once-daily group (p = 0.45). In Stage T4 (AJCC 1997) patients, the local control rate dropped drastically to 43.5% and 36.9% for the twice-daily and once-daily groups, respectively. The overall neck control rate at 5 years was 87.3% and 80.3% for the twice-daily and once-daily patients, respectively (p = 0.16). The overall locoregional control rate was 82.7% for the twice-daily group and 66.6% for the once-daily group. The difference was again not statistically significant, but showed a tendency in favor of the twice-daily regimen (p = 0.055). Locoregional failure occurred mainly in Stage T4 patients with central nervous system invasion, for whom local control was particularly poor, with a failure rate of about 60%. Conclusion: The present data suggest that NPC patients can be safely treated using a 120-cGy twice-daily program with a 6-h interval up to 8000 cGy. The accelerated-hyperfractionated technique is not recommended. A large discrepancy in local control between patients with T1-3 and T4 disease was noted.
For T1-3 disease, an excellent local control rate >90% was achieved using the twice-daily regimen. In contrast, failure in the T4 patients was as high as 55% in the twice-daily group and reached 65% in the once-daily group. More rigorous treatment is needed using either additional dose escalation or other strategies for T4 NPC patients. With a dose escalation of 1000 cGy using 120-cGy twice-daily RT, a trend toward better locoregional control and disease-specific survival was noted in the twice-daily group. Whether this difference was truly the result of an increased dose needs additional confirmation in studies with larger patient numbers. abstract_id: PUBMED:31911016 A Review of Modern Radiation Therapy Dose Escalation in Locally Advanced Head and Neck Cancer. The management of head and neck cancer is complex and often involves multimodality treatment. Certain groups of patients, such as those with inoperable or advanced disease, are at higher risk of treatment failure and may therefore benefit from radiation therapy dose escalation. This can be difficult to achieve without increasing toxicity. However, the combination of modern treatment techniques and increased research into the use of functional imaging modalities that assist with target delineation allows researchers to push this boundary further. This review aims to summarise modern dose escalation trials to identify the impact on disease outcomes and explore the growing role of functional imaging modalities. Studies experimenting with dose escalation above standard fractionated regimens as outlined in National Comprehensive Cancer Network guidelines using photon therapy were chosen for review. Seventeen papers were considered suitable for inclusion in the review. Eight studies investigated nasopharyngeal cancer, with the remainder treating a range of subsites. Six studies utilised functional imaging modalities for target delineation. Doses as high as 85.9 Gy in 2.6 Gy fractions (EQD2 90.2 Gy10) were reportedly delivered with the aid of functional imaging modalities. Dose escalation in nasopharyngeal cancer resulted in 3-year locoregional control rates of 86.6-100% and overall survival of 82-95.2%. For other mucosal primary tumour sites, 3-year locoregional control reached 68.2-85.9% and overall survival 48.4-54%. There were no clear trends in acute or late toxicity across studies, regardless of dose or addition of chemotherapy. However, small cohort sizes and short follow-up times may have resulted in under-reporting. This review highlights the future possibilities of radiation therapy dose escalation in head and neck cancer and the potential for improved target delineation with careful patient selection and the assistance of functional imaging modalities. abstract_id: PUBMED:32440209 DW-MRI-Guided Dose Escalation Improves Local Control of Locally Advanced Nasopharyngeal Carcinoma Treated with Chemoradiotherapy. Background: Nasopharyngeal carcinoma (NPC) is one of the most highly radiosensitive malignancies; however, some locally advanced NPC patients experience local recurrence even after aggressive treatment regimens. Defining the tumor volume precisely is important in order to escalate the total dose to the primary tumor. In this study, we aimed to investigate the feasibility and efficacy of dose escalation guided by DW-MRI in patients with locally advanced NPC.
Patients And Methods: A total of 230 patients with locally advanced NPC treated with intensity-modulated radiotherapy (IMRT) at Sichuan Cancer Hospital between January 2010 and January 2015 were enrolled in this retrospective study. All patients were treated with full-course simultaneous integrated boost IMRT. DW-MRI-guided dose escalation with 2.2-2.5 Gy/fraction qd for 1-3 days or 1.2-1.5 Gy/fraction bid for 1-3 days was prescribed to 123 patients. Survival and complications were evaluated, and multivariate analysis was performed. Results: The median follow-up of patients in the DW-MRI-guided dose-escalation group and the conventional group was 48 months (range 8-88 months) and 52 months (range 6-90 months), respectively. The 5-year overall survival rate, distant metastasis-free survival rate, progression-free survival, and local recurrence-free survival (LRFS) of patients in the dose-escalation group and the conventional group were 88% vs 82.5% (p = 0.244), 86.1% vs 83.3% (p = 0.741), 82.2% vs 76.6% (p = 0.286), and 89.1% vs 80.1% (p = 0.029), respectively. Multivariate analysis showed that dose escalation was an independent prognostic factor for LRFS (HR 0.386, 95% CI 0.163-0.909, p = 0.03). Conclusion: DW-MRI-guided dose escalation is a feasible strategy to improve local control of patients with locally advanced NPC. The treatment-related complications are tolerable. abstract_id: PUBMED:37157884 18F-FMISO PET-guided dose escalation with multifield optimization intensity-modulated proton therapy in nasopharyngeal carcinoma. Purpose: The purpose of this study was to evaluate the radiotherapy planning feasibility of dose escalation with intensity-modulated proton therapy (IMPT) to hypoxic tumor regions identified on 18F-Fluoromisonidazole (FMISO) positron emission tomography and computed tomography (PET-CT) in NPC. Materials And Methods: Nine patients with stages T3-4N0-3M0 NPC underwent 18F-FMISO PET-CT before and during week 3 of radiotherapy. The hypoxic volume (GTVhypo) was automatically generated by applying a subthresholding algorithm within the gross tumor volume (GTV) with a tumor-to-muscle standardized uptake value (SUV) ratio of 1.3 on the 18F-FMISO PET-CT scan. Two proton plans were generated for each patient: a standard plan to 70 GyE and a dose-escalation plan with an upfront boost followed by the standard 70 GyE plan. The stereotactic boost was planned with single-field uniform dose optimization using two fields to deliver 10 GyE in two fractions to GTVhypo. The standard plan was generated with IMPT with robust optimization to deliver 70 GyE and 60 GyE in 33 fractions using a simultaneous integrated boost technique. A plan sum was generated for assessment. Results: Eight of nine patients showed tumor hypoxia on the baseline 18F-FMISO PET-CT scan. The mean hypoxic tumor volume was 3.9 cm3 (range 0.9-11.9 cm3). The average SUVmax of the hypoxic volume was 2.2 (range 1.44-2.98). All the dose-volume parameters met the planning objectives for target coverage. Dose escalation was not feasible in three of eight patients, as the D0.03cc of the temporal lobe was greater than 75 GyE. Conclusions: A boost to the hypoxic volume before a standard course of radiotherapy with IMPT is dosimetrically feasible in selected patients. Clinical trials are warranted to determine the clinical outcomes of this approach.
Background: With the excellent local control in T1 to T3 nasopharyngeal carcinoma (NPC) treated with intensity-modulated radiotherapy (IMRT), the importance of toxicities is increasingly being recognised. This retrospective propensity score analysis sought to assess whether moderate dose reduction compromised long-term outcome compared with the standard dose in T1-3 NPCs. Materials and Methods: A total of 266 patients (67 female, 199 male) with a median age of 50 years, treated between June 2011 and June 2015, were analysed. All were treated with IMRT, with or without systemic chemotherapy. The prescription radiation dose to the gross tumor at our institution is 70 Gy/2.12 Gy/33 fractions. Results: With a median follow-up time of 50 months, the 5-year loco-regional failure-free survival (LRFS) and overall survival (OS) were 93.5% and 81.8%, respectively. Thirty-two patients received a radiation dose below the prescription dose, with a median dose of 63.6 Gy (53-67 Gy). Another 234 patients received exactly the prescription dose of 70 Gy. After propensity scores were computed (32 patients treated with the de-escalated dose matched to 64 patients with the standard dose), there was no significant difference in 5-year LRFS or 5-year OS between the two groups (92.5% and 91.7% with the standard dose; 82.1% and 85.7% with the de-escalated dose; p=0.863 for LRFS and 0.869 for OS). No independent prognostic factor was associated with loco-regional failure in univariate analysis. Conclusions: In T1-3 nasopharyngeal carcinoma, which presents with superior locoregional control, a moderately reduced dose (about 10%) delivered with IMRT resulted in a prognosis comparable to that of the prescription dose of 70 Gy. abstract_id: PUBMED:12694833 Intensity-modulated radiotherapy in nasopharyngeal carcinoma: dosimetric advantage over conventional plans and feasibility of dose escalation. Purpose: To compare intensity-modulated radiotherapy (IMRT) with two-dimensional RT (2D-RT) and three-dimensional conformal radiotherapy (3D-CRT) treatment plans in different stages of nasopharyngeal carcinoma and to explore the feasibility of dose escalation in locally advanced disease. Materials And Methods: Three patients with different stages (T1N0M0, T2bN2M0 with retrostyloid extension, and T4N2M0) were selected, and 2D-RT, 3D-CRT, and IMRT treatment plans (66 Gy) were made for each of them and compared with respect to target coverage, normal tissue sparing, and tumor control probability/normal tissue complication probability values. In the Stage T2b and T4 patients, the IMRT 66-Gy plan was combined with a 3D-CRT 14-Gy boost plan using a 3-mm micromultileaf collimator, and the dose-volume histograms of the summed plans were compared with their corresponding 66-Gy 2D-RT plans. Results: In the dosimetric comparison of 2D-RT, 3D-CRT, and IMRT treatment plans, the T1N0M0 patient had better sparing of the parotid glands and temporomandibular joints with IMRT (dose to 50% parotid volume, 57 Gy, 50 Gy, and 31 Gy, respectively). In the T2bN2M0 patient, the dose to 95% volume of the planning target volume improved from 57.5 Gy in 2D-RT to 64.8 Gy in 3D-CRT and 68 Gy in IMRT. In the T4N2M0 patient, improvement in both target coverage and brainstem/temporal lobe sparing was seen with IMRT planning. In the dose-escalation study for locally advanced disease, IMRT 66 Gy plus a 14-Gy 3D-CRT boost achieved an improvement in the therapeutic ratio by delivering a higher dose to the target while keeping the normal organs below the maximal tolerance dose.
Conclusions: IMRT is useful in treating all stages of nonmetastatic nasopharyngeal carcinoma because of its dosimetric advantages. In early-stage disease, it provides better parotid gland sparing. In locally advanced disease, IMRT offers better tumor coverage and normal organ sparing and allows room for dose escalation. abstract_id: PUBMED:33154554 Population pharmacokinetics of the anti-PD-1 antibody camrelizumab in patients with multiple tumor types and model-informed dosing strategy. Camrelizumab, a programmed cell death 1 (PD-1) inhibitor, has been approved for the treatment of patients with relapsed or refractory classical Hodgkin lymphoma, nasopharyngeal cancer and non-small cell lung cancer. The aim of this study was to perform a population pharmacokinetic (PK) analysis of camrelizumab to quantify the impact of patient characteristics and to investigate the appropriateness of a flat dose in the dosing regimen. A total of 3092 camrelizumab concentrations from 133 patients in four clinical trials with advanced melanoma, relapsed or refractory classical Hodgkin lymphoma and other solid tumor types were analyzed using nonlinear mixed effects modeling. The PKs of camrelizumab were well described using a two-compartment model with parallel linear and nonlinear clearance. Covariate model building was then conducted using stepwise forward addition and backward elimination. The results showed that baseline albumin had significant effects on linear clearance, while actual body weight affected intercompartmental clearance. However, their impacts were limited, and no dose adjustments were required. The final model was further evaluated by goodness-of-fit plots, bootstrap procedures, and visual predictive checks and showed satisfactory model performance. Moreover, dosing regimens of 200 mg every 2 weeks and 3 mg/kg every 2 weeks provided similar exposure distributions by model-based Monte Carlo simulation. The population analyses demonstrated that patient characteristics have no clinically meaningful impact on the PKs of camrelizumab and provide evidence of no advantage for either the flat-dose or the weight-based regimen in most patients with advanced solid tumors. abstract_id: PUBMED:16213105 Preliminary results of radiation dose escalation for locally advanced nasopharyngeal carcinoma. Purpose: To study the safety and efficacy of dose escalation to the tumor in locally advanced nasopharyngeal carcinoma (NPC). Methods And Materials: From September 2000 to June 2004, 50 patients with T3-T4 NPC were treated with intensity-modulated radiotherapy (IMRT). Fourteen patients had Stage III and 36 patients had Stage IVA-IVB disease. The prescribed dose was 76 Gy to the gross tumor volume (GTV), 70 Gy to the planning target volume (PTV), and 72 Gy to enlarged neck nodes (GTVn). All doses were given in 35 fractions over 7 weeks. Thirty-four patients also had concurrent cisplatin and induction or adjuvant PF (cisplatin and 5-fluorouracil). Results: The average mean doses achieved in the GTV, GTVn, and PTV were 79.5 Gy, 75.3 Gy, and 74.6 Gy, respectively. The median follow-up was 25 months, with 4 recurrences: 2 locoregional and 2 distant failures. All patients with recurrence had received IMRT alone without chemotherapy. The 2-year locoregional control, distant metastasis-free survival, and disease-free survival rates were 95.7%, 94.2%, and 93.1%, respectively. One treatment-related death caused by adjuvant chemotherapy occurred. The 2-year overall survival was 92.1%.
Conclusions: Dose escalation to 76 Gy to the tumor is feasible in T3-T4 NPC and can be combined with chemotherapy. Initial results showed good local control and survival. abstract_id: PUBMED:28963789 18F-Fluoromisonidazole positron emission tomography/CT-guided volumetric-modulated arc therapy-based dose escalation for hypoxic subvolume in nasopharyngeal carcinomas: A feasibility study. Background: The purpose of this study is to investigate the feasibility of a simultaneously integrated boost to the hypoxic subvolume of nasopharyngeal carcinomas (NPCs) under the guidance of 18F-fluoromisonidazole (FMISO) positron emission tomography (PET)/CT using volumetric-modulated arc therapy (VMAT) and intensity-modulated radiotherapy (IMRT) techniques. Methods: Eight patients with NPC were treated with simultaneous integrated boost-IMRT (treatment plan named IMRT70) with dose prescriptions of 70 Gy, 66 Gy, 60 Gy, and 54 Gy to the gross tumor volume (GTV), positive neck nodes, the planning target volume (PTV), and the clinically negative neck, respectively. Based on the same datasets, experimental plans with the same dose prescription plus a dose boost of 14 Gy (an escalation of 20% of the prescription dose) to the hypoxic volume target contoured on the pretreatment 18F-FMISO PET/CT imaging were generated using IMRT and VMAT techniques, respectively (represented by IMRT84 and VMAT84). Two or more arcs (approximately 2-2.5 arcs, total rotation angle <1000 degrees) were used in VMAT plans and 9 equally separated fields in IMRT plans. Dosimetric parameters, total monitor units, and delivery time were calculated for a comparative study of plan quality and delivery efficiency between IMRT84 and VMAT84. Results: In the experimental plans, hypoxic target volumes successfully received the prescribed dose of 84 Gy in compliance with the other dose constraints with either the IMRT technique or the VMAT technique. In terms of target coverage, dose homogeneity, and organs-at-risk (OAR) sparing, there was no statistically significant difference between the actual treatment plan IMRT70 and the experimental plans. The total monitor units per fraction for VMAT84 (525.7 ± 39.8) were significantly lower than for IMRT70 (1171.5 ± 167; P = .001) and IMRT84 (1388.3 ± 151.0; P = .001), reductions of 55.1% and 62.1%. The average machine delivery time was 3.5 minutes for VMAT plans in comparison with approximately 8 minutes for IMRT plans, a reduction of 56.2%. For the experimental plans, the 3D gamma index average was over 98.0%, with no statistically significant difference when a 3%/3 mm gamma passing criterion was used. Conclusion: With the guidance of 18F-FMISO PET/CT imaging, dose escalation to hypoxic zones within NPC could be achieved and delivered more efficiently with the VMAT technique than with the IMRT technique. abstract_id: PUBMED:17762441 Dose escalation for patients with locally advanced nasopharyngeal carcinoma treated with radiotherapy alone. Objectives: The local control of patients with locally advanced nasopharyngeal carcinoma is still unsatisfactory. This prospective study was designed to evaluate the treatment outcomes and treatment-related complications of patients with locally advanced nasopharyngeal carcinoma treated with escalated radiation doses.
Methods: A total of 118 consecutive patients with T4 classification (1992 American Joint Committee on Cancer staging system), histology-proven nonmetastatic nasopharyngeal carcinoma were treated with radiotherapy alone between 1992 and 1997 in a medical center in southern Taiwan. Thirty-two of them were enrolled into a prospective study of dose escalation and were irradiated to a total of 81 Gy. The other 86 patients received a total of 70.2 Gy. Potentially significant patient-related and treatment-related parameters were analyzed for their prognostic significance. Radiation-related complications were recorded and analyzed. Results: The 5-year local progression-free rates were 61% and 61% for patients receiving 70.2 Gy and 81 Gy, respectively (P > 0.05). The incidences of xerostomia, hearing impairment, and temporal radionecrosis were significantly higher for those receiving 81 Gy. The 5-year complication-free rates of patients receiving 70.2 Gy and 81 Gy were 14% versus 2% for xerostomia (P = 0.0070), 50% versus 30% for hearing impairment (P = 0.0198), and 91% versus 82% for temporal radionecrosis (P = 0.0400). Conclusions: For patients with locally advanced nasopharyngeal carcinoma treated with radiotherapy alone, dose escalation to 81 Gy failed to show a benefit in local control. Higher radiation doses contribute to a higher incidence of radiation-related complications. Answer: The concept of dose escalation using twice-daily radiotherapy for nasopharyngeal carcinoma (NPC) has been explored to improve local control without significantly increasing late complications. However, the evidence from various studies presents a mixed picture. One study reported that NPC patients could be safely treated using a 120-cGy twice-daily program with a 6-hour interval up to 8000 cGy, while recommending against the accelerated-hyperfractionated technique. This study found no significant difference in recurrence-free local control between twice-daily and once-daily groups when comparing individual T stages. However, there was a large discrepancy in local control between patients with T1-3 and T4 disease, with T4 patients experiencing high failure rates in both groups. The study concluded that more rigorous treatment is needed for T4 NPC patients and that a trend toward better locoregional control and disease-specific survival was noted in the twice-daily group, although further confirmation is needed with larger patient numbers (PUBMED:12182970). Another study reviewed modern dose escalation trials and found that doses as high as 85.9 Gy in 2.6 Gy fractions were delivered with the aid of functional imaging modalities, resulting in 3-year locoregional control rates of 86.6-100% and overall survival of 82-95.2% for nasopharyngeal cancer. However, the review noted no clear trends in acute or late toxicity across studies, suggesting that small cohort sizes and short follow-up times may have resulted in under-reporting (PUBMED:31911016). A study that utilized DW-MRI-guided dose escalation in patients with locally advanced NPC treated with chemoradiotherapy showed that this approach is a feasible strategy to improve local control, with tolerable treatment-related complications (PUBMED:32440209). In contrast, a study that escalated the dose to 81 Gy for patients with locally advanced NPC treated with radiotherapy alone did not show a benefit in local control and contributed to a higher incidence of radiation-related complications (PUBMED:17762441).
Overall, while there is some evidence that dose escalation using twice-daily radiotherapy can improve outcomes in certain NPC patient groups, the results are not uniformly positive, and the potential for increased toxicity must be carefully considered. Further research with larger patient numbers and longer follow-up is needed to confirm the benefits of heavier dosing in NPC treatment.
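The dose-conversion arithmetic behind the review's headline figure (PUBMED:31911016) can be made explicit. Using the standard linear-quadratic conversion to the equieffective dose in 2 Gy fractions (EQD2), with total dose $D$, dose per fraction $d$, and the conventional tumor $\alpha/\beta$ of 10 Gy, the quoted schedule works out as:

$$
\mathrm{EQD2} \;=\; D \cdot \frac{d + \alpha/\beta}{2\,\mathrm{Gy} + \alpha/\beta} \;=\; 85.9\,\mathrm{Gy} \times \frac{2.6 + 10}{2 + 10} \;\approx\; 90.2\,\mathrm{Gy}_{10}
$$

The subscript in Gy10 records the assumed $\alpha/\beta$; late-responding normal tissues are conventionally assessed with an $\alpha/\beta$ near 3 Gy, which is why a moderately hypofractionated schedule can raise the tumor EQD2 while organ-at-risk constraints are evaluated separately.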
Instruction: Is direct collection of pleural fluid into a heparinized syringe important for determination of pleural pH? Abstracts: abstract_id: PUBMED:9315803 Is direct collection of pleural fluid into a heparinized syringe important for determination of pleural pH? A brief report. Introduction: It has long been believed that pleural fluid must be directly aspirated into a heparinized syringe to obtain an accurate value. Many operators aspirate 30 to 60 mL of pleural fluid into a syringe without heparin, and then place 1 mL into a heparinized syringe from which the pH is determined. We postulated that this technique does not cause a clinically significant difference in pleural pH values. Methods: Patients undergoing thoracentesis in the outpatient clinic, general ward, and medical ICU were eligible. After the initial entry of the needle into the pleural space, a heparinized syringe was used to obtain pleural fluid for pH determination. A 60-mL syringe was then used to aspirate additional pleural fluid for biochemical analysis and culture. At the end of the procedure, a second aliquot of pleural fluid was placed into a heparinized syringe for pH determination. A difference of 0.1 in pH was taken as clinically important. Results: Twenty-one pleural fluid samples were obtained from 20 patients. Pleural fluid pH determinations were within 0.1 in all but one patient. The mean pH for the directly collected group was 7.39 (25%: 7.35; 75%: 7.45). The mean for the indirectly collected group was 7.41 (25%: 7.35; 75%: 7.45). The difference between the two means (0.02; 95% confidence interval, 0.00131 to 0.0368) was statistically significant but clinically unimportant (p=0.037). Conclusions: Pleural fluid can be collected in a large syringe and then placed into a heparinized syringe to assess pH. This is useful information because the use of just one syringe saves time and reduces the risk of iatrogenic complications. abstract_id: PUBMED:18775252 Influence of the method used to obtain pleural fluid on the determination of the acid-base balance. Objective: To analyze the methods used in our hospital for obtaining pleural fluid to determine the acid-base balance and to evaluate the clinical repercussions of each method. Methods: Initially we studied the methods used by physicians in our hospital to collect pleural fluid for determination of the acid-base balance. In a second phase, we performed a prospective, descriptive, comparative study with the participation of 71 patients with pleural effusions in order to compare the acid-base balance according to the technique used to obtain the fluid. Results: Pleural fluid was obtained using 3 methods: a) direct extraction using a heparinized syringe (group 1); b) extraction using a 20 mL syringe with subsequent aspiration from this syringe into a heparinized syringe (group 2); and c) filling a heparinized syringe from the 20 mL syringe (group 3). The only significant differences between group 1 and groups 2 and 3 were an increase in the pleural PO2 and oxygen saturation. The difference in the mean pH between groups 1 and 2 was 0.009 (95% confidence interval: -0.39 to 0.02; P=.5) and between groups 1 and 3 was 0.007 (95% confidence interval: -0.38 to 0.023; P=.6). The correlations between findings for PO2, pH, and PCO2 obtained in the different groups were statistically significant, with correlation coefficients greater than 0.95 for the latter two variables.
Conclusions: Physicians who perform thoracentesis in our hospital use different methods for obtaining fluid to determine the pleural acid-base balance. The 3 methods analyzed show no significant differences with regard to pH or PCO2. Pleural fluid may be obtained by a single puncture with a large-volume syringe, subsequently transferring the fluid to a heparinized syringe, without this significantly affecting the pH or PCO2, thus reducing the number of manipulations and the risk of complications. abstract_id: PUBMED:17273619 Collection and preservation of the pleural fluid and pleural biopsy. Samples of pleural fluid obtained by thoracentesis for the diagnosis of transudates and exudates should follow a routine of collection and preservation to allow appropriate laboratory analysis. Likewise, pleural biopsy fragments obtained for the differential diagnosis of exudates should be collected systematically in order to optimize diagnosis and facilitate the institution of appropriate therapeutic actions. abstract_id: PUBMED:19417673 Factors influencing the measurement of pleural fluid pH. Purpose Of Review: Pleural fluid pH measurement is important in the management of patients with exudative pleural effusions, especially in guiding treatment of parapneumonic effusions. Common variations in the method used to sample pleural fluid affect the accuracy of the value obtained. This article reviews the effects of these variations. Recent Findings: Pleural fluid pH is decreased by exposure to acidic fluids, such as retention of local anesthetic or heparin in the syringe, or by sampling following infiltration of local anesthetic. Exposure of the sample to air leads to an increase in pH. If immediate analysis is not possible, a delay of up to 4 h does not cause a significant change in pH, even when the sample is kept at room temperature. It is essential that a blood gas analyzer is used to obtain an accurate pH measurement. These factors have less effect on the glucose concentration, which may be used to guide management if an accurate pH value is not available. Summary: Several common variables in collection method can lead to a clinically significant alteration in the pH value obtained. An evidence-based method for sampling and handling pleural fluid in order to obtain an accurate pH measurement is described. abstract_id: PUBMED:17675830 Use of heparinized versus non-heparinized syringes for measurements of the pleural fluid pH. Background: Pleural fluid (PF) pH measurement is important for establishing a diagnosis and for guiding clinical management. The current standard practice is to collect PF samples for pH measurement in heparinized syringes at room temperature and to process these samples immediately. Objective: The purpose of this study is to investigate the effect of collecting PF in heparinized versus non-heparinized syringes at room temperature on PF pH measurements when processed at various time intervals. Methods: From 50 consecutive thoracenteses, 1 ml of PF was collected anaerobically in each of six 3-ml syringes. Only three syringes were coated with heparin. The samples were processed for PF pH measurements at time 0 (T0), 1 h (T1), and 2 h (T2) after collection. All specimens were kept at room temperature until the measurements were carried out in duplicate by a calibrated blood gas analyzer. Results: PF pH values were significantly lower with heparinized versus non-heparinized syringes at all time intervals (T0: pH heparinized = 7.378 +/- 0.107 vs.
pH non-heparinized = 7.390 +/- 0.108; T1: pH heparinized = 7.378 +/- 0.115 vs. pH non-heparinized = 7.389 +/- 0.111; T2: pH heparinized = 7.367 +/- 0.105 vs. pH non-heparinized = 7.389 +/- 0.121). In the heparinized syringes, there was a significant decrease in PF pH values at T2 versus T0 and T1. There were no significant changes in PF pH values over time in the non-heparinized syringes. Conclusions: For serial PF pH measurements, the same type of syringes (either heparinized or non-heparinized) should be consistently used. With heparinized syringes, processing of PF pH measurements should be performed within 1 h after collection. abstract_id: PUBMED:25433793 Contribution of pleural fluid analysis to the diagnosis of pleural effusion. Analysis of pleural fluid can have, on its own, a high diagnostic value. When thoracocentesis is combined with a diagnostic hypothesis based on medical history, physical examination, blood analysis, and imaging tests, diagnostic effectiveness increases significantly, allowing a definite or highly probable diagnosis to be established in a substantial number of patients. Differentiating transudates from exudates by the classical Light's criteria helps identify the pathogenic mechanism producing the pleural effusion, and it is also useful for differential diagnosis purposes. An increased N-terminal pro-brain natriuretic peptide, both in the fluid and in blood, in the appropriate clinical context, is highly suggestive of heart failure. The presence of an increased inflammatory marker, such as C-reactive protein, together with the presence of over 50% neutrophils is highly suggestive of parapneumonic pleural effusion. If, in these cases, the pH is <7.20, then the likelihood of complicated pleural effusion is high. The usefulness of other markers for differentiating complicated from uncomplicated effusions remains to be demonstrated. An adenosine deaminase >45 U/L and >50% lymphocytes are suggestive of tuberculosis. If a malignant effusion is suspected but the cytological result is negative, increased concentrations of some markers in the pleural fluid can yield high specificity values. Increased levels of mesothelin and fibulin-3 are suggestive of mesothelioma. Immunohistochemical studies can be useful to differentiate reactive mesothelial cells, mesothelioma and metastatic adenocarcinoma. Inadequate use of the information provided by the analysis of pleural fluid would result in a high rate of undiagnosed effusions, which is unacceptable in current clinical practice. abstract_id: PUBMED:30815526 Comparison of analytical performance of i-Smart 300 and pHOx Ultra for the accurate determination of pleural fluid pH. Background: Pleural fluid pH is an essential test for diagnosing complicated parapneumonic effusion. We evaluated the performance of two blood gas analyzers in measuring pleural fluid pH. Methods: The i-STAT G3+ (Abbott) was used as a reference analyzer to evaluate the pH values obtained from other methods: the i-Smart 300 (i-SENS), the pHOx Ultra (Nova Biomedical), using a clot catcher to filter off microclots, and pH indicator paper. Within-device precision was assessed using quality control materials. We compared pleural fluid pH (n = 86) by the above methods and analyzed the concordance rate at the level of the medical decision point, pH 7.2. Results: The within-device coefficients of variation of pH were below 0.1% for all blood gas analyzers tested.
The slopes of the regression equations for the i-Smart 300, pHOx Ultra, and pH indicator paper against the reference analyzer were 0.850 (95% confidence interval [CI], 0.800-0.896), 0.714 (95% CI, 0.671-0.766), and 1.105 (95% CI, 0.781-1.581), respectively. The kappa values for the i-Smart 300, pHOx Ultra, and pH indicator paper against the reference analyzer were 0.883 (95% CI, 0.656-1.110), 0.739 (95% CI, 0.393-1.084), and 0.464 (95% CI, 0.102-0.826), respectively. Conclusions: The i-Smart 300 and pHOx Ultra demonstrated good analytical performance and diagnostic accuracy when determining pleural fluid pH compared with the i-STAT G3+, whereas the pH indicator paper showed unsatisfactory results. abstract_id: PUBMED:9367470 Is pH paper an acceptable, low-cost alternative to the blood gas analyzer for determining pleural fluid pH? Background: Our laboratory uses pH paper rather than a blood gas analyzer to measure pleural fluid pH to decrease cost and avoid analyzer malfunction due to viscous fluids. Methods: To compare these two methods of determining pleural fluid pH, 42 patients undergoing diagnostic or therapeutic thoracentesis had two 1-mL aliquots of pleural fluid anaerobically collected in a heparinized syringe and placed on ice. pH measurements were made using litmus paper (pHydron Vivid 6-8 brand litmus paper; MicroEssential Labs; Brooklyn, NY) and the model 995-Hb blood gas analyzer (AVL Instruments; Roswell, GA) within 1 h of collection. Agreement analysis was performed in three ways: on the entire group; in subcategories of complicated or uncomplicated parapneumonic effusions (<7.1, 7.1 to 7.3, >7.3); and in subcategories of poor prognosis or better prognosis malignant effusions (<7.3, >7.3). Results: pH measured with pH paper was significantly more variable (SD=0.55, coefficient of variation [CV]=7.5%) than was pH measured with the blood gas analyzer (SD=0.11, CV=1.5%). There was no significant correlation between values obtained with the two techniques (r=-0.26, SD of the differences=0.59). Using the pH subcategories, there was 72% discordance in classification between litmus paper and arterial blood gas (ABG) determinations for patients with parapneumonic effusions. In patients with malignant effusions, there was 30% discordance. The pH values obtained by the ABG analyzer predicted tube thoracostomy 72% of the time, whereas the pH values obtained using pH paper were consistent only 36% of the time. Conclusion: Determination of pleural fluid pH using pH paper is unreliable and should not be considered an acceptable alternative to the blood gas analyzer. There is no need to determine pH on purulent samples. Hospital laboratories will be more likely to allow the use of the ABG analyzer on fluids other than blood if clinicians keep this in mind. abstract_id: PUBMED:20116740 Management of pleural disease. In view of the presentations in the First National Forum of Trainee Pneumologists, the present article focuses on infectious pleural effusions and on the study of possible markers of malignant disease in asbestos-exposed individuals. The yield of the distinct techniques for the diagnosis of tuberculous pleural effusion is assessed, with emphasis on analysis of sputum and pleural samples (fluid and tissue) for Mycobacterium tuberculosis.
The utility of adenosine deaminase (ADA) (in the absence of empyema, ADA >70 U/l is diagnostic of tuberculous pleurisy, while values of less than 40 U/l exclude this diagnosis) and interferon gamma in pleural fluid (cut-off: 3.7 IU/ml) is also discussed. The management of complicated parapneumonic pleural effusions is stratified into four categories, depending on the anatomical and morphological (size and possible presence of loculations), bacteriological (positivity or negativity of pleural fluid culture) and biochemical (pH/glucose) characteristics of the effusion. Finally, recently developed markers for the evaluation and follow-up of asbestos-exposed individuals are described, with special emphasis on serum determination of mesothelin levels, which seem highly promising as a marker of the development of mesothelioma in these cases. A multicenter study currently being performed in Spain found that soluble mesothelin-related protein (SMRP) levels higher than 0.55 nmol/L showed a sensitivity and specificity of 72% for the diagnosis of epithelial malignant mesothelioma. abstract_id: PUBMED:3307471 Pleural fluid pH: diagnostic, therapeutic, and prognostic value. Measurement of pleural fluid pH has diagnostic, therapeutic, and prognostic implications in exudative pleural effusions (Table II). A parapneumonic effusion with a pleural fluid pH below 7.2 indicates an empyema is forming, which necessitates chest tube drainage in all patients, whereas a pleural fluid pH over 7.3 does not require drainage. If the pH of a parapneumonic effusion is 7.2 to 7.3, serial pleural fluid pH measurements with clinical observation will help to determine the need for chest tube drainage. A pleural fluid glucose level of below 60 mg/dl and a lactic dehydrogenase level over 1,000 IU/dl in conjunction with a pleural fluid pH of 7.2 to 7.3 indicate an impending empyema. These findings are consistent with our clinical experience in patients with parapneumonic effusion. Tuberculous pleural effusions had a pleural fluid pH below 7.4 in all reported patients. This pH may be of value in distinguishing tuberculous pleural effusions from recent malignant effusions, which tend to have a higher pleural fluid pH, particularly if used in conjunction with other pleural fluid values, cell counts, and other clinical parameters. In patients with malignant pleural effusions, a pH of less than 7.3 is usually seen in those effusions present for several months and is associated with a lower glucose level and a higher white cell count and lactic dehydrogenase level. Results of cytologic study of the pleural fluid and pleural biopsy are often positive, there is poor response to sclerosing agents, and the prognosis is poor. A rheumatoid pleural effusion most often has a pleural fluid pH below 7.3. A pleural fluid pH below 6 is seen almost exclusively in esophageal rupture but rarely with empyemas, whereas a pleural fluid pH below 7 occurs in esophageal rupture, empyema, and rheumatoid pleural effusions. In pleural effusions secondary to congestive heart failure, the pH is almost always greater than 7.4 unless systemic acidemia coexists, in which case the pleural fluid pH is within 0.04 units of the simultaneous arterial pH. The major value of pleural fluid pH is to determine the need for chest tube drainage in parapneumonic effusions and to determine the response to sclerosing agents in patients with malignant pleural effusions.
As with all diagnostic tests, the results should be interpreted in the context of other diagnostic tests of the pleural fluid and clinical aspects before diagnostic or therapeutic decisions are made. Answer: Direct collection of pleural fluid into a heparinized syringe has been traditionally considered important for accurate determination of pleural pH. However, studies have shown that this may not be as critical as once thought. One study found that pleural fluid can be collected in a large syringe and then placed into a heparinized syringe without causing a clinically significant difference in pleural pH values (PUBMED:9315803). Another study supported this finding, showing that different methods for obtaining pleural fluid, including direct extraction with a heparinized syringe and extraction using a non-heparinized syringe followed by transfer to a heparinized syringe, did not show significant differences with regard to pH or PCO2 (PUBMED:18775252). Furthermore, research has indicated that for serial pleural fluid pH measurements, the same type of syringes should be consistently used, and with heparinized syringes, processing of pleural fluid pH measurements should be performed within 1 hour after collection (PUBMED:17675830). This suggests that while the use of heparinized syringes may have an impact on pH measurements, the timing of the analysis is also an important factor. In summary, while the use of heparinized syringes has been the standard practice, evidence suggests that transferring pleural fluid from a non-heparinized syringe to a heparinized one does not significantly affect the pH measurement, provided that the analysis is conducted in a timely manner. This can simplify the procedure, save time, and reduce the risk of complications associated with the use of multiple syringes (PUBMED:9315803; PUBMED:18775252; PUBMED:17675830).
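The statistical question running through these studies is whether the mean paired difference in pH between collection methods stays inside the 0.1-unit threshold treated as clinically important (PUBMED:9315803). A minimal sketch of that paired analysis follows; the pH values below are synthetic illustrations, not data from the cited studies.

```python
import numpy as np
from scipy import stats

CLINICAL_THRESHOLD = 0.1  # pH difference regarded as clinically important

# Synthetic paired measurements, for illustration only: pH of fluid drawn
# directly into a heparinized syringe vs. transferred from a plain syringe.
direct = np.array([7.38, 7.42, 7.35, 7.40, 7.44, 7.31, 7.39, 7.45])
transferred = np.array([7.40, 7.43, 7.37, 7.42, 7.45, 7.33, 7.41, 7.47])

diff = transferred - direct
mean_diff = diff.mean()
lo, hi = stats.t.interval(0.95, df=diff.size - 1,
                          loc=mean_diff, scale=stats.sem(diff))

print(f"mean difference {mean_diff:+.3f} (95% CI {lo:+.3f} to {hi:+.3f})")
# A difference can be statistically significant (CI excludes zero) yet
# clinically unimportant if the whole CI lies within +/- the threshold.
print("clinically important:",
      not (-CLINICAL_THRESHOLD < lo and hi < CLINICAL_THRESHOLD))
```

This mirrors the brief report's pattern: a mean difference of about 0.02 whose confidence interval excludes zero yet sits well inside the 0.1 boundary, so the single-syringe shortcut is statistically detectable but clinically negligible.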
Instruction: Does the impact of managed care on substance abuse treatment services vary by provider profit status? Abstracts: abstract_id: PUBMED:16336553 Does the impact of managed care on substance abuse treatment services vary by provider profit status? Objective: To extend our previous research by determining whether, and how, the impact of managed care (MC) on substance abuse treatment (SAT) services differs by facility ownership. Data Sources: The 2000 National Survey of Substance Abuse Treatment Services, which is designed to collect data on service offerings and other characteristics of SAT facilities in the U.S. These data are merged with data from the 2002 Area Resource File, a county-specific database containing information on population and MC activity. We use data on 10,513 facilities, virtually a census of all SAT facilities. Study Design: For each facility ownership type (for-profit [FP], not-for-profit [NFP], public), we estimate the impact of MC on the number and types of SAT services offered. We use instrumental variables techniques that account for possible endogeneity between facilities' involvement in MC and service offerings. Principal Findings: We find that the impact of MC on SAT service offerings differs in magnitude and direction by facility ownership. On average, MC causes FPs to offer approximately four additional services, causes publics to offer approximately four fewer services, and has no impact on the number of services offered by NFPs. The differential impact of MC on FPs and publics appears to be concentrated in therapy/counseling, medical testing, and transitional services. Conclusion: Our findings raise policy concerns that MC may reduce the quality of care provided by public SAT facilities by limiting the range of services offered. On the other hand, we find that FP clinics increase their range of services. One explanation is that MC results in standardization of service offerings across facilities of different ownership type. Further research is needed to better understand both the specific mechanisms of MC on SAT and the net impact on society. abstract_id: PUBMED:9793111 A new approach to managed care: the provider-run organization. Behavioral managed care has been dominated by for-profit carve-out managed care organizations that deliver mental health and substance abuse services by reducing services and fees to the detriment of patients and providers. We offer a new model of managed care based on a provider-run, hospital-based approach in which provider groups contract directly with HMOs and eliminate the managed care organization intermediaries. This approach allows providers to maintain or regain control of the delivery of behavioral health services. A model is presented of an academically based organization that has achieved utilization patterns compatible with the demands of payors. Innovations in service delivery, network management and fiscal issues are reviewed. abstract_id: PUBMED:15032957 The impact of managed care on substance abuse treatment services. Objective: To examine the impact of managed care on the number and types of services offered by substance abuse treatment (SAT) facilities. Both the number and types of services offered are important factors to analyze, as research shows that a broad range of services increases treatment effectiveness.
Data Sources: The 2000 National Survey of Substance Abuse Treatment Services (NSSATS), which is designed to collect data on service offerings and other characteristics of SAT facilities in the United States. These data are merged with data from the 2002 Area Resource File (ARF), a county-specific database containing information on population and managed care activity. We use data on 10,513 facilities, virtually a census of all SAT facilities. Study Design: We estimate the impact of managed care (MC) on the number and types of services offered by SAT facilities using instrumental variables (IV) techniques that account for possible endogeneity between facilities' involvement in MC and service offerings. Due to limitations of the NSSATS data, MC and specific services are modeled as binary variables. Principal Findings: We find that managed care causes SAT facilities to offer, on average, approximately two fewer services. This effect is concentrated primarily in medical testing services (i.e., tests for TB, HIV/AIDS, and STDs). We also find that MC increases the likelihood of offering substance abuse assessment and relapse prevention groups, but decreases the likelihood of offering outcome follow-up. Conclusion: Our findings raise policy concerns that managed care may reduce treatment effectiveness by limiting the range of services offered to meet patient needs. Further, reduced onsite medical testing may contribute to the spread of infectious diseases that pose important public health concerns. abstract_id: PUBMED:12633003 The impact of managed care on the substance abuse treatment patterns and outcomes of Medicaid beneficiaries: Maryland's HealthChoice program. The introduction of Medicaid managed care raises concern that profit motives lead to the undersupply of substance abuse (SA) services. To test the effects of the Maryland Medicaid HealthChoice program on SA treatment patterns and outcomes, Medicaid eligibility files were linked to treatment provider records and two study designs were used to estimate program impact: a quasi-experimental design with matched comparison groups and a natural experiment. Patient sociodemographic and clinical characteristics were adjusted using multiple regression. Under managed care, there was a shift from residential, correctional-only, and detoxification-only treatment toward outpatient-only treatment. Among beneficiaries entering treatment, those enrolled in managed care organizations (MCOs) had similar utilization and outcomes to those in Medicaid fee-for-service; those enrolling in MCOs during treatment had longer and more intensive episodes and, as a result, better outcomes. Thus, the study disclosed no empirical evidence that health plans respond to capitation by reducing SA services. abstract_id: PUBMED:9782654 The impact of managed care on mental health services for children and their families. For more than a decade, the philosophy of community-based systems of care has guided the delivery of mental health services for children and adolescents served by publicly funded agencies. This philosophy supports system attributes that include a broad array of services; interagency collaboration; treatment in the least-restrictive setting; individualized services; family involvement; and services responsive to the needs of diverse ethnic and racial populations. The notion of systems of care emerged in an era when managed health care also was gaining popularity.
However, the effect of managed care on the delivery of mental health and substance-abuse services--also known as behavioral health services--has not been widely studied. Preliminary results from the nationwide Health Care Reform Tracking Project (HCRTP) inform discussions about the impact of managed behavioral health care on services for children and adolescents enrolled in state Medicaid programs. Most states have used some type of "carve-out design" to finance the delivery of behavioral health services, and there is a trend toward contracting with private-sector, for-profit companies to administer these benefits. In general, managed care has resulted in greater access to basic behavioral health and community-based services for children and adolescents, though access to inpatient hospital care has been reduced. Under managed care, it also has been more difficult for youths with serious emotional disorders, as well as the uninsured, to obtain needed services. With managed care has come a trend toward briefer, more problem-oriented treatment approaches for behavioral health disorders. A number of problems related to the implementation of managed behavioral health care for children and adolescents were illuminated by the HCRTP. First, there is concern that ongoing efforts to develop systems of care for youths with serious emotional disorders are not being linked with managed care initiatives. The lack of investment in service-capacity development, the lack of coordination with other agencies serving children with behavioral health problems, and cumbersome preauthorization requirements that may restrict access to appropriate service delivery were other concerns raised by respondents about managed care. As the adoption of managed behavioral health care arrangements for Medicaid beneficiaries expands rapidly, the HCRTP will continue to analyze how this trend has affected children and adolescents with behavioral health problems and their families. abstract_id: PUBMED:11338326 Public sector managed care for substance abuse treatment: opportunities for health services research. Observations of reduced utilization of alcohol and drug abuse treatment following the introduction of managed behavioral health care suggest that substance abuse services may be especially responsive to managed care restrictions and limits. In publicly funded treatment systems, patient attributes, system and provider characteristics, and financing mechanisms may heighten susceptibility to unintended effects. The State Substance Abuse and Mental Health Treatment Managed Care Evaluation Program reviewed state managed care programs for publicly funded alcohol and drug treatment services and is evaluating programs in Arizona, Iowa, Maryland, and Nebraska. The article describes initiatives and outlines evaluation activities. It discusses the opportunities and challenges of assessing public managed care plans. abstract_id: PUBMED:11887956 The impact of managed care on the use of outpatient mental health and substance abuse services in Puerto Rico. This paper estimates the impact of managed care on use of mental health services by residents of low-income areas in Puerto Rico. A quasi-experimental design evaluates the impact of a low capitation rate on a minority population using three waves of data from a random community sample. Results indicate that two years after introducing managed care, privatization of mental health services had minimal impact on use. 
Advocates had hoped health care reform would increase access in comparison to access seen within the public system, while opponents feared profit motives would lead to decreased access. Neither forecast turned out to be correct. The question remains as to how to improve access for the poor with low capitation rates. abstract_id: PUBMED:12645495 How did the introduction of managed care for the uninsured in Iowa affect the use of substance abuse services? Concerns about access under managed care have been raised for vulnerable populations such as publicly funded patients with substance abuse problems. To estimate the effects of the Iowa Managed Substance Abuse Care Plan (IMSACP) on substance abuse service use by publicly funded patients, service use before and after IMSACP was compared; adjustments were made for changes in population sociodemographic and clinical characteristics. Between fiscal years 1994 and 1997, patient case mix was marked by a higher burden of illness and the use of inpatient, residential nondetox, outpatient counseling, and assessment services declined, while use of intensive outpatient and residential detox services increased. Findings were similar among women, children, and homeless persons. Thus, care moved away from high-cost inpatient settings to less costly venues. Without knowing the impact on treatment outcomes, these changes cannot be interpreted as improved provider efficiency versus simply cost containment and profit maximization. abstract_id: PUBMED:12710370 Managed care and access to substance abuse treatment services. Using nationally representative data from 1995 and 2000, this study examined how managed care penetration and other organizational characteristics were related to accessibility to outpatient substance abuse treatment. At an organizational level, access was measured as the percentage of clients unable to pay for services; the percentage of clients receiving a reduced fee; and the percentage of clients with shortened treatment because of their inability to pay. Treatment units with both relatively low and relatively high managed care penetration were more likely to support access to care; these units provided care to higher percentages of clients unable to pay and were less likely to shorten treatment because of client inability to pay. Treatment units with midrange managed care penetration were least likely to support access to care. The complexity of managing in an environment of conflicting incentives may reduce the organization's ability to serve those with limited financial means. abstract_id: PUBMED:16864469 The impact of publicly funded managed care on adolescent substance abuse treatment outcomes. This study compares the 12-month changes in substance use following admission to substance abuse treatment in Massachusetts between adolescents enrolled in Medicaid managed care and other publicly funded adolescents. Two hundred and fifty-five adolescents were interviewed as they entered substance abuse treatment and at 6 and 12 month follow-ups. Medicaid enrollment data were used to determine the managed care enrollment status. One hundred forty two (56%) adolescents were in the managed care group and 113 (44%) comprise the comparison group. Substance use outcomes include a count of negative consequences of substance use, days of alcohol use, days of cannabis use, and days of any substance use in the previous 30 days. 
Repeated measures analysis of covariance (ANCOVA) was used to assess change with time of measurement and managed care status as main effects, with the interaction of time and managed care included to measure differences between the groups over time. Although several changes across time were detected for all four outcomes, we found no evidence of an impact of managed care for any of the outcomes. The results of our study do not support the fears that behavioral managed care, by imposing limits on services provided, would substantially reduce the effectiveness of substance abuse treatment for adolescents. At the same time, the results do not support those who believe that the continuity of care and improved resource utilization claimed for managed care would improve outcomes. Answer: Yes, the impact of managed care on substance abuse treatment services does vary by provider profit status. A study using data from the 2000 National Survey of Substance Abuse Treatment Services found that managed care causes for-profit (FP) facilities to offer approximately four additional services, while public facilities offer approximately four fewer services. For not-for-profit (NFP) facilities, managed care had no impact on the number of services offered. The differential impact on FPs and public facilities was particularly noted in therapy/counseling, medical testing, and transitional services (PUBMED:16336553). This suggests that managed care may lead to a standardization of service offerings across facilities of different ownership types, but it also raises concerns that the quality of care provided by public substance abuse treatment facilities may be reduced due to a limited range of services offered under managed care. Further research is needed to understand the specific mechanisms by which managed care affects substance abuse treatment and the net impact on society.
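To make the instrumental-variables strategy described in the NSSATS study above (PUBMED:16336553) concrete, here is a minimal Python sketch of two-stage least squares (2SLS). Everything in it is hypothetical: the instrument, the variable names, and the simulated data merely illustrate how an instrument corrects for endogeneity between managed-care involvement and service offerings; this is not the study's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_513  # roughly the number of SAT facilities in the study

# Hypothetical instrument Z (e.g., county-level HMO penetration from the ARF):
# it shifts a facility's managed-care involvement MC but is assumed to affect
# service offerings only through MC.
z = rng.normal(size=n)
confounder = rng.normal(size=n)  # unobserved; the source of endogeneity
mc = (0.8 * z + confounder + rng.normal(size=n) > 0).astype(float)
services = 10.0 - 2.0 * mc + confounder + rng.normal(size=n)

def ols(X, y):
    """Ordinary least squares coefficients via a least-squares solve."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Naive OLS is biased: MC is correlated with the unobserved confounder.
naive = ols(np.column_stack([ones, mc]), services)

# Stage 1: project the endogenous regressor MC onto the instrument.
b1 = ols(np.column_stack([ones, z]), mc)
mc_hat = b1[0] + b1[1] * z

# Stage 2: regress the outcome on the stage-1 fitted values.
iv = ols(np.column_stack([ones, mc_hat]), services)

print("true effect of MC on services: -2.0")
print(f"naive OLS estimate: {naive[1]:+.2f} (biased toward zero)")
print(f"2SLS/IV estimate:   {iv[1]:+.2f}")
```

The second stage uses only the variation in managed-care involvement induced by the instrument, which by assumption is unrelated to the unobserved confounder; that is what lets the IV estimate recover the causal effect.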
Instruction: Universal influenza immunization. Were Ontario family physicians prepared? Abstracts: abstract_id: PUBMED:14594100 Universal influenza immunization. Were Ontario family physicians prepared? Objective: To explore family physicians' experiences during the first year of Ontario's universal influenza immunization program. Design: Qualitative study using in-depth interviews. Setting: Thames Valley region of southwestern Ontario. Participants: A maximum variation sample of nine family physicians selected by snowball sampling after initial consultation with a local family physician advisory committee. Method: Interviews were audiotaped and transcribed verbatim. Analysis was sequential, using a combination of editing, immersion, and crystallization. Interview transcripts were read by individual members of the team who met to compare findings at several stages during data collection. Main Findings: The program affected family physicians because immunization strategies designed for immunizing high-risk patients needed to be modified to deal with greater numbers of patients. While generally supportive of the program, physicians found it difficult to implement. Responses reflected ongoing conflict between individual and public health priorities, particularly regarding children and pregnant women. Conclusion: The program could have been more effective if the culture and climate of Ontario family practice had been considered during its development and implementation. abstract_id: PUBMED:22585770 Low rates of influenza immunization in young children under Ontario's universal influenza immunization program. Objectives: To determine physician-administered influenza vaccine coverage for children aged 6 to 23 months in a jurisdiction with a universal influenza immunization program during 2002-2009 and to describe predictors of vaccination. Methods: By using hospital records, we identified all infants born alive in Ontario hospitals from April 2002 through March 2008. Immunization status was ascertained by linkage to physician billing data. Children were categorized as fully, partially, or not immunized depending on the number and timing of vaccines administered. Generalized linear mixed models determined the association between immunization status and infant, physician, and maternal characteristics. Results: Influenza immunization was low for the first influenza season of the study period (1% fully immunized during the 2002-2003 season), increased for the following 3 seasons (7% to 9%), but then declined (4% to 6% fully immunized during the 2006-2007 to 2008-2009 seasons). Children with chronic conditions or low birth weight were more likely to be immunized. Maternal influenza immunization (adjusted odds ratio 4.31; 95% confidence interval 4.21-4.40), having a pediatrician as the primary care practitioner (adjusted odds ratio 1.85; 95% confidence interval 1.68-2.04), high visit rates, and better continuity of care were all significantly associated with full immunization, whereas measures of social disadvantage were associated with nonimmunization. Low birth weight infants discharged from neonatal care in the winter were more likely to be immunized. Conclusions: Influenza vaccine coverage among children aged 6 to 23 months in Ontario is low, despite a universal vaccination program and high primary care visit rates. Interventions to improve coverage should target both physicians and families. 
abstract_id: PUBMED:25024113 Examining Ontario's universal influenza immunization program with a multi-strain dynamic model. Seasonal influenza imposes a significant worldwide health burden each year. Mathematical models help us to understand how changes in vaccination affect this burden. Here, we develop a new dynamic transmission model which directly tracks the four dominant seasonal influenza strains/lineages, and use it to retrospectively examine the impact of the switch from a targeted to a universal influenza immunization program (UIIP) in the Canadian province of Ontario in 2000. According to our model results, averaged over the first four seasons post-UIIP, the rates of influenza-associated health outcomes in Ontario were reduced to about half of their pre-UIIP values. This is conservative compared to the results of a study estimating the UIIP impact from administrative data, though that study finds age-specific trends similar to those presented here. The strain interaction in our model, together with its flexible parameter calibration scheme, makes it readily extensible to studying scenarios beyond the one explored here. abstract_id: PUBMED:20386727 Economic appraisal of Ontario's Universal Influenza Immunization Program: a cost-utility analysis. Background: In July 2000, the province of Ontario, Canada, initiated a universal influenza immunization program (UIIP) to provide free seasonal influenza vaccines for the entire population. This is the first large-scale program of its kind worldwide. The objective of this study was to conduct an economic appraisal of Ontario's UIIP compared to a targeted influenza immunization program (TIIP). Methods And Findings: A cost-utility analysis using Ontario health administrative data was performed. The study was informed by a companion ecological study comparing physician visits, emergency department visits, hospitalizations, and deaths between 1997 and 2004 in Ontario and nine other Canadian provinces offering targeted immunization programs. The relative change estimates from pre-2000 to post-2000 as observed in other provinces were applied to pre-UIIP Ontario event rates to calculate the expected number of events had Ontario continued to offer targeted immunization. Main outcome measures were quality-adjusted life years (QALYs), costs in 2006 Canadian dollars, and incremental cost-utility ratios (incremental cost per QALY gained). Program and other costs were drawn from Ontario sources. Utility weights were obtained from the literature. The incremental cost of the program per QALY gained was calculated from the health care payer perspective. Ontario's UIIP costs approximately twice as much as a targeted program but reduces influenza cases by 61% and mortality by 28%, saving an estimated 1,134 QALYs per season overall. Reducing influenza cases decreases health care services cost by 52%. Most cost savings can be attributed to hospitalizations avoided. The incremental cost-effectiveness ratio is Can$10,797/QALY gained. Results are most sensitive to immunization cost and number of deaths averted. Conclusions: Universal immunization against seasonal influenza was estimated to be an economically attractive intervention. abstract_id: PUBMED:16716034 The effect of universal influenza immunization on vaccination rates in Ontario. Objectives: This article examines the association between introduction of Ontario's Universal Influenza Immunization Program and changes in vaccination rates over time in Ontario, compared with the other provinces combined.
Data Sources: The data are from the 1996/97 National Population Health Survey and the 2000/01 and 2003 Canadian Community Health Survey, both conducted by Statistics Canada. Analytical Techniques: Cross-tabulations were used to estimate vaccination rates for the total population aged 12 or older, for groups especially vulnerable to the effects of influenza, and by selected socio-demographic variables. Z tests and multiple logistic regression were used to examine differences between estimates. Main Results: Between 1996/97 and 2000/01, the increase in the overall vaccination rate in Ontario was 10 percentage points greater than the increase in the other provinces combined. Increases in Ontario were particularly pronounced among people who were: younger than 65, more educated, and had a higher household income. Between 2000/01 and 2003, vaccination rates were stable in Ontario, while rates continued to rise in the other provinces. Even so, Ontario's 2003 rates exceeded those in the other provinces. abstract_id: PUBMED:19624280 The effect of universal influenza immunization on antibiotic prescriptions: an ecological study. The Canadian province of Ontario introduced universal influenza immunization in 2000, offering free vaccines to the entire population. We compared changes in rates of influenza-associated respiratory antibiotic prescriptions before and after universal immunization in Ontario with corresponding changes in other provinces. Universal influenza immunization is associated with reduced influenza-associated antibiotic prescriptions. abstract_id: PUBMED:32040380 Recommending immunizations to adolescents in Turkey: a study of the knowledge, attitude, and practices of physicians. Introduction: The aim of this study was to determine the knowledge, attitudes, and practices of family physicians and pediatricians in regard to adolescent immunization. Methods: The study was conducted from March to May 2017. A total of 665 physicians participated. Participants were asked 31 questions about their personal sociodemographic characteristics and their knowledge, attitudes, and practices around adolescent immunization. Results: The study sample consisted of 348 family physicians (52.3% of the sample) and 317 pediatricians (47.7%). The results showed that 5.4% of family physicians and 10.4% of pediatricians thought that they had enough knowledge about adolescent immunization (p < .01). Overall, 15.8% of family physicians and 12.7% of pediatricians provided adolescents with information about vaccines 'always/most of the time'. A variety of reasons for not providing information about adolescent vaccines was provided, including 'inability to allocate time' (50.2% of family physicians, 69.3% of pediatricians); 'forgetfulness' (34.8% of family physicians, 28.5% of pediatricians); 'lack of knowledge about vaccines' (34.1% of family physicians, 27.4% of pediatricians); and 'no need to immunize adolescents' (15.7% of family physicians, 6.5% of pediatricians) (p < .01). HPV immunization was recommended only to girls by 30.5% of family physicians and 38.8% of pediatricians (p < .01). The percentages of family physicians and pediatricians not recommending that adolescents be immunized with the Tdap vaccine were 53.4% and 42.6%, respectively (p = .016).
Meningococcal immunization was not recommended by 20.7% of family physicians and 11.4% of pediatricians (p < .01), and influenza immunization was not recommended by 10.3% of family physicians and 8.2% of pediatricians (p < .01). Conclusion: Family physicians and pediatricians in Turkey have low rates of recommendation of immunization to adolescents. Reasons for not recommending immunization include an inability to allocate time, forgetfulness, and lack of knowledge about vaccines. We conclude that educational programs should be used to improve knowledge of adolescent immunization among family physicians and pediatricians. abstract_id: PUBMED:16624458 Incidence of influenza in Ontario following the Universal Influenza Immunization Campaign. The purpose of this study was to determine whether the incidence of influenza in Ontario, Canada has decreased following the introduction of the Universal Influenza Immunization Campaign (UIIC) in 2000. All laboratory-confirmed influenza cases in Ontario, from January 1990 to August 2005 were analyzed using multitaper time series analysis. We found that there has not been a decrease in the mean monthly influenza rate following the introduction of the UIIC (109.5 (S.D. 20) versus 160 (S.D. 50.3), p > 0.1). Despite increased vaccine distribution and financial resources towards promotion, the incidence of influenza in Ontario has not decreased following the introduction of the UIIC. abstract_id: PUBMED:18959473 The effect of universal influenza immunization on mortality and health care use. Background: In 2000, Ontario, Canada, initiated a universal influenza immunization program (UIIP) to provide free influenza vaccines for the entire population aged 6 mo or older. Influenza immunization increased more rapidly in younger age groups in Ontario compared to other Canadian provinces, which all maintained targeted immunization programs. We evaluated the effect of Ontario's UIIP on influenza-associated mortality, hospitalizations, emergency department (ED) use, and visits to doctors' offices. Methods And Findings: Mortality and hospitalization data from 1997 to 2004 for all ten Canadian provinces were obtained from national datasets. Physician billing claims for visits to EDs and doctors' offices were obtained from provincial administrative datasets for four provinces with comprehensive data. Since outcomes coded as influenza are known to underestimate the true burden of influenza, we studied more broadly defined conditions. Hospitalizations, ED use, doctors' office visits for pneumonia and influenza, and all-cause mortality from 1997 to 2004 were modelled using Poisson regression, controlling for age, sex, province, influenza surveillance data, and temporal trends, and used to estimate the expected baseline outcome rates in the absence of influenza activity. The primary outcome was then defined as influenza-associated events, or the difference between the observed events and the expected baseline events. Changes in influenza-associated outcome rates before and after UIIP introduction in Ontario were compared to the corresponding changes in other provinces. After UIIP introduction, influenza-associated mortality decreased more in Ontario (relative rate [RR] = 0.26) than in other provinces (RR = 0.43) (ratio of RRs = 0.61, p = 0.002).
Similar differences between Ontario and other provinces were observed for influenza-associated hospitalizations (RR = 0.25 versus 0.44, ratio of RRs = 0.58, p < 0.001), ED use (RR = 0.31 versus 0.69, ratio of RRs = 0.45, p < 0.001), and doctors' office visits (RR = 0.21 versus 0.52, ratio of RRs = 0.41, p < 0.001). Sensitivity analyses were carried out to assess consistency, specificity, and the presence of a dose-response relationship. Limitations of this study include the ecological study design, the nonspecific outcomes, difficulty in modeling baseline events, data quality and availability, and the inability to control for potentially important confounders. Conclusions: Compared to targeted programs in other provinces, introduction of universal vaccination in Ontario in 2000 was associated with relative reductions in influenza-associated mortality and health care use. The results of this large-scale natural experiment suggest that universal vaccination may be an effective public health measure for reducing the annual burden of influenza. abstract_id: PUBMED:32409138 Barriers and drivers to adult vaccination among family physicians - Insights for tailoring the immunization program in Germany. Background: In Germany, vaccination gaps exist mainly among adolescents and adults. Family physicians (FPs) administer adult vaccines. FPs strongly influence the vaccination behavior and attitudes of their patients, so their own vaccination-related attitudes and behaviors are critical to achieve high vaccination coverage. The aim of this study was to identify determinants of FPs' own vaccination uptake and their recommendation behavior. Method: 700 FPs participated in a random sampled telephone survey. Respondents were interviewed in both their roles as vaccine recipients and vaccine providers. Thus, participants indicated their own vaccination status and recommendation behavior as primary outcomes. Primary determinants were the 5C psychological antecedents of vaccination. In addition, participants indicated demographic data and other barriers towards vaccination. Association between outcome and determinants were examined using logistic regression models. Results: Around 60% of physicians reported to be vaccinated against influenza, pertussis and hepatitis B, and the majority claimed to recommend vaccines to patients. Own vaccination status was significantly associated with the recommendation of vaccines. Of the psychological determinants confidence in the safety of vaccines was associated with own vaccination and recommendation behavior. Collective responsibility, constraints and complacency were associated with own vaccination status. Being from western Germany and being a homeopathic FP were independently associated with lower own vaccination behavior. Vaccine shortages (52.5%) and cost coverage problems (25.6%) were reported frequently as system-related barriers. There was a perception that the National Immunization Technical Advisory Group was influenced by other interests (14.8%) and that people are vaccinated against too many diseases (8%). Around 40% had implemented an office-based reminder system. Discussion: FPs' vaccination behaviors are associated with various psychological determinants and additional barriers. In particular, confidence can leverage FPs' vaccination behaviors. Promoting office-based reminder systems, reducing system-related barriers, and building trust in official recommendations are additional measures to improve adult vaccination in Germany.
Answer: The introduction of Ontario's universal influenza immunization program (UIIP) presented challenges for family physicians in the region. While they generally supported the program, they found it difficult to implement due to the need to modify immunization strategies to accommodate the increased number of patients. The program affected family physicians by requiring changes to strategies that were originally designed for immunizing high-risk patients. The responses from the physicians reflected a conflict between individual and public health priorities, especially concerning children and pregnant women. The effectiveness of the program could have been enhanced if the culture and climate of Ontario family practice had been taken into account during the development and implementation of the UIIP (PUBMED:14594100). Despite the UIIP, influenza immunization rates among young children in Ontario remained low. The first influenza season of the study period (2002-2003) saw only 1% of children fully immunized, with a slight increase in the following three seasons, and then a decline again. Factors such as chronic conditions, low birth weight, maternal influenza immunization, having a pediatrician as the primary care practitioner, high visit rates, and better continuity of care were associated with full immunization. Conversely, social disadvantage was linked to non-immunization. The study concluded that to improve coverage, interventions should target both physicians and families (PUBMED:22585770). Overall, the introduction of the UIIP did not lead to a decrease in the mean monthly influenza rate in Ontario, despite increased vaccine distribution and financial resources towards promotion (PUBMED:16624458). However, another study found that the UIIP was associated with reduced influenza-associated mortality and health care use, suggesting that universal vaccination may be an effective public health measure for reducing the annual burden of influenza (PUBMED:18959473). In summary, while Ontario family physicians faced challenges with the implementation of the UIIP, the program's introduction did lead to some positive outcomes in terms of reduced influenza-associated health outcomes. However, the low immunization rates among young children and the unchanged incidence of influenza suggest that there were areas where the program's implementation could have been better supported and where family physicians could have been better prepared.
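The key outcome in PUBMED:18959473 is "influenza-associated events", defined as observed events minus a modeled baseline. The following sketch reproduces that observed-minus-expected logic on simulated weekly counts with a Poisson GLM; the data, the coefficients, and the single surveillance covariate are all hypothetical stand-ins for the study's richer model (which also adjusted for age, sex, and province).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
weeks = 8 * 52  # eight seasons of weekly counts
t = np.arange(weeks)

# Simulated influenza surveillance signal (winter peaks) and a slow trend.
flu_activity = np.clip(np.sin(2 * np.pi * t / 52), 0, None) ** 4
log_mu = 3.0 + 0.001 * t + 1.2 * flu_activity
observed = rng.poisson(np.exp(log_mu))

# Poisson regression of counts on trend + surveillance activity.
X = sm.add_constant(np.column_stack([t, flu_activity]))
fit = sm.GLM(observed, X, family=sm.families.Poisson()).fit()

# Expected baseline: the same model with influenza activity forced to zero.
X0 = X.copy()
X0[:, 2] = 0.0
expected_baseline = fit.predict(X0)

excess = observed - expected_baseline
print(f"influenza-associated events over {weeks} weeks: {excess.sum():,.0f}")
```

Predicting with the surveillance covariate set to zero yields the counterfactual "no influenza activity" baseline; summing the observed-minus-expected differences gives the excess burden attributed to influenza, which is the quantity compared between Ontario and the other provinces.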
Instruction: Is low serum tocopherol in Wilson's disease a significant symptom? Abstracts: abstract_id: PUBMED:15694191 Is low serum tocopherol in Wilson's disease a significant symptom? Background: Free radical mediated injury is increasingly recognized in many metabolic diseases including Wilson's disease (WD). Use of antioxidants as an adjunctive therapy in WD may have therapeutic significance. Aim: The aim of the study was to correlate serum levels of tocopherols with serum copper and ceruloplasmin and clinical status of these patients. Methods: Serum levels of tocopherol were measured spectrophotometrically using the Emmerie-Engel reaction in 34 patients from a large cohort of WD being followed up at a tertiary care center. Results: Majority of patients were male (M/F=23:11). The mean serum copper was 43.6+/-26.2 microg/dl (range=10-121 microg/dl) and serum ceruloplasmin was 5.6+/-5.5 mg/dl (range=0-30 mg/dl). The mean serum tocopherol level was 0.68+/-0.18 mg/dl (range=0.23-1.14 mg/dl) and compared to the control (1.07+/-0.17 mg/dl), nearly 59% of patients had decreased levels (p<0.001). No significant correlation was noted between low serum tocopherol levels and serum copper levels, Mini Mental Status Examination (MMSE) scores and CHU staging. However, serum tocopherol levels were lower in patients with relatively short duration of treatment (7.8 years vs. 12.4 years). Conclusion: Decreased levels of serum tocopherol were detected in 59% of patients compared to controls. However, low tocopherol levels did not correlate with clinical status or biochemical parameters of WD, except for relatively shorter duration of treatment. Further studies, especially in newly diagnosed patients, need to be done to validate the role of low tocopherol levels in Wilson's disease. abstract_id: PUBMED:11054132 The level of serum lipids, vitamin E and low density lipoprotein oxidation in Wilson's disease patients. Unlabelled: The aim of this study was to estimate the level of lipids and of the main serum antioxidant, alpha-tocopherol (vitamin E), and to evaluate the susceptibility of low density lipoprotein (LDL) to oxidation in Wilson's disease patients. It was assumed that enhanced LDL peroxidation caused by high copper levels could contribute to the injury of liver and other tissues. The group investigated comprised 45 individuals with Wilson's disease treated with penicillamine or zinc salts and a control group of 36 healthy individuals. Lipids were determined by enzymatic methods, alpha-tocopherol by high performance liquid chromatography, the susceptibility of LDL to oxidation in vitro by absorption changes at 234 nm during 5 h and end-products of LDL lipid oxidation as thiobarbituric acid reacting substances. In Wilson's disease patients total cholesterol, LDL cholesterol and alpha-tocopherol levels were significantly lower compared with the control group. No difference in LDL oxidation in vitro between the patients and the controls was stated. Conclusion: enhanced susceptibility of isolated LDL for lipid peroxidation in vitro was not observed in Wilson's disease patients. One cannot exclude, however, that because of low alpha-tocopherol level lipid peroxidation in the tissues can play a role in the pathogenesis of tissue injury in this disease. abstract_id: PUBMED:539789 Serum antioxidant activity in normal and abnormal subjects.
Serum antioxidant activity (AOA) was correlated with the serum caeruloplasmin and serum copper concentration and with the total and available serum iron-binding capacity in 313 normal and abnormal subjects. In all groups except in patients with Wilson's disease (hepatolenticular degeneration) there was a highly significant direct correlation between serum AOA and serum caeruloplasmin concentration. A statistically significant direct correlation between serum AOA and the available iron-binding capacity of serum was found only in normal subjects and in children with thalassemia major and iron overload. There was no correlation between serum AOA and the serum tocopherol concentration in any of the groups studied. abstract_id: PUBMED:15562734 Essential fatty acid status in infants and children with chronic liver disease. The relationship between essential fatty acid (EFA) status and degree of hyperbilirubinaemia and oxidant stress in infants and children with chronic liver diseases was evaluated. Thirty patients with chronic cholestasis and 30 with liver cirrhosis were examined; 30 healthy subjects served as controls. Patient groups had significant decreases in EFAs and significant elevation of total bilirubin. Levels of thiobarbituric acid reactive substances were significantly raised and were significantly inversely correlated to decreased EFA levels. There were also significant decreases in retinol, alpha-tocopherol and alpha-tocopherol/total lipids ratio, which had significant positive correlations with decreased EFA levels. Infants and children with chronic liver diseases have a high risk of EFA deficiency correlated with progressive elevation of serum bilirubin and progressive deterioration of oxidant status. abstract_id: PUBMED:8201221 Low vitamin E content in plasma of patients with alcoholic liver disease, hemochromatosis and Wilson's disease. The RRR-alpha-tocopherol (vitamin E) content in plasma from 46 patients with liver diseases and 23 healthy controls was determined by high performance liquid chromatography and electrochemical detection. Patients were divided into three groups: alcoholic liver diseases (n = 17; group A), hemochromatosis (n = 17; group B) and Wilson's disease (n = 12; group C). Lipid-standardized alpha-tocopherol levels were determined to neutralize differences due to hyperlipemia. The ratio of serum vitamin E to serum lipids (cholesterol, triglycerides, phospholipids) was highest in healthy controls and in patients in group A with cirrhosis and normal transaminases and bilirubin. Patients in group A with acute or chronic ethanol intoxication and high bilirubin levels had a 37% lower lipid-standardized vitamin E level than controls. Patients in group B with hemochromatosis, showing high serum iron (> 180 micrograms/dl), a low free iron binding capacity (< 8 µmol/l) and high ferritin-levels (> 450 micrograms/l), had a 34% lower vitamin E/lipid ratio than healthy controls. No significant lowering of the vitamin E/lipid ratio was observed in the other patients in group B. A significant decrease (37%) in the vitamin E/lipid ratio was only detectable in patients with Wilson's disease (group C) showing high free serum copper (> 10 micrograms/dl). The data support a role for free radicals in the pathogenesis of active liver diseases. abstract_id: PUBMED:30881280 Unusually Low Serum Alkaline Phosphatase Activity in a Patient with Acute on Chronic Liver Failure and Hemolysis.
A 28-year-old male with acute on chronic liver failure (ACLF) and hepatic encephalopathy had deranged liver function with a curiously low level (0-15 IU/L) of serum alkaline phosphatase (ALP). Peripheral smear examination suggested hemolytic anemia. The finding of persistent low ALP, after ruling out pre-analytical causes, in ACLF has been reported in Wilson's disease (WD) with/without autoimmune hemolytic anemia (AIHA). Definitive evidence of WD was not seen in our case. Positive DCT and histological features suggest a diagnosis of autoimmune hepatitis with secondary hemochromatosis and cholangitis. Low ALP might not always be a determinant of bile duct pathology in patients of ACLF with AIHA. abstract_id: PUBMED:17727313 Clinical significance of the laboratory determination of low serum copper in adults. Background: Low serum copper is often indicative of copper deficiency. Acquired copper deficiency can cause hematological/neurological manifestations. Wilson disease (copper toxicity) is associated with neurological manifestations and low serum copper, with copper deposited in tissues responsible for the toxicity. Low serum copper can also be observed in some carriers of the Wilson disease gene and aceruloplasminemia. This study was undertaken to determine the clinical significance of low serum copper. Methods: The Mayo Medical Laboratories' Metals Laboratory database was reviewed over a 9-month period to identify patients who received their care at the Mayo Clinic and had low serum copper. The medical records were analyzed to determine the significance of the low copper. Results: In six of the 57 patients with low serum copper, the low copper was due to Wilson disease. In the remaining 51 patients, copper deficiency due to an underlying cause was identified in 38 as a reason for the low serum copper. The most commonly identified neurological manifestation of copper deficiency was myeloneuropathy. Coexisting nutrient deficiencies and hematological manifestations of copper deficiency were often but not invariably present. Conclusions: Copper deficiency, Wilson disease (or a carrier state), and aceruloplasminemia are all associated with low serum copper. The presence of coexisting neurological or hematological manifestations that are recognized sequelae of copper deficiency should be considered prior to making a diagnosis of copper deficiency. Gastrointestinal disease or surgery is a common cause of acquired copper deficiency. Even in patients in whom low serum copper is indicative of copper deficiency, the cause of the copper-deficient state may not be evident. abstract_id: PUBMED:36570465 Role of serum ceruloplasmin in the diagnosis of Wilson's disease: A large Chinese study. Background: Conventionally, a serum ceruloplasmin level below the lower reference limit (0.20 g/L) is considered a diagnostic cutoff point for Wilson's disease (WD). However, the lower reference limit varies with assay methodologies and the individuals in the included studies. The objective of this study was to determine the optimal cutoff value of serum ceruloplasmin levels for the diagnosis of WD in a large Chinese cohort and to identify factors associated with serum ceruloplasmin. Methods: The cutoff value of ceruloplasmin levels was developed based on a retrospective derivation cohort of 3,548 subjects (1,278 patients with WD and 2,270 controls) and was validated in a separate validation cohort of 313 subjects (203 patients with WD and 110 controls).
The performance of immunoassay was tested by receiver operating characteristic curve (ROC) analysis, and differences among the groups were analyzed by using the Mann-Whitney U-test and the Kruskal-Wallis test. Results: The conventional cutoff of serum ceruloplasmin levels of <0.2 g/L had an accuracy of 81.9%, which led to a false-positive rate of 30.5%. The optimal cutoff of the serum ceruloplasmin level for separating patients with WD from other participants was 0.13 g/L, as determined by ROC analysis. This cutoff value had the highest AUC value (0.99), a sensitivity of 97.0%, and a specificity of 96.1%. Moreover, it prevented unnecessary further investigations and treatments for 492 false-positive patients. By determining the correlation between serum ceruloplasmin and phenotypes/genotypes in patients with WD, we found that the serum ceruloplasmin level was lower in early-onset patients and higher in late-onset patients. Interestingly, patients with the R778L/R919G genotype had higher serum ceruloplasmin levels than patients with other hot spot mutation combinations. Conclusion: Our work determined the optimal cutoff value of serum ceruloplasmin levels for the diagnosis of WD and identified differences in serum ceruloplasmin levels with respect to the age of symptom onset and ATP7B mutations, which may provide some valuable insights into the diagnosis and counsel of patients with WD. abstract_id: PUBMED:3758940 Low serum alkaline phosphatase activity in Wilson's disease. Low values for serum alkaline phosphatase activity were observed early in the course of two patients with Wilson's disease presenting with the combination of severe liver disease and Coombs' negative acute hemolytic anemia. A review of other cases of Wilson's disease revealed that 11 of 12 patients presenting with hemolytic anemia had values for serum alkaline phosphatase less than their respective sex- and age-adjusted mean values; in eight, serum alkaline phosphatase activity was less than the lower value for the normal range of the test. Low values for serum alkaline phosphatase were much less common in Wilson's disease patients with more chronic forms of presentation. Copper added in high concentration to serum in vitro did not have an important effect on serum alkaline phosphatase activity. The mechanism responsible for the decrease in serum alkaline phosphatase activity in patients is uncertain. abstract_id: PUBMED:26904791 The Clinical Utility of a Low Serum Ceruloplasmin Measurement in the Diagnosis of Wilson Disease. The first step in screening for potential Wilson disease is serum ceruloplasmin testing, whereby a level of less than 0.2 g/L is suggestive of the disease. We aimed to determine what proportion of an Irish population had a low ceruloplasmin level, whether low measurements were appropriately followed-up and what were the clinical outcomes. We conducted a retrospective review of all serum ceruloplasmin measurements between August 2003 and October 2009 in a large tertiary referral centre in Southern Ireland. Clinical data, serum ceruloplasmin, liver function tests, urinary copper and liver biopsy reports were all recorded where available. 1573 patients had a serum ceruloplasmin measurement during the 7-year study period. 96 patients (6.1%) had a ceruloplasmin level < 0.2 g/L and of these only 3 patients had Wilson disease. There was only 1 new diagnosis. Only 27 patients (28.1%) had some form of confirmatory testing performed.
In our centre's experience, the positive predictive value of a significantly low ceruloplasmin level is 11.1% (95% CI 2.91-30.3%). In practice a low serum ceruloplasmin measurement is often not followed by appropriate confirmatory testing. Measuring serum ceruloplasmin as a singular diagnostic test for Wilson disease or as part of the battery of unselected liver screening tests is inappropriate and low-yield. Answer: Low serum tocopherol, also known as vitamin E, has been observed in patients with Wilson's disease (WD), but its significance as a symptom is not clearly established. In one study, decreased levels of serum tocopherol were detected in 59% of patients with Wilson's disease compared to controls. However, the study found no significant correlation between low serum tocopherol levels and the clinical status or biochemical parameters of WD, except for a relatively shorter duration of treatment (PUBMED:15694191). This suggests that while low tocopherol levels are common in WD patients, they may not directly reflect the severity or progression of the disease. Another study found that patients with Wilson's disease had significantly lower levels of alpha-tocopherol (vitamin E) compared to a control group. Despite this, there was no observed difference in the in vitro oxidation of low-density lipoprotein (LDL) between patients and controls, leading to the conclusion that enhanced susceptibility to LDL peroxidation was not evident in WD patients. However, the study suggested that due to the low alpha-tocopherol level, lipid peroxidation in tissues could play a role in the pathogenesis of tissue injury in Wilson's disease (PUBMED:11054132). In contrast, a study that measured serum antioxidant activity (AOA) found no correlation between serum AOA and serum tocopherol concentration in any of the groups studied, including patients with Wilson's disease (PUBMED:539789). This indicates that serum tocopherol levels may not be a reliable indicator of antioxidant activity in WD. Overall, while low serum tocopherol is frequently observed in Wilson's disease patients, its clinical significance as a symptom of WD remains uncertain. Further research is needed to clarify the role of tocopherol deficiency in the pathogenesis and clinical management of Wilson's disease.
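The 0.13 g/L cutoff in PUBMED:36570465 came from ROC analysis. The sketch below shows that workflow on simulated ceruloplasmin values; only the cohort sizes (1,278 WD patients, 2,270 controls) echo the study, while the distributions are invented, so the resulting numbers are illustrative rather than the published ones. The threshold is chosen by maximizing Youden's J (sensitivity + specificity - 1).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
# Hypothetical serum ceruloplasmin (g/L): lower in WD than in controls.
wd = rng.normal(0.07, 0.04, 1278).clip(0.0, None)
controls = rng.normal(0.28, 0.07, 2270).clip(0.0, None)

values = np.concatenate([wd, controls])
is_wd = np.concatenate([np.ones(wd.size), np.zeros(controls.size)])

# Low ceruloplasmin indicates disease, so use the negated value as the score.
fpr, tpr, thresholds = roc_curve(is_wd, -values)
j = tpr - fpr  # Youden's J at each candidate threshold
best = j.argmax()
cutoff = -thresholds[best]

print(f"AUC: {roc_auc_score(is_wd, -values):.3f}")
print(f"optimal cutoff: ceruloplasmin < {cutoff:.2f} g/L")
print(f"sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%}")
```

Negating the values is just a convenience so that higher scores mean "more likely diseased", which is the orientation roc_curve expects; the chosen threshold is then negated back onto the original concentration scale.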
Instruction: Is a bioprosthesis with a rigid stent a good choice for aortic valve replacement in elderly patients? Abstracts: abstract_id: PUBMED:12238259 Is a bioprosthesis with a rigid stent a good choice for aortic valve replacement in elderly patients? Background: Either stented or stentless bioprostheses can be used for aortic valve replacement (AVR) in aged patients. However, the choice of the valve type remains controversial. The implantation technique of the stentless valves is more complex, but the haemodynamic performance is supposed to be superior to that of the stented ones. The aim of the study was to review our experience with stented bioprostheses implanted in the last year. Material And Methods: The study retrospectively reviews 35 patients who underwent AVR with the Biocor valve (St Jude Medical) from May 2000 to May 2001. The mean age was 73 years (65-81). Associated procedures were CABG in 17, aortoplasty in 3 and Bentall procedure in 1. Thirty-two patients had aortic stenosis; the mean preoperative gradient was 44.2 mmHg. Nineteen implanted valves were 23 mm and smaller in diameter. All patients were examined by a cardiologist (including ECHO) one month after surgery. Results: There was no early mortality (30 days) and no sign of structural valve deterioration or valve thrombosis. Mean hospital stay was 10.2 days (5-30). Mean postoperative gradient one month after surgery was 14.1 mmHg (6-24). Conclusions: The AVR with a stented bioprosthesis is a standard procedure with excellent results; the postoperative gradient is comparable to the gradient of the stentless valves. abstract_id: PUBMED:2294343 Aortic valve replacement with stentless porcine aortic bioprosthesis. Twenty-nine patients were entered in a clinical trial on aortic valve replacement with a stentless glutaraldehyde-fixed porcine aortic valve. This bioprosthesis is secured to the aortic root by the same technique used for aortic valve replacement with aortic valve homografts. The functional results obtained from this operation have been most satisfactory. To assess the hemodynamic benefit of eliminating the stent of a porcine aortic valve, we matched 22 patients with a stentless porcine bioprosthesis for age, sex, body surface area, valve lesion, and bioprosthesis size to 22 patients who had aortic valve replacement with a Hancock II bioprosthesis. Mean and peak systolic gradients across the aortic bioprosthesis and effective aortic valve areas were obtained by Doppler studies. Gradients across the stentless bioprosthesis were significantly lower than gradients across the Hancock II valve for every bioprosthesis size. Effective aortic valve areas of the stentless bioprosthesis were significantly larger than the valve areas of the Hancock II valve. Our data demonstrate that the hemodynamic characteristics of a glutaraldehyde-fixed porcine aortic bioprosthesis are greatly improved when the aortic root is used as a stent for the valve. This technique of implantation is expected to enhance the durability of the bioprosthesis, because the aortic root may dampen the mechanical stress to which the leaflets are subjected during the cardiac cycle. abstract_id: PUBMED:22916051 Transcatheter aortic valve replacement in elderly patients. Aortic stenosis is the most common native valve disease, affecting up to 5% of the elderly population. Surgical aortic valve replacement reduces symptoms and improves survival, and is the definitive therapy in patients with symptomatic severe aortic stenosis.
However, despite the good results of classic surgery, risk is markedly increased in elderly patients with co-morbidities. Transcatheter aortic valve replacement (TAVR) allows implantation of a prosthetic heart valve within the diseased native aortic valve without the need for open heart surgery and cardiopulmonary bypass, offering a new therapeutic option to elderly patients considered at high surgical risk or with contraindications to surgery. To date, several multicenter registries and a randomized trial have confirmed the safety and efficacy of TAVR in those patients. In this chapter, we review the background and clinical applications of TAVR in elderly patients. abstract_id: PUBMED:27359372 Transcatheter valve-in-valve implantation for degenerated bioprosthetic aortic and mitral valves. Introduction: Redo surgery still is the treatment of choice for degenerated bioprostheses. However, as far as elderly patients with concomitant comorbidities are concerned, the standard reoperation carries additional operative risks and, therefore, minimally invasive procedures must be prioritized. Areas Covered: During the last ten years, transcatheter procedures in native valves have become a standard technique in several centers with excellent procedural and mid-term results. Similarly, implantation of transcatheter stent-valves within degenerated aortic and mitral bioprostheses, the 'valve-in-valve' procedure (V-in-V), represents a valid alternative to redo surgery in patients with high-risk surgical profiles. New challenges for V-in-V are the transcatheter stent-valve deployment in hostile targets (stented bioprostheses with externally mounted leaflets, stentless valves, small bioprostheses), and avoiding complications such as delayed atrial embolization of mitral implantation and V-in-V thrombosis. Moreover, continually improved designs of the devices on the market and newly developed transcatheter stent-valves aim to improve the outcome and safety of V-in-V treatment. Expert commentary: We reviewed the clinical outcomes and the procedural details of published transcatheter aortic and mitral valve-in-valve series focusing, in particular, on data from the Valve-in-Valve International Data registry (VIVID), and we provide a practical guide for valve sizing and stent-valve positioning. abstract_id: PUBMED:37834910 Transcatheter Aortic Valve Replacement in Degenerated Perceval Bioprosthesis: Clinical and Technical Aspects in 32 Cases. Background: Sutureless aortic bioprostheses are increasingly being used to provide shorter cross-clamp time and facilitate minimally invasive aortic valve replacement. As the use of sutureless valves has increased over the past decade, we begin to encounter their degeneration. We describe clinical outcomes and technical aspects in patients with degenerated sutureless Perceval (CorCym, Italy) aortic bioprosthesis treated with valve-in-valve transcatheter aortic valve replacement (VIV-TAVR). Methods: Between March 2011 and March 2023, 1310 patients underwent aortic valve replacement (AVR) with Perceval bioprosthesis implantation. Severe bioprosthesis degeneration treated with VIV-TAVR occurred in 32 patients with a mean of 6.4 ± 1.9 years (range: 2-10 years) after first implantation. Mean EuroSCORE II was 9.5 ± 6.4% (range: 1.9-35.1%). Results: Thirty of thirty-two (94%) VIV-TAVR were performed via transfemoral and two (6%) via transapical approach. Vascular complications occurred in two patients (6%), and mean hospital stay was 4.6 ± 2.4 days.
At mean follow-up of 16.7 ± 15.2 months (range: 1-50 months), survival was 100%, and mean transvalvular pressure gradient was 18.7 ± 5.3 mmHg. Conclusion: VIV-TAVR is a useful option for degenerated Perceval and appears safe and effective. This procedure is associated with good clinical results and excellent hemodynamic performance in our largest single-center experience. abstract_id: PUBMED:10824474 Early experience of aortic valve replacement with the Freestyle stentless aortic bioprosthesis in elderly patients. Objectives: Stentless bioprostheses have been gaining popularity in recent years as hemodynamically superior alternatives to conventional stented bioprostheses. Methods: Between July 1996 and November 1998, 13 patients with aortic valve disease, 7 males and 6 females with a mean age (+/- SD) of 68 +/- 5 years, underwent an aortic valve replacement using the Medtronic Freestyle aortic bioprosthesis. The predominant lesions were stenosis in 8 patients and regurgitation in 5, while 2 patients had endocarditis. The operation was performed by a subcoronary technique in 9, root-inclusion technique in 3, and full root technique in 1 patient. Results: Throughout the follow-up periods (with average follow-up period of 20.6 months), there was no hospital mortality, though there was one late death of unknown cause. The New York Heart Association class improved in all patients. The peak transvalvular gradient decreased from 18.4 +/- 9.8 to 12.6 +/- 9.6 mmHg, and the effective valve orifice area increased from 2.30 +/- 0.96 to 2.59 +/- 1.05 cm2 between the 1-month and the 6-month follow-up examinations. In patients with aortic regurgitation, the left ventricular end-diastolic/end-systolic volume index significantly decreased from 147 +/- 36/62 +/- 19 to 73 +/- 26/33 +/- 14 ml/m2 at 1 month after the operation. The left ventricular mass index also significantly decreased from 189 +/- 26 to 143 +/- 30 g/m2 in patients with aortic regurgitation and from 171 +/- 28 to 144 +/- 30 g/m2 in those with aortic stenosis. Conclusions: Although long-term follow-up is required for further evaluation, the early results appeared to indicate that the Freestyle aortic bioprosthesis was suitable for elderly patients requiring aortic valve replacement. abstract_id: PUBMED:9930422 Aortic valve replacement in the elderly: bioprosthesis or mechanical valve? Background: With increased life expectancy, valve operations are more and more common in elderly patients. The choice of valve substitute-mechanical valve or bioprosthesis-remains debated. Methods: Two groups of patients of the same age (69, 70, and 71 years) with isolated aortic valve replacement (mechanical 240, bioprostheses 289) were compared for mortality, morbidity, and valve-related complications. Results: No significant difference was found in survival, valve-related mortality, valve endocarditis, and thromboembolism. Mechanical valve had more bleeding events; bioprostheses had more structural deterioration, reoperation, and valve-related morbidity and mortality. Conclusions: To avoid reoperations in octogenarians, the 10-year durability of current bioprostheses should be matched with the life expectancy of the particular patient. Bioprostheses should be used after 74 years in men and 78 years in women. abstract_id: PUBMED:30581583 Edwards Intuity Aortic Bioprosthesis in Patient with Bicuspid Aortic Valve. Bicuspid aortic valve (BAV) is generally considered to be a contraindication to sutureless aortic valve replacement (AVR). 
Implantation of the Edwards Intuity aortic bioprosthesis is an innovative approach associated with superior hemodynamic performance, significantly reduced myocardial ischaemia and cardiopulmonary bypass times, and proves to be suitable for replacement of type 1 and type 2 bicuspid aortic valves. We report a case of successful AVR using a fast deployment bioprosthesis, the Edwards Intuity Valve System, in a 67-year-old woman with a bicuspid aortic valve and concomitant severe aortic stenosis. abstract_id: PUBMED:34283394 Sutureless aortic valve with supracoronary ascending aortic replacement as an alternative strategy for composite graft replacement in elderly patients. Aortic valve disease is frequently associated with ascending aorta dilatation and can be treated either by separate replacement of the aortic valve and ascending aorta or by a composite valve graft. The type of surgery depends on the exact location of the aortic dilatation and the concomitant valvular procedures required. The evidence for elective aortic surgery in elderly high-risk patients remains challenging and therefore alternative strategies could be warranted. We propose an alternative strategy for the treatment of ascending aortic aneurysm and aortic valve pathology with the use of a sutureless, collapsible, stent-mounted aortic valve prosthesis. abstract_id: PUBMED:30348270 Sutureless aortic bioprosthesis replacement in elderly Asian patients with aortic stenosis: Experience in a single institution. Background: Sutureless aortic valve replacement (SU-AVR) has emerged as a promising alternative for the treatment of patients with aortic valve stenosis. This study aims to assess the safety and efficacy of SU-AVR in an elderly Asian population. Methods: From June 2015 to May 2016, 15 adults with severe aortic stenosis (9 females) with a median age of 79 years underwent Perceval sutureless bioprosthesis (LivaNova, UK) implantation in a single Taiwanese institution; peri-operative recovery, clinical improvement, and valve performance were analyzed. Results: Three (20%) patients underwent concomitant procedures (coronary artery bypass grafting, 1 patient; maze, 2 patients) and 6/12 (50%) patients underwent J-ministernotomy for isolated SU-AVR. Median cardiopulmonary bypass and cross-clamp time were 105 min and 69 min, respectively. All sutureless bioprostheses were implanted successfully without conversion to a traditional valve, but 2 patients (13.3%) needed intraoperative valve repositioning because of paravalvular leakage. Median extubation time and intensive care unit stay were 5 h and 2 days, respectively. One patient experienced in-hospital mortality due to sudden collapse thought secondary to high degree atrioventricular block. Serial echocardiographic evaluations were performed preoperatively and at 1, 3, and 6 months postoperatively. The final echocardiographic exams showed nothing greater than mild aortic insufficiency and the median mean trans-valvular gradient was 13.2 (range, 6.0-26.3) mmHg. Conclusions: By simplified procedure and improved hemodynamics, SU-AVR can be implanted safely in an elderly Asian population with excellent valvular performance. Answer: A bioprosthesis with a rigid stent can be a good choice for aortic valve replacement (AVR) in elderly patients, as evidenced by the satisfactory outcomes reported in several studies.
A retrospective review of 35 patients who underwent AVR with a stented bioprosthesis (Biocor, St Jude Medical) showed no early mortality, no sign of structural valve deterioration or valve thrombosis, and a mean postoperative gradient of 14.1 mmHg one month after surgery, which is comparable to the gradient of stentless valves (PUBMED:12238259). This suggests that stented bioprostheses can provide excellent results in the elderly. However, it is important to note that stentless bioprostheses have also been shown to have superior hemodynamic performance compared to stented bioprostheses. A clinical trial comparing stentless porcine aortic bioprostheses to Hancock II bioprostheses found that gradients across the stentless bioprosthesis were significantly lower and effective aortic valve areas were significantly larger (PUBMED:2294343). This indicates that stentless bioprostheses may offer some hemodynamic advantages. For elderly patients at high surgical risk or with contraindications to surgery, transcatheter aortic valve replacement (TAVR) is an alternative that allows implantation of a prosthetic heart valve within the diseased native aortic valve without open heart surgery (PUBMED:22916051). TAVR has been confirmed as safe and effective in this patient population. Furthermore, the valve-in-valve (V-in-V) procedure, which involves implanting a transcatheter stent-valve within a degenerated bioprosthesis, represents a valid alternative to redo surgery in elderly patients with high-risk surgical profiles (PUBMED:27359372). This minimally invasive approach is increasingly used for degenerated sutureless bioprostheses, such as the Perceval valve, with good clinical outcomes and hemodynamic performance reported (PUBMED:37834910). In summary, while both stented and stentless bioprostheses can be suitable for AVR in elderly patients, the choice should be individualized based on the patient's specific condition, surgical risk, and expected valve durability in relation to their life expectancy. Stentless bioprostheses may offer better hemodynamics, but stented bioprostheses have also shown good results. Additionally, TAVR and V-in-V procedures are valuable options for high-risk elderly patients or those with degenerated bioprostheses.
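Several abstracts above quote Doppler-derived transvalvular gradients and effective orifice areas. For reference, these are computed from the simplified Bernoulli equation and the continuity equation; the sketch below shows the arithmetic with hypothetical post-AVR measurements chosen to land near the gradients reported for stented bioprostheses.

```python
import math

def peak_gradient_mmhg(peak_velocity_m_s: float) -> float:
    """Simplified Bernoulli equation: delta-P [mmHg] = 4 * v^2 [m/s]."""
    return 4.0 * peak_velocity_m_s ** 2

def effective_orifice_area_cm2(lvot_diameter_cm: float,
                               vti_lvot_cm: float,
                               vti_valve_cm: float) -> float:
    """Continuity equation: EOA = CSA_LVOT * VTI_LVOT / VTI_valve."""
    csa_lvot = math.pi * (lvot_diameter_cm / 2.0) ** 2
    return csa_lvot * vti_lvot / vti_valve

# Hypothetical post-AVR echo measurements (not from any study above).
print(f"peak gradient: {peak_gradient_mmhg(1.9):.1f} mmHg")            # 14.4 mmHg
print(f"EOA: {effective_orifice_area_cm2(2.1, 22.0, 46.0):.2f} cm^2")  # ~1.66 cm^2
```

The continuity equation works because flow through the left ventricular outflow tract must equal flow through the valve; dividing the LVOT stroke volume by the transvalvular velocity-time integral therefore yields the effective orifice area.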
Instruction: Delayed Primary Closure of Fasciotomy Incisions in the Lower Leg: Do We Need to Change Our Strategy? Abstracts: abstract_id: PUBMED:25536212 Delayed Primary Closure of Fasciotomy Incisions in the Lower Leg: Do We Need to Change Our Strategy? Objectives: The primary purpose of this study is to determine whether a strategy of bringing patients back to the operating room for successive debridements allows for the eventual delayed primary closure (DPC) of fasciotomy wounds. Design: Retrospective cohort study. Data were collected from medical records and radiographs. Setting: Two urban level 1 trauma centers. Patients: One hundred four adult patients with acute compartment syndrome in the setting of a tibia fracture (open or closed). Intervention: All patients underwent decompressive fasciotomies with closure by either DPC or split-thickness skin grafting (STSG) during a subsequent surgical procedure. Main Outcome Measure: Number of fasciotomy wounds closed by DPC after the initial fasciotomy procedure. Results: Of the 104 patients brought to the operating room for their first debridement after their fasciotomies, 19 patients (18%) were treated with DPC, whereas 42 patients (40%) were closed with STSG because they were believed to be too swollen to allow for primary closure by the treating surgeon. Three of the remaining 43 patients were treated with DPC during their second debridement. No patients who underwent more than 2 washouts could be treated with DPC. No patients who sustained open fractures were able to be closed by DPC (P = 0.02). Patients who underwent STSG on their first postfasciotomy procedure had a significantly shorter hospital stay than patients who underwent additional procedures before closure (12.2 vs. 17.4 days; P = 0.005). Conclusions: Fasciotomy wounds that are not able to be primarily closed during their first postfasciotomy surgical procedure are rarely closed through DPC techniques. Early skin grafting of these wounds should be considered, especially in the clinical setting of an open injury, because it significantly decreases the length of hospital stay. Other techniques that avoid repeated debridements and attempted closures might also help reduce hospital stay. Level Of Evidence: Therapeutic Level IV. abstract_id: PUBMED:31876698 Management of Fasciotomy Incisions After Acute Compartment Syndrome: Is Delayed Primary Closure More Feasible in Children Compared With Adults? Background: Recent adult literature has demonstrated that in the setting of acute compartment syndrome (ACS), if fasciotomy wounds are not closed after the first debridement, they are unlikely to be closed via delayed primary closure (DPC). The purpose of this study was to report the success of DPC through serial debridement in children with fasciotomy wounds secondary to ACS and to determine whether length of hospital stay is negatively affected by adopting a DPC strategy. Methods: We identified all patients treated with fasciotomy for ACS (aged 0 to 18 y). Patient, injury, and treatment characteristics were summarized by fasciotomy treatment type. Patients were grouped as: primary closure, DPC, and flap or skin graft (F/SG). For patients who required additional debridements after initial fasciotomy, treatment success was defined as closure by DPC (without requiring F/SG).
Multivariable logistic regression was used to determine factors associated with additional surgeries, complications, and treatment success. Results: A total of 82 children underwent fasciotomies for ACS. Fifteen (18%) patients were treated with primary closure at the time of their initial fasciotomy and were excluded from the remainder of the analysis; 48 (59%) patients underwent DPC, and 19 (23%) patients were treated with F/SG. The majority of delayed fasciotomy wounds were successfully closed by DPC (48/67, 72%) and the rate of successful closure remained consistent with each successive operative debridement. There were no differences across DPC and F/SG groups with respect to age, method of injury, or injury severity. Patients who underwent F/SG remained in the hospital for an average of 12 days compared with 8 days for those who underwent DPC (P < 0.001). Conclusions: In the setting of ACS, pediatric fasciotomy wounds that are not closed after the first postfasciotomy debridement still have a high likelihood of being closed through DPC with serial surgical debridement. In children, persisting with a DPC strategy for fasciotomy closure after ACS is more successful than it is in adults. Level Of Evidence: Level III. abstract_id: PUBMED:10716046 A new technique for delayed primary closure of fasciotomy wounds. Fasciotomy for compartment syndrome in the lower limb is a surgical emergency to preserve future limb function. The advised standard procedure involves both medial and lateral dermotomy in addition to the fasciotomy. There is often concern before and after performing fasciotomy about the cosmetic appearance and prolonged hospital stay if split skin grafting is required to cover the resultant skin defect. This is the case in over 50% of lower limb fasciotomies. We have used a technique of subcuticular prolene suture, first described for the delayed primary closure of contaminated abdominal wounds, in six patients who had undergone lower limb fasciotomies. In all of these cases delayed primary closure was easily achieved without the need for skin grafting. Experiments using a synthetic skin model have shown a 60% reduction in suture tension when compared with interrupted vertical mattress suturing. The subcutaneous prolene suture has the advantage of being both the method of approximation and final closure whilst spreading tension evenly across the wound edges without causing skin edge necrosis. It appears to be simpler and more economical than any technique so far described for the successful delayed primary closure of fasciotomy wounds. abstract_id: PUBMED:28127967 Fasciotomy closure using negative pressure wound therapy in lower leg compartment syndrome. Background: Fasciotomy wounds can be a major contributor to length of stay for patients as well as a difficult reconstructive challenge. Objectives: To evaluate lower leg fasciotomy wound closure outcomes comparing treatment with combined dressing fabric (COM) and negative pressure wound therapy (NPWT) in combination with elastic dynamic ligature (EDL). Methods: Retrospective study of 63 patients who underwent lower leg fasciotomy due to injury, treated from January 2008 to December 2015 at the Department of Trauma Surgery, University Hospital Brno. Of these fasciotomy wounds, 42 received NPWT treatment in combination with EDL and 21 were treated only with COM. Fasciotomy wounds were closed with primary suture or, in cases of persisting oedema and skin retraction, the defect was covered with a split-thickness skin graft.
Results: There was a statistically significantly higher rate of primary wound closure using the NPWT versus traditional dressing (p = 0.015). The median time to definitive wound closure or skin grafting was shorter in the NPWT group. The number of dressing changes was lower in the NPWT group (p = 0.008). Conclusion: NPWT combined with elastic dynamic ligature offers many advantages for fasciotomy wound closure in comparison with traditional techniques (Tab. 5, Fig. 3, Ref. 21). abstract_id: PUBMED:16643918 Delayed primary closure of fasciotomy wounds with Wisebands, a skin- and soft tissue-stretch device. Background: Fasciotomy incisions for limb compartment syndrome usually cannot be closed primarily. The conventional method of wound closure with split-thickness skin grafting is effective, but it results in an insensate and disfiguring wound and is associated with donor site morbidity. We present our experience in delayed primary closure of fasciotomy wounds with Wisebands (WB), a skin- and soft tissue-stretching device. Patients: Between 2000 and 2003, we treated 16 patients with extremity fasciotomy wounds for which primary closure was not feasible. Results: The Wisebands devices achieved controlled stretching of the wound edges, including skin and underlying soft tissue, until primary closure was feasible. Fourteen patients (88%) had successful wound closure, two patients (12%) had minor wound complications that did not necessitate the removal of the device, and two patients had local wound complications (infection, intractable pain) and their devices were removed prematurely. Delayed primary closure was achieved at the initial surgery using intraoperative skin stretching in 3 of the 14 cases (21%). After a 2-year follow-up (1.3-4 years), the treated area showed stable scarring with good aesthetic outcome and no functional deficit. Conclusions: The Wisebands device facilitates closure of fasciotomy wounds with low complication rates and good functional and aesthetic outcome. Its application is simple and safe and requires a short learning curve. Nevertheless, appropriate patient selection, intraoperative judgment and close postoperative supervision are essential for optimal results. abstract_id: PUBMED:9095421 Primary closure of fasciotomy incisions with a skin-stretching device in patients with burn and trauma. Closure of fasciotomy wounds is often a clinical problem after successful management of compartment syndrome. Commonly, split-thickness skin grafts or regional composite grafts are used for fasciotomy closure. However, functional and cosmetic results would be improved if primary reapproximation of these wounds were more practical. The main obstacle that must be overcome is excessive tension on the wound edges. A recently developed skin-stretching device (Sure-Closure, Life Medical Sciences, Princeton, N.J.) allows large tissue defects to be closed with approximation of the wound edges. In this report we describe two patients in whom closure of the fasciotomy incisions was successfully accomplished with the skin-stretching device. These patients included an 11-month-old girl with a circumferential burn of the left arm, and a 42-year-old woman involved in a motor vehicle accident who sustained frostbite and crush injury to her left upper extremity without bone fractures. The skin-stretching device produced excellent functional and cosmetic wound closure results and eliminated the need for additional operative procedures.
abstract_id: PUBMED:24421654 Elevation as a treatment for fasciotomy wound closure. There are currently numerous techniques described in the literature that attempt to optimize wound closure following a fasciotomy. However, primary closure of fasciotomy wounds continues to be difficult to accomplish successfully because of the underlying edema sustained from the compartment syndrome. The approach described in the present report is simple and physiologically sound, and addresses the underlying pathology. The authors focus on alleviating edema by strictly elevating the limb, followed by primary closure. Twelve consecutive fasciotomy wounds, referred from 2005 to 2012, were closed using this approach. The average wound closure time was 3.4 days (range three to five days) following the initial consultation. All 12 fasciotomy wounds responded with no revisions, complications, failures or loss of skin sensation. The approach was successful in all anatomical locations that were closed, and conversion to any techniques currently available in the literature was not necessary. There are no costs associated with this approach, making it practical in settings with limited resources. It has a high success rate, superior cosmetic results and, most importantly, it achieves an efficient closure time. Therefore, this approach is superior to current techniques and should be a part of a plastic surgeon's armamentarium. abstract_id: PUBMED:28176601 Fasciotomy closure techniques. We evaluated the risks and success rates of the three major techniques for compartment syndrome fasciotomy closure by reviewing all literature published to date. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we systematically evaluated the Medline (PubMed) database until July 2015, utilizing the Boolean search string "compartment syndrome OR fasciotomy closure." Two authors independently assessed all studies published in the literature to ensure validity of extracted data. The data were compiled into an electronic spreadsheet, and the wound closure rate with each technique was assessed utilizing a random-effects proportion model. Success was defined as all wounds that could be closed without skin grafting, amputation, or death. The highest success rate was observed for dynamic dermatotraction and gradual suture approximation, whereas vacuum-assisted closure had the lowest complication rate. abstract_id: PUBMED:33161435 Utility of Shoelace Technique in Closure of Fasciotomy Wounds in Electric Burns. Fasciotomy is indicated to relieve compartment syndrome caused by electric burns. Many techniques are available to close the fasciotomy wounds, including vacuum-assisted closure, skin grafting, and healing by secondary intention. This study assessed the shoelace technique in fasciotomy wound closure in patients with electric burns. The study included 19 fasciotomy wounds that were treated by shoelace technique (Group ST, n = 10 fasciotomy wounds) or by skin grafting/healing by secondary intention (Group C, n = 9 fasciotomy wounds). Data were collected for wound surface area, time to intervention, time to wound closure, rate of decrease in wound surface area after application of the shoelace technique and associated complications. The mean time to intervention after fasciotomy was significantly lower in Group ST (7.6 ± 3.8 days) as compared to 15.8 ± 5.3 days in Group C (P = .004).
The median time to closure was also significantly lower in Group ST (7 days, range 6-10) as compared to Group C (20 days, range 12-48) (P < .001). Primary closure was achieved in 80% of cases in Group ST and no complications were recorded. The shoelace technique is an economical, fast, and effective method of fasciotomy wound closure in electric burns, especially in high volume centers and resource-limited areas. abstract_id: PUBMED:30014272 Comparison of lower extremity fasciotomy wound closure techniques in children: vacuum-assisted closure device versus temporary synthetic skin replacement. Purpose: No clear consensus on the optimal treatment of fasciotomy wounds due to acute compartment syndrome of the lower leg in children exists. We therefore compared two commonly used methods to close fasciotomy wounds, Epigard, a temporary synthetic skin replacement (SSR), and the vacuum-assisted closure (V.A.C.) device, in respect of treatment duration and complication rates. Methods: We studied the cases of 27 patients who were treated at our institution for acute compartment syndrome of the lower leg with a fasciotomy over a 10-year period. The fasciotomy wound was treated with either SSR or the V.A.C. device. We recorded the number of procedures to definitive wound closure, days to wound closure, hospitalization days and sequelae rate. Results: In the V.A.C. device group (18 patients) the mean number of procedures until definitive wound closure was 3.1, mean days until wound closure was 9.4 and mean days of hospitalization was 16.2. One patient suffered from a wound infection and one patient required a full thickness skin graft. In the SSR group (9 patients), the mean number of procedures was 1.8, mean days until definitive wound closure was 4.9 and mean days of hospitalization was 9.9. No sequelae were recorded. There was a statistically significant smaller number of procedures (p value 0.018), fewer days to definitive wound closure (p value 0.002) and fewer hospitalization days (p value 0.005) in the SSR group. Conclusions: Both SSR and the V.A.C. device are safe and reliable for closure of fasciotomy wounds in children, whereas SSR seems to lead to a shorter time until definitive wound closure. Answer: The question of whether there is a need to change the strategy for delayed primary closure (DPC) of fasciotomy incisions in the lower leg is addressed by several studies, which provide insights into the effectiveness of various closure techniques and the factors that influence their success. A retrospective cohort study at two urban level 1 trauma centers involving 104 adult patients with acute compartment syndrome in the setting of a tibia fracture found that fasciotomy wounds not closed during the first postfasciotomy surgical procedure are rarely closed through DPC techniques. The study suggests that early skin grafting should be considered, especially in the setting of an open injury, as it significantly decreases the length of hospital stay (PUBMED:25536212). In contrast, a study on pediatric patients with fasciotomy wounds secondary to acute compartment syndrome reported that DPC through serial debridement is more successful in children compared to adults. The study found that 72% of delayed fasciotomy wounds in children were successfully closed by DPC, and the rate of successful closure remained consistent with each successive operative debridement (PUBMED:31876698).
Various techniques have been described for facilitating DPC, including the use of subcuticular prolene suture, which has been shown to reduce suture tension and avoid skin edge necrosis (PUBMED:10716046), and negative pressure wound therapy (NPWT) combined with elastic dynamic ligature, which has been associated with a higher rate of primary wound closure and fewer dressing changes (PUBMED:28127967). The use of skin- and soft tissue-stretch devices, such as Wisebands, has also been reported to facilitate closure of fasciotomy wounds with low complication rates and good functional and aesthetic outcomes (PUBMED:16643918). Additionally, a skin-stretching device has been used successfully in patients with burn and trauma, producing excellent functional and cosmetic results (PUBMED:9095421). Elevation of the limb to alleviate edema followed by primary closure has been described as a simple and effective approach with a high success rate and superior cosmetic results (PUBMED:24421654). A systematic review of fasciotomy closure techniques found that dynamic dermatotraction and gradual suture approximation had the highest success rate, while NPWT had the lowest complication rate (PUBMED:28176601).
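As a quick arithmetic check on the adult-versus-pediatric contrast drawn above, the headline proportions can be recomputed directly from the raw counts given in the abstracts. The short Python sketch below does only that; it is not a pooled statistical analysis, since the two cohorts (adults in PUBMED:25536212, children in PUBMED:31876698) differ in design and are not directly comparable.

# Recompute the closure proportions quoted in the answer from raw counts.
adult_dpc_first, adult_total = 19, 104   # adults closed by DPC at the first post-fasciotomy procedure
child_dpc, child_delayed = 48, 67        # children with delayed wounds eventually closed by DPC

print(f"adults closed by DPC at first debridement: {adult_dpc_first / adult_total:.0%}")  # 18%
print(f"children eventually closed by DPC: {child_dpc / child_delayed:.0%}")              # 72%
print(f"pediatric hospital-stay difference, F/SG vs DPC: {12 - 8} days")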
Instruction: Do our patients understand? Abstracts: abstract_id: PUBMED:8678391 Factors associated with do-not-resuscitate orders: patients' preferences, prognoses, and physicians' judgments. SUPPORT Investigators. Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatment. Background: Medical treatment decisions should be based on the preferences of informed patients or their proxies and on the expected outcomes of treatment. Because seriously ill patients are at risk for cardiac arrest, examination of do-not-resuscitate (DNR) practices affecting them provides useful insights into the associations between various factors and medical decision making. Objective: To examine the association between patients' preferences for resuscitation (along with other patient and physician characteristics) and the frequency and timing of DNR orders. Design: Prospective cohort study. Setting: 5 teaching hospitals. Patients: 6802 seriously ill hospitalized patients enrolled in the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatment (SUPPORT) between 1989 and 1994. Measurements: Patients and their surrogates were interviewed about patients' cardiopulmonary resuscitation preferences, medical records were reviewed to determine disease severity, and a multivariable regression model was constructed to predict the time to the first DNR order. Results: The patients' preference for cardiopulmonary resuscitation was the most important predictor of the timing of DNR orders, but only 52% of patients who preferred not to be resuscitated actually had DNR orders written. The probability of surviving for 2 months was the next most important predictor of the timing of DNR orders. Although DNR orders were not linearly related to the probability of surviving for 2 months, they were written earlier and more frequently for patients with a 50% or lower probability of surviving for 2 months. Orders were written more quickly for patients older than 75 years of age, regardless of prognosis. After adjustment for these and other influential patient characteristics, the use and timing of DNR orders varied significantly among physician specialties and among hospitals. Conclusions: Patients' preferences and short-term prognoses are associated with the timing of DNR orders. However, the substantial variation seen among hospital sites and among physician specialties suggests that there is room for improvement. In this study, DNR orders were written earlier for patients older than 75 years of age, regardless of prognosis. This finding suggests that physicians may be using age in a way that is inconsistent with the reported association between age and survival. The process for making decisions about DNR orders needs to be improved if such orders are to routinely and accurately reflect patients' preferences and probable outcomes. abstract_id: PUBMED:8687264 Factors associated with change in resuscitation preference of seriously ill patients. The SUPPORT Investigators. Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments. Background: During serious illness, patient preferences regarding life-sustaining treatments play an important role in medical decisions. However, little is known about life-sustaining preference stability in this population or about factors associated with preference change. 
Methods: We evaluated 2-month cardiopulmonary resuscitation (CPR) preference stability in a cohort of 1590 seriously ill hospitalized patients at 5 acute care teaching hospitals. Using multiple logistic regression, we measured the association of patient demographic and health-related factors (quality of life, function, depression, prognosis, and diagnostic group) with change in CPR preference between interviews. Results: Of 1590 patients analyzed, 73% of patients preferred CPR at baseline interview and 70% chose CPR at follow-up. Preference stability was 80% overall: 85% in patients initially preferring CPR and 69% in those initially choosing do not resuscitate (DNR). For patients initially preferring CPR, older age, non-African American race, and greater depression at baseline were independently associated with a change to preferring DNR at follow-up. For patients initially preferring DNR, younger age, male gender, less depression at baseline, improvement in depression between interviews, and an initial admission diagnosis of acute respiratory failure or multiorgan system failure were associated with a change to preferring CPR at follow-up. Among patients initially preferring DNR, those with substantial improvements in depression score between interviews were more than 5 times as likely to change preference to CPR as were patients with substantial worsening in depression score. Conclusions: More than two thirds of seriously ill patients prefer CPR for cardiac arrest and 80% had stable preferences over 2 months. Factors associated with preference change suggest that depression may lead patients to refuse life-sustaining care. Providers should evaluate mood state when eliciting patients' preferences for life-sustaining treatments. abstract_id: PUBMED:30062033 The ability to obtain, appraise and understand health information among undergraduate nursing students in a medical university in Chongqing, China. Aim: The aim of this study was to survey the ability of nursing students to obtain, appraise and understand health information and its influencing factors among undergraduate nursing students in a medical university in Chongqing, China. Design: A cross-sectional survey. Method: The sample was obtained using stratified sampling methods. We used the internationally validated Health Literacy Questionnaire. Six hundred and fifteen (76.88%) of 800 nursing students participated and completed anonymous questionnaires that measured their ability to obtain, appraise and understand health information. Results: Mean scores of nursing students to obtain, appraise and understand health information were 17.13, 13.07 and 17.78 respectively. Academic level, parental educational level and socioeconomic status were significantly associated with scores in obtaining, appraising and understanding health information. abstract_id: PUBMED:3940836 Do doctors understand their patients? The extent to which doctors understand the complaints of their patients has been investigated by comparing patient complaints with the observation and assessment of these by the doctor. This involved questioning 259 patients and 30 doctors. From the 259 patients, three groups (a total of 57 patients) were formed, having common main symptomatologies. The extent to which the complaints of each of these patients corresponded with the clinical assessment was evaluated.
The results revealed a close agreement between patient complaints and clinical assessment, not only for those patients with organo-medically diffuse symptoms (feeling of illness, tiredness, nervousness) but also for those with organ-medically clearly defined, localised (cardiac and thoracic pain) and mixed diffuse-localised complaints. abstract_id: PUBMED:18694265 Do patients understand how PHRs work? As Personal Health Records (PHRs) gain momentum, designers need to ensure that users understand the functions and benefits of PHRs. This study examines patients' readiness to use PHRs. Results show that although most participants envision themselves using the system, five areas of concern about the use of PHRs remain. The gap between current and ideal understanding highlights the need for formalized methods to help patients understand what PHRs are and how to use them. abstract_id: PUBMED:10809474 Communication and decision-making in seriously ill patients: findings of the SUPPORT project. The Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments. Objectives: The Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT) represents one of the largest and most comprehensive efforts to describe patient preferences in seriously ill patients, and to evaluate how effectively patient preferences are communicated. Our objective was to review findings from SUPPORT describing the communication of seriously ill patients' preferences for end-of-life care. Methods: We identified published reports from SUPPORT describing patient preferences and the communication of those preferences. We abstracted findings that addressed each of the following questions: What patient characteristics predict patient preferences for end of life care? How well do physicians, nurses, and surrogates understand their patients' preferences, and what variables are correlated with this understanding? Does increasing the documentation of existing advance directives result in care more consistent with patients' preferences? Results: Patients who are older, have cancer, are women, believe their prognoses are poor, and are more dependent in ADL function are less likely to want CPR. However, there is considerable variability and geographic variation in these preferences. Physician, nurse, and surrogate understanding of their patient's preferences is only moderately better than chance. Most patients do not discuss their preferences with their physicians, and only about half of patients who do not wish to receive CPR receive DNR orders. Factors other than the patients' preferences and prognoses, including the patient's age, the physician's specialty, and the geographic site of care were strong determinants of whether DNR orders were written. In SUPPORT patients, there was no evidence that increasing the rates of documentation of advance directives results in care that is more consistent with patients' preferences. Conclusions: SUPPORT documents that physicians and surrogates are often unaware of seriously ill patients' preferences. The care provided to patients is often not consistent with their preferences and is often associated with factors other than preferences or prognoses. Improving these deficiencies in end-of-life care may require systematic change rather than simple interventions. abstract_id: PUBMED:16272918 Do your patients understand? Determining your patients' health literacy skills. 
Despite teaching endeavors, nurses are constantly faced with patients who do not understand how to manage their healthcare. This problem has come to the forefront of healthcare issues. As a society, there is concern that despite medical advances, progress with healthcare may be in jeopardy because the skills needed by patients to manage their care are insufficient. This issue is affected by many factors. One of the most prominent factors is the lack of patient health literacy skill assessment. One of the first and most basic parts of the nursing process is to assess the patient. To teach patients, we must identify their learning needs, but the assessment cannot stop there. Nurses need to know patients' health literacy skills so that they can teach them in the best manner possible. This article provides specific information on health literacy assessment tools and the skills needed by nurses to use these tools. Each nurse must decide what tools will work for his or her patients, so that in the end, each patient will understand how to manage his or her healthcare. abstract_id: PUBMED:9107618 Is experience a good teacher? How interns and attending physicians understand patients' choices for end-of-life care. SUPPORT Investigators. Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments. Background: Recent studies have shown that physicians do not accurately assess patients' health status or treatment preferences. Little is known, however, about how physicians' levels of training or experience relate to their abilities to assess these preferences. To better understand this phenomenon, the authors compared the abilities of medical interns and attending physicians to predict the choices of their adult patients for end-of-life care. Methods: 230 seriously ill adult inpatients were surveyed about their desires for cardiopulmonary resuscitation, their current quality of life, and their attitudes toward six other common adverse outcomes. The medical intern and attending physician who cared for these patients were asked to estimate the patient's responses for all of the same items. Agreement was assessed using the kappa statistic. Results: Compared with interns, attending physicians had known patients longer, had talked with patients more frequently about prognosis, and felt they knew more about their patients' preferences (all p < .0001). Despite this, the attending physicians were no more accurate than the interns in assessing patients' preferences. Both interns and attending physicians had only a fair understanding of patients' preferences for cardiopulmonary resuscitation or their quality of life (kappa statistics 0.32 to 0.47), and even less understanding of their willingness to tolerate adverse outcomes (kappa statistics -0.03 to 0.37). Conclusions: For this cohort of seriously ill patients, neither medical interns nor their attending physicians were consistently accurate in assessing patients' preferences, and attending physicians were not more accurate than medical interns. Attending physicians should not assume that they can infer patients' preferences any better than the interns caring for these hospitalized patients. abstract_id: PUBMED:10809465 Dying with end stage liver disease with cirrhosis: insights from SUPPORT. Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatment. Objectives: To understand patterns of care and end-of-life preferences for patients dying with end stage liver disease with cirrhosis (ESLDC).
Methods: Data were collected during the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatment (SUPPORT), a prospective cohort study of seriously ill hospitalized adults at five teaching hospitals in the United States, and included all patients enrolled in SUPPORT with ESLDC. Results: Of 575 patients with ESLDC, 166 died during index hospitalization, and 168 died in the following year. The majority were male (65%) and white (80%); the median age was 52 years. Most rated their quality of life as poor or fair, and multiple comorbidities were common. Most spent their last few days completely disabled. Families often reported loss of most income and the need to leave work or other activities in order to care for patients. Pain was at least moderately severe most of the time in one-third of patients. End-of-life preferences were not associated with survival. Most patients (66.8%) preferred CPR, but DNR orders and orders against ventilator use increased near death. Conclusions: Patients with liver disease were young, likely to be male, and often had low incomes. The high burden of pain was comparable to that reported for patients with lung and colon cancer. Persons dying with liver disease may benefit from increased attention to relief of symptoms, improved home care, and advanced care planning. abstract_id: PUBMED:9758029 Surrogates' agreement with patients' resuscitation preferences: effect of age, relationship, and SUPPORT intervention. Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatment. Purpose: The purpose of this study was to evaluate an intervention to improve patient-surrogate agreement on end-of-life resuscitation preferences. Materials And Methods: Seven hundred seventeen patients with a 50% 6-month survival rate and their surrogate decision-makers were recruited for a randomized clinical trial from five teaching hospitals participating in the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT). Intervention patients (n=386) were assigned specially trained nurses who spent extra time with patients and families explaining prognoses and treatments. Control patients (n=331) received usual care. Patient preferences and surrogate's perception of those preferences at pre- and post-intervention interviews were compared. Results: Agreement between patients and surrogates was 75.0% at the day 3 interview and 79.6% at the month 2 interview, increasing 4.6% (95% CI: 0.1%, 9.1%). Improvements in agreement from day 3 to month 2 were seen equally in both study groups. A multivariable analysis verified that the intervention did not have an effect on agreement and indicated a decrease in agreement among older patients and among surrogates not in the immediate family. Conclusions: The SUPPORT intervention was not successful in increasing agreement between patients and surrogates. Because of the complex issues involved in end-of-life decision-making, a more aggressive intervention may be needed. Other findings suggest that improvements in communication are particularly needed when patients are older and when the surrogate is not a patient's immediate relative. Answer: The question "Do our patients understand?" can be addressed by examining the findings from several studies that have explored patients' understanding of their health, preferences, and the medical decisions that affect them. 
Patients' understanding of their resuscitation preferences and the factors influencing these decisions have been studied extensively. The SUPPORT study found that patients' preferences for cardiopulmonary resuscitation (CPR) were the most important predictor of the timing of do-not-resuscitate (DNR) orders, but only 52% of patients who preferred not to be resuscitated actually had DNR orders written. This suggests that there may be a gap between patients' preferences and the medical orders that reflect those preferences (PUBMED:8678391). Additionally, the stability of patients' CPR preferences over time was found to be 80%, with factors such as age, race, and depression influencing changes in these preferences (PUBMED:8687264). The ability of nursing students to obtain, appraise, and understand health information was found to be influenced by academic level, parental educational level, and socioeconomic status, indicating that not all patients may have the same capacity to understand health information (PUBMED:30062033). Physicians' understanding of patients' complaints and preferences has also been examined. One study showed a close agreement between patient complaints and clinical assessment (PUBMED:3940836), while another highlighted that physicians and surrogates often do not accurately understand seriously ill patients' preferences for end-of-life care (PUBMED:10809474). Furthermore, the accuracy of physicians' assessments of patients' preferences did not significantly differ between medical interns and attending physicians (PUBMED:9107618). Patients' readiness to use Personal Health Records (PHRs) and their understanding of how PHRs work have been identified as areas needing improvement, with a gap existing between current and ideal understanding (PUBMED:18694265). In summary, while some patients do understand their health information and preferences, there are significant gaps and variability in understanding among different patient populations and between patients and their healthcare providers. Factors such as age, race, depression, socioeconomic status, and the quality of communication with healthcare providers play a role in this understanding. These findings suggest that there is a need for improved communication and education strategies to ensure that patients understand their health information and that their preferences are accurately reflected in medical decisions.
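Because the answer leans on kappa-based agreement statistics (for example, the 0.32 to 0.47 range reported in PUBMED:9107618), a minimal sketch of how Cohen's kappa is computed may help interpret phrases like "only moderately better than chance". The agreement table below is invented purely for illustration; it is not data from the cited studies.

def cohens_kappa(table):
    """Cohen's kappa for a square rater-agreement table (list of lists)."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    p_expected = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Rows: patient prefers CPR / DNR; columns: physician's prediction (hypothetical counts).
hypothetical = [[60, 15],
                [15, 10]]
print(f"kappa = {cohens_kappa(hypothetical):.2f}")  # 0.20: raw agreement is 70%, but chance alone would give 62.5%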
Instruction: Dropping of instrument by scrub team during hip and knee arthroplasty - Does patient's size matters? Abstracts: abstract_id: PUBMED:24378380 Dropping of instrument by scrub team during hip and knee arthroplasty - Does patient's size matters? Background: Medical literature suggests that hip and knee replacement surgery takes far more time to conduct in overweight and obese patients than in the general population. Reasons for the increase in operating time in obese patients are difficult positioning, more bleeding from fatty tissue and difficult retraction. One very interesting and, to our knowledge, never reported cause of increased operating time in obese patients undergoing hip and knee arthroplasty is accidental dropping of instruments onto the floor by the operating team. We looked into the relationship between patient's size and dropping of instruments during hip and knee arthroplasty. Material And Methods: A prospective cohort study was done where we included twenty-five patients with BMI <30 and 25 patients with BMI ≥30 undergoing hip and knee arthroplasty. Results: Instruments were dropped in 31 out of a total of 50 operations, giving a dropping rate of 62%; of these, 19 were in patients with BMI ≥30 and 12 were in patients with BMI <30. Inferential analysis gave a p value of 0.04997, which is statistically significant. Conclusions: Operative time in obese patients may be indirectly affected by the time taken to replace dropped instruments. We recommend that there should be a backup of instruments in centres where obese patients are undergoing arthroplasty. abstract_id: PUBMED:33517742 Influence of team composition on turnover and efficiency of total hip and knee arthroplasty. Aims: Surgical costs are a major component of healthcare expenditures in the USA. Intraoperative communication is a key factor contributing to patient outcomes. However, the effectiveness of communication is only partially determined by the surgeon, and understanding how non-surgeon personnel affect intraoperative communication is critical for the development of safe and cost-effective staffing guidelines. Operative efficiency is also dependent on high-functioning teams and can offer a proxy for effective communication in highly standardized procedures like primary total hip and knee arthroplasty. We aimed to evaluate how the composition and dynamics of surgical teams impact operative efficiency during arthroplasty. Methods: We performed a retrospective review of staff characteristics and operating times for 112 surgeries (70 primary total hip arthroplasties (THAs) and 42 primary total knee arthroplasties (TKAs)) conducted by a single surgeon over a one-year period. Each surgery was evaluated in terms of operative duration, presence of surgeon-preferred staff, and turnover of trainees, nurses, and other non-surgical personnel, controlling cases for body mass index, presence of osteoarthritis, and American Society of Anesthesiologists (ASA) score. Results: Turnover among specific types of operating room staff, including the anaesthesiologist (p = 0.011), circulating nurse (p = 0.027), and scrub nurse (p = 0.006), was significantly associated with increased operative duration. Furthermore, the presence of medical students and nursing students was associated with improved intraoperative efficiency in TKA (p = 0.048) and THA (p = 0.015), respectively.
The presence of surgical fellows (p > 0.05), vendor representatives (p > 0.05), and physician assistants (p > 0.05) had no effect on intraoperative efficiency. Finally, the presence of the surgeon's 'preferred' staff did not significantly shorten operative duration, except in the case of residents (p = 0.043). Conclusion: Our findings suggest that active management of surgical team turnover and composition may provide a means of improving intraoperative efficiency during THA and TKA. Cite this article: Bone Joint J 2021;103-B(2):347-352. abstract_id: PUBMED:9418633 Effect of a patient management system on outcomes of total hip and knee arthroplasty. Five hundred fifty-three patients undergoing hip and knee reconstructive procedures in one institution that used a patient management system were compared with a retrospective group of 340 patients undergoing similar procedures in the same institution. All procedures were performed by one surgeon and the same patient management team. Measures of length of stay, discharge disposition, and hospital charges were recorded for all patients in each subgroup of total hip arthroplasty, revision total hip arthroplasty, total knee arthroplasty, revision total knee arthroplasty, unicompartmental knee arthroplasty, and bilateral procedures. The length of stay and hospital charges were reduced significantly in all groups, whereas the percentage of patients discharged to home was unchanged. There was no significant difference in complication rates between the two groups. abstract_id: PUBMED:36571779 The impact of frailty on patient-reported outcomes following hip and knee arthroplasty. Aim: to determine the impact of frailty on patient-reported outcomes following hip and knee arthroplasty. Methods: we used linked primary and secondary care electronic health records. Frailty was assessed using the electronic frailty index (categorised: fit, mild, moderate, severe frailty). We determined the association between frailty category and post-operative Oxford hip/knee score (OHS/OKS) using Tobit regression. We calculated the proportion of patients in each frailty category who achieved the minimally important change (MIC) in OHS (≥8 points) and OKS (≥7 points) and the proportion who reported a successful outcome (hip/knee problems either 'much better' or 'a little better' following surgery). Results: a total of 42,512 people who had a hip arthroplasty and 49,208 who had a knee arthroplasty contributed data. In a Tobit model adjusted for pre-operative OHS/OKS, age, sex and quintile of index of multiple deprivation, increasing frailty was associated with decreasing post-operative OHS and OKS, respectively, β-coefficient (95% CI) in severely frail versus fit, -6.97 (-7.44, -6.49) and -5.88 (-6.28, -5.47). The proportion of people who achieved the MIC in OHS and OKS, respectively, decreased from 92 and 86% among fit individuals to 84 and 78% among those with severe frailty. Patient-reported success following hip and knee arthroplasty, respectively, decreased from 97 and 93% among fit individuals to 90 and 83% among those with severe frailty. Conclusion: frailty adversely impacts on patient-reported outcomes following hip and knee arthroplasty. However, even among those with severe frailty, the large majority achieved the MIC in OHS/OKS and reported a successful outcome. abstract_id: PUBMED:17919586 Pain management and accelerated rehabilitation for total hip and total knee arthroplasty.
Improved pain management techniques and accelerated rehabilitation programs are revolutionizing our patients' postoperative experience after total hip and knee arthroplasty. The process involves regional anesthesia with multimodal pain control using local periarticular injections in combination with enhanced patient education and accelerated rehabilitation provided by a dedicated team of surgeons, physicians, anesthesiologists, physician assistants, physical therapists, and social workers. With this system, it is now possible to achieve a painless recovery after total hip arthroplasty and total knee arthroplasty. Although this is not always the case, it was unheard of in prior years. It is our hope that future research into this area will make painful, difficult recoveries after total hip arthroplasty and total knee arthroplasty a distant memory. abstract_id: PUBMED:27601761 Orthopaedic Enhanced Recovery Programme for Elective Hip and Knee Arthroplasty - Could a Regional Programme be Beneficial? Introduction: Arthroplasty is commonplace in orthopaedic practice, and post operative pain has been shown to substantially hinder recovery and discharge from hospital. Objectives: The current study assessed a multidisciplinary, multimodal Orthopaedic ERP in terms of its effect on patient perceived post operative pain in hip and knee arthroplasty. The secondary outcome was in the form of a cost analysis. Methods: A prospective study was performed on consecutive arthroplasty patients across a 6 week period in a district orthopaedic unit. A multidisciplinary approach to devising an ERP was undertaken between anaesthetists, surgeons and physiotherapists. Domains included optimising pre-operative nutrition, anaesthetic pre-meds, standardised anaesthetic technique, standardised intra-operative technique and use of locally infiltrated anaesthetic (LIA), as well as a post operative pain regimen. The multidisciplinary team (MDT) involved physiotherapy for the patient on day 0. Demographic data and day 1 and day 2 post operative subjective pain scores using an analogue scale were recorded. Data were collated and analysed using appropriate statistical methods. A p-value of <0.05 was considered significant. Results: A total of 40 patients (25 total hip replacements and 15 total knee replacements) were included. All conformed to the ERP. Reductions in patient-reported pain scores were observed. Specifically, in total hip arthroplasty (THA), day 1 scores were not significantly improved (p=0.25); however, day 2 scores improved significantly (p=0.02). For total knee arthroplasty (TKA), both day 1 and day 2 scores improved significantly (p=0.02 and p<0.001, respectively). Analgesic requirements were not significantly different between hip and knee replacements. Early mobilization occurred in 95% of patients. Length of stay was reduced significantly in hip (1.8 days, p=0.003) and knee (1.9 days, p<0.001) replacements following ERP. Cost analysis demonstrated a potential annual saving of approximately £200,000 for the study unit if ERP was applied to all elective hip and knee arthroplasty procedures. Conclusions: The study demonstrates that a tailored, MDT-orientated ERP can be beneficial in elective hip and knee arthroplasty. Reductions in pain scores, early ambulation and facilitated early discharge are beneficial to the patient, and cost effective for the unit. Implementation across the region may result in further cost savings.
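The roughly £200,000 annual saving reported in PUBMED:27601761 is, at heart, a length-of-stay calculation. The sketch below shows the shape of that arithmetic only; the bed-day cost and annual case volume are hypothetical placeholders, not figures from the study.

# Hypothetical worked example of a length-of-stay cost saving.
los_reduction_days = 1.85   # reported reductions: ~1.8 days (hips), ~1.9 days (knees)
bed_day_cost_gbp = 400      # assumed cost of one inpatient bed-day (placeholder)
annual_cases = 270          # assumed annual elective hip/knee volume (placeholder)

saving = los_reduction_days * bed_day_cost_gbp * annual_cases
print(f"estimated annual saving: about {saving:,.0f} GBP")  # ~200,000 GBP under these assumptions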
abstract_id: PUBMED:32001083 The Effects of Bundled Payment Programs for Hip and Knee Arthroplasty on Patient-Reported Outcomes. Background: Patient-reported outcomes are essential to demonstrate the value of hip and knee arthroplasty, a common target for payment reforms. We compare patient-reported global and condition-specific outcomes after hip and knee arthroplasty based on hospital participation in Medicare's bundled payment programs. Methods: We performed a prospective observational study using the Comparative Effectiveness of Pulmonary Embolism Prevention after Hip and Knee Replacement trial. Differences in patient-reported outcomes through 6 months were compared between bundle and nonbundle hospitals using mixed-effects regression, controlling for baseline patient characteristics. Outcomes were the brief Knee Injury and Osteoarthritis Outcomes Score or the brief Hip Disability and Osteoarthritis Outcomes Score, the Patient-Reported Outcomes Measurement Information System Physical Health Score, and the Numeric Pain Rating Scale, measures of joint function, overall health, and pain, respectively. Results: Relative to nonbundled hospitals, arthroplasty patients at bundled hospitals had slightly lower improvement in Knee Injury and Osteoarthritis Outcomes Score (-1.8 point relative difference at 6 months; 95% confidence interval -3.2 to -0.4; P = .011) and Hip Disability and Osteoarthritis Outcomes Score (-2.3 point relative difference at 6 months; 95% confidence interval -4.0 to -0.5; P = .010). However, these effects were small, and the proportions of patients who achieved a minimum clinically important difference were similar. Preoperative to postoperative change in the Patient-Reported Outcomes Measurement Information System Physical Health Score and Numeric Pain Rating Scale demonstrated a similar pattern of slightly worse outcomes at bundled hospitals with similar rates of achieving a minimum clinically important difference. Conclusions: Patients receiving care at hospitals participating in Medicare's bundled payment programs do not have meaningfully worse improvements in patient-reported measures of function, health, or pain after hip or knee arthroplasty. abstract_id: PUBMED:32102120 Kinematically Aligned Total Knee Arthroplasty with Patient-Specific Instrument. Kinematically aligned total knee arthroplasty (TKA) is a new alignment technique. Kinematic alignment corrects arthritic deformity to the patient's constitutional alignment in order to position the femoral and tibial components, as well as to restore the knee's natural tibial-femoral articular surface, alignment, and natural laxity. Kinematic knee motion moves around a single flexion-extension axis of the distal femur, passing through the center of the cylindrically shaped posterior femoral condyles. Since it can be difficult to locate the cylindrical axis with conventional instruments, a patient-specific instrument (PSI) is used to align the kinematic axes. PSI was recently introduced as a new technology with the goal of improving the accuracy of the operative technique while avoiding practical issues related to the complexity of navigation and robotic systems, such as the costs and higher number of personnel required. There are several limitations to implementing kinematically aligned TKA with an implant designed for mechanical alignment. Therefore, it is important to design an implant with the optimal shape for restoring natural knee kinematics that might improve patient-reported satisfaction and function.
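Several abstracts in this section judge treatment success by whether a patient achieves a minimum clinically important difference (MCID, also called the minimally important change) on a paired pre/post score, as in PUBMED:32001083. A minimal sketch of that computation follows, using the 7-point Oxford Knee Score threshold quoted in PUBMED:32600142 below; the score pairs themselves are invented.

# Proportion of patients achieving the MCID on the Oxford Knee Score (OKS).
MCID_OKS = 7                      # threshold taken from PUBMED:32600142

preop = [20, 25, 31, 18, 27]      # hypothetical baseline OKS values
postop = [35, 30, 36, 22, 41]     # hypothetical 6-month OKS values

achieved = sum((after - before) >= MCID_OKS for before, after in zip(preop, postop))
print(f"{achieved}/{len(preop)} patients achieved the OKS MCID")  # 2/5 in this toy example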
abstract_id: PUBMED:28870302 Perioperative Pain Management in Hip and Knee Arthroplasty. Total hip and knee arthroplasty is associated with significant perioperative pain, which can adversely affect recovery by increasing risk of complications, length of stay, and cost. Historically, opioids were the mainstay of perioperative pain control. However, opioids are associated with significant downsides. Preemptive use of a multimodal pain management approach has become the standard of care to manage pain after hip and knee arthroplasty. Multimodal pain management uses oral medicines, peripheral nerve blocks, intra-articular injections, and other tools to reduce the need for opioids. Use of a multimodal approach promises to decrease complications, improve outcomes, and increase patient satisfaction after hip and knee arthroplasty. abstract_id: PUBMED:32600142 The use of patient-reported outcome measures to guide referral for hip and knee arthroplasty. Aims: To calculate how the likelihood of obtaining measurable benefit from hip or knee arthroplasty varies with preoperative patient-reported scores. Methods: Existing UK data from 222,933 knee and 209,760 hip arthroplasty patients were used to model an individual's probability of gaining meaningful improvement after surgery based on their preoperative Oxford Knee or Hip Score (OKS/OHS). A clinically meaningful improvement after arthroplasty was defined as ≥ 8 point improvement in OHS, and ≥ 7 in OKS. Results: The upper preoperative score threshold, above which patients are unlikely to achieve any meaningful improvement from surgery, is 41 for knees and 40 for hips. At lower scores, the probability of improvement increased towards a maximum of 88% (knees) and 95% (hips). Conclusion: By our definition of meaningful improvement, patients with preoperative scores above 41 (OKS) and 40 (OHS) should not be routinely referred to secondary care for possible arthroplasty. Using lower thresholds would incrementally increase the probability of meaningful benefit for those referred but will exclude some patients with potential to benefit. The findings are useful to support the complex shared decision-making process in primary care for referral to secondary care; and in secondary care for experienced clinicians counselling patients considering knee or hip arthroplasty, but should not be used in isolation. Cite this article: Bone Joint J 2020;102-B(7):941-949. Answer: Yes, patient size does matter when it comes to the dropping of instruments by the scrub team during hip and knee arthroplasty. A prospective cohort study found that instruments were dropped in 62% of the operations, with a higher incidence in patients with a BMI of 30 or greater compared with those with a BMI below 30. The study concluded that the operative time in obese patients may be indirectly affected by the time taken to replace dropped instruments, and recommended that there should be a backup of instruments in centers where obese patients are undergoing arthroplasty (PUBMED:24378380).
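The p value of 0.04997 in PUBMED:24378380 comes from an unspecified test on a 2x2 table (19 of 25 operations with a dropped instrument in the BMI >= 30 group versus 12 of 25 in the BMI < 30 group). The sketch below re-runs the usual candidates for such a table; their results bracket the published value, so which procedure the authors actually used remains an assumption.

# Re-analysis of the 2x2 instrument-dropping table from PUBMED:24378380.
from scipy.stats import chi2_contingency, fisher_exact

table = [[19, 6],    # BMI >= 30: operations with / without a dropped instrument
         [12, 13]]   # BMI < 30

chi2, p_plain, _, _ = chi2_contingency(table, correction=False)
_, p_yates, _, _ = chi2_contingency(table)   # Yates continuity correction (scipy default for 2x2)
_, p_fisher = fisher_exact(table)            # two-sided exact test

print(f"chi-square, no correction: p = {p_plain:.3f}")   # ~0.041
print(f"chi-square, Yates:         p = {p_yates:.3f}")   # ~0.080
print(f"Fisher exact, two-sided:   p = {p_fisher:.3f}")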
Instruction: Is frosting effective? Abstracts: abstract_id: PUBMED:36296847 Fabrication of Metallic Superhydrophobic Surfaces with Tunable Condensate Self-Removal Capability and Excellent Anti-Frosting Performance. Laser fabrication of metallic superhydrophobic surfaces (SHSs) for anti-frosting has recently attracted considerable attention. Effective anti-frosting SHSs require the efficient removal of condensed microdroplets through self-propelled droplet jumping, which is strongly influenced by the surface morphology. However, detailed analyses of the condensate self-removal capability of laser-structured surfaces are limited, and guidelines for laser processing parameter control for fabricating rationally structured SHSs for anti-frosting have not yet been established. Herein, a series of nanostructured copper-zinc alloy SHSs are facilely constructed through ultrafast laser processing. The surface morphology can be properly tuned by adjusting the laser processing parameters. The relationship between the surface morphologies and condensate self-removal capability is investigated, and a guideline for laser processing parameterization for fabricating optimal anti-frosting SHSs is established. After 120 min of the frosting test, the optimized surface exhibits less than 70% frost coverage because the remarkably enhanced condensate self-removal capability reduces the amount of accumulated water and the frost propagation speed (<1 μm/s). Additionally, the material adaptability of the proposed technique is validated by extending this methodology to other metals and metal alloys. This study provides valuable and instructive insights into the design and optimization of metallic anti-frosting SHSs by ultrafast laser processing. abstract_id: PUBMED:36363927 Localized Characteristics of the First Three Typical Condensation Frosting Stages in the Edge Region of a Horizontal Cold Plate. Condensation frosting usually has a negative influence on heat exchangers employed in engineering fields. As the relationships among the first three typical condensation frosting stages in the edge regions of cold plates are still unclear, an experimental study on the localized condensation frosting characteristics in the edge region of a cold plate was conducted. The edge effects on the water droplet condensation (WDC), water droplet freezing (WDF) and frost layer growth characteristics were quantitatively investigated. The results showed that the number of droplets coalescing in the edge-affected regions was around 50% greater than in the unaffected regions. At the end of the WDC stages, the area-average equivalent contact diameter and coverage area ratio of water droplets in the edge-affected regions were 2.69 times and 11.6% greater than those in the unaffected regions under natural convection, and the corresponding values were 2.24 times and 9.9% under forced convection. Compared with the unaffected regions, the WDF stage duration in the edge-affected regions decreased by 63.6% and 95.3% under natural and forced convection, respectively. Additionally, plate-type and feather-type frost crystals were, respectively, observed in natural and forced convection. The results of this study can help in the better understanding of the condensation frosting mechanism on a cold plate, which provides guidelines for optimizing the design of heat exchanger structures and system control strategies facing frosting problems.
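To put the reported frost propagation bound in perspective, the arithmetic below converts the <1 μm/s figure from PUBMED:36296847 into a maximum lateral frost-front travel over the 120-minute test; this is a back-of-envelope bound, nothing more.

# Upper bound on frost-front travel during the 120 min test.
speed_m_per_s = 1e-6        # < 1 um/s, upper bound from the abstract
test_duration_s = 120 * 60  # 120 minutes in seconds

max_travel_mm = speed_m_per_s * test_duration_s * 1e3
print(f"maximum frost-front travel: {max_travel_mm:.1f} mm")  # 7.2 mm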
abstract_id: PUBMED:35926320 Plasmonic heating of protected silver nanowires for anti-frosting superhydrophobic coating. Atmospheric frosting and icing pose significant problems for critical and common-use infrastructures. Passive anti-frosting and anti-icing strategies that require no energy input have been actively sought, with no viable and permanent solutions known yet. Bioinspired superhydrophobic (SH) materials have been considered a promising path to explore; however, the outcome has been less than compelling because of their low resistance to atmospheric humidity. In most cases, condensing water on an SH surface eventually leads to mechanical locking of ice instead of ice removal. Hybrid strategies involving some form of limited energy input are being increasingly considered, each with its own challenges. Here, we propose the application of plasmonic heating of silver nanowires (AgNWs) for remote frost removal, utilizing an SH hybrid passive-active system. This novel system comprises a durable nanocomposite covered with a hydrophobized mesh of AgNWs, protected against environmental degradation by a tin oxide (SnO2) shell. We demonstrate the frost removal ability at -10 °C and 30% RH, achieved by a combination of plasmonic heating of AgNWs with a non-sticking behavior of submicrometric droplets of molten frost on the SH surface. Heating was realized by illuminating the mesh with low-power blue laser light. Adjustment of the nanowire (NW) and shell dimensions allows the generation of surface plasmon resonance in illuminated NWs at a wavelength overlapping the emission maximum of the light used. In environmental stability tests, the nanostructures exhibited high atmospheric, mechanical, and thermal stability. The narrow-wavelength absorption of the structure in the blue light range and the reflective properties in the infrared range were designed to prevent protected surfaces from overheating in direct sunlight. abstract_id: PUBMED:30001108 Desublimation Frosting on Nanoengineered Surfaces. Ice nucleation from vapor presents a variety of challenges across a wide range of industries and applications including refrigeration, transportation, and energy generation. However, a rational comprehensive approach to fabricating intrinsically icephobic surfaces for frost formation, both from water condensation (followed by freezing) and in particular from desublimation (direct growth of ice crystals from vapor), remains elusive. Here, guided by nucleation physics, we investigate the effect of material composition and surface texturing (atomically smooth to nanorough) on the nucleation and growth mechanism of frost for a range of conditions within the sublimation domain (0 °C to -55 °C; partial water vapor pressures 6 to 0.02 mbar). Surprisingly, we observe that on silicon at very cold temperatures, below the homogeneous ice solidification nucleation limit (<-46 °C), desublimation does not become the favorable pathway to frosting. Furthermore, we show that surface nanoroughness makes frost formation on silicon more probable. We experimentally demonstrate at temperatures between -48 °C and -55 °C that nanotexture with radii of curvature within 1 order of magnitude of the critical radius of nucleation favors frost growth, facilitated by capillary condensation, consistent with Kelvin's equation.
Our findings show that such nanoscale surface morphology, imposed by design to impart desired functionalities such as superhydrophobicity, or arising from defects, can be highly detrimental for frost icephobicity at low temperatures and water vapor partial pressures (<0.05 mbar). Our work contributes to the fundamental understanding of phase transitions well within the equilibrium sublimation domain and has implications for applications such as travel, power generation, and refrigeration. abstract_id: PUBMED:38063731 Fabrication of Silver Iodide (AgI) Patterns via Photolithography and Its Application to In-Situ Observation of Condensation Frosting. This study introduces an innovative photolithography-based method for patterning ionic and inorganic particle materials such as silver iodide (AgI). Conventional methods lack precision when patterning powdered materials, which limits their applicability. The proposed method stacks layers of a particle material (AgI) and negative-tone photoresist for simultaneous ultraviolet exposure and development, resulting in well-defined AgI patterns. The sintering process successfully removed binders from the material layer and photoresist, yielding standalone AgI patterns on the Si substrate with good adhesion. The pitch remained consistent with the design values of the photomask when the pattern size was changed. In-situ observation of condensation frosting on the patterns was conducted, which confirmed the practicality of the developed patterning process. This versatile method is applicable to large areas with a high throughput and presents new opportunities for modifying functional surfaces. abstract_id: PUBMED:36897285 Infusing Silicone and Camellia Seed Oils into Micro-/Nanostructures for Developing Novel Anti-Icing/Frosting Surfaces for Food Freezing Applications. Undesired ice/frost formation and accretion often occur on food freezing facility surfaces, lowering freezing efficiency. In the current study, two slippery liquid-infused porous surfaces (SLIPS) were fabricated by spraying hexadecyltrimethoxysilane (HDTMS) and stearic acid (SA)-modified SiO2 nanoparticle (NP) suspensions, separately onto aluminum (Al) substrates coated with epoxy resin to obtain two superhydrophobic surfaces (SHS), and then infusing food-safe silicone and camellia seed oils into the SHS, respectively, achieving anti-frosting/icing performance. In comparison with bare Al, SLIPS not only exhibited excellent frost resistance and defrost properties but also showed ice adhesion strength much lower than that of SHS. In addition, pork and potato were frozen on SLIPS, showing an extremely low adhesion strength of <10 kPa, and after 10 icing/deicing cycles, the final ice adhesion strength of 29.07 kPa was still much lower than that of SHS (112.13 kPa). Therefore, the SLIPS showed great potential for developing into robust anti-icing/frosting materials for the freezing industry. abstract_id: PUBMED:32050479 The Inhibition of Icing and Frosting on Glass Surfaces by the Coating of Polyethylene Glycol and Polypeptide Mimicking Antifreeze Protein. The development of anti-icing, anti-frosting transparent plates is important for many reasons, such as poor visibility through the ice-covered windshields of vehicles. We have fabricated new glass surfaces coated with polypeptides which mimic a part of winter flounder antifreeze protein. We adopted glutaraldehyde and polyethylene glycol as linkers between these polypeptides and silane coupling agents applied to the glass surfaces.
We have measured the contact angle, the temperature of water droplets on the cooling surfaces, and the frost weight. In addition, we have conducted surface roughness observation and surface elemental analysis. It was found that peaks in the height profile, obtained with the atomic force microscope for the polypeptide-coated surface with polyethylene glycol, were much higher than those for the surface without the polypeptide. This shows the adhesion of many polypeptide aggregates to the polyethylene glycol locally. The average supercooling temperature of the droplet for the polypeptide-coated surface with the polyethylene glycol was lower than for the polypeptide-coated surface with glutaraldehyde and the polyethylene-glycol-coated surface without the polypeptide. In addition, the average weight of frost cover on the specimen was lowest for the polypeptide-coated surface with the polyethylene glycol. These results argue for the effects of combined polyethylene glycol and polypeptide aggregates on the locations of ice nuclei and condensation droplets. Thus, this polypeptide coating with polyethylene glycol is a potential contender to improve the anti-icing and anti-frosting of glasses. abstract_id: PUBMED:33647197 How Frost Forms and Grows on Lubricated Micro- and Nanostructured Surfaces. Frost is ubiquitously observed in nature whenever warmer and more humid air encounters surfaces colder than the melting point (e.g., morning dew frosting). However, frost formation is problematic as it damages infrastructure, roads, crops, and the efficient operation of industrial equipment (i.e., heat exchangers, cooling fins). While lubricant-infused surfaces offer promising antifrosting properties, underlying mechanisms of frost formation and its consequential effect on frost-to-surface dynamics remain elusive. Here, we monitor the dynamics of condensation frosting on micro- and hierarchically structured surfaces (the latter combines micro- with nanofeatures) infused with lubricant, temporally and spatially resolved using laser scanning confocal microscopy. The growth dynamics of water droplets differs for micro- and hierarchically structured surfaces, owing to hindered drop coalescence on the hierarchical ones. However, the growth and propagation of frost dendrites follow the same scaling on both surface types. Frost propagation is accompanied by a reorganization of the lubricant thin film. We numerically quantify the experimentally observed flow profile using an asymptotic long-wave model. Our results reveal that lubricant reorganization is governed by two distinct driving mechanisms, namely: (1) frost propagation speed and (2) frost dendrite morphology. These in-depth insights into the coupling between lubricant flow and frost formation/propagation enable an improved control over frosting by adjusting the design and features of the surface. abstract_id: PUBMED:28829603 Frosting Behavior of Superhydrophobic Nanoarrays under Ultralow Temperature. Retarding and preventing frost formation at ultralow temperature has an increasing importance due to a wide range of applications of ultralow-temperature fluids in aerospace and industrial facilities. Recent efforts for developing antifrosting surfaces have been mostly devoted to utilizing lotus-leaf-inspired superhydrophobic surfaces. Whether the antifrosting performance of the superhydrophobic surface is still effective under ultralow temperature has not been elucidated clearly.
Here, we investigated the frosting behavior of fabricated superhydrophobic ZnO nanoarrays under different temperatures and environments. The surface showed excellent performance in anticondensation and antifrosting when the surface temperature was approximately -20 °C. Although the frosting event inevitably occurs on all surfaces when the temperature is decreased from -50 to -150 °C, the frost accumulation on the superhydrophobic surfaces is always less than that on the untreated surfaces. Interestingly, the frost layer detaches from the surface within a short time and keeps the surface dry in the very beginning of the defrosting process. Further, there is no frost formation on the surface at -20 °C during 10 min of testing when blowing compressed air and spraying methanol together or spraying methanol individually. Spraying methanol at -150 °C reduces the height of the frost layer and increases its density. Furthermore, the frost crystals on the top surface can be blown away due to the low adhesion of ice or frost. This provides a basic idea for solving the frosting problem under ultralow temperatures when combined with other defrosting methods. abstract_id: PUBMED:31212499 Generalized lattice Boltzmann model for frosting. Frosting is a multiscale and multiphysics problem, which presents a significant challenge for numerical methods. In this study, a generalized lattice Boltzmann (LB) model is developed to simulate the frosting of humid air at representative elementary volume scale. In this model, three LB equations are introduced to describe the evolution of distribution functions for velocity, temperature, and humidity (i.e., mass fraction of water vapor in the humid air) fields, respectively. The frost layer is regarded as a porous medium, while the humid air is treated as a plain one. This unified LB model can be applied to describe the phase change and transport processes in these two subdomains seamlessly. Through the Chapman-Enskog analysis, the macroscopic equations for the frosting process can be recovered from the present LB model. Benchmark problems in conduction solidification, convection melting and frosting are simulated, and the numerical results match well with analytical or experimental solutions. Finally, this model is applied to simulate frost formation between two parallel plates, and the influences of air velocity, humidity, temperature, and cold wall temperature are evaluated. Answer: The effectiveness of frosting, in the context of the provided abstracts, refers to the ability of various surfaces or materials to resist or delay the formation of frost, which can be detrimental in many applications such as refrigeration, transportation, and energy generation. The abstracts discuss different approaches to enhance anti-frosting performance, including the fabrication of superhydrophobic surfaces, the use of lubricant-infused surfaces, and the application of coatings that mimic antifreeze proteins. For instance, the fabrication of metallic superhydrophobic surfaces through ultrafast laser processing has been shown to enhance condensate self-removal capability, which reduces water accumulation and frost propagation, thereby exhibiting excellent anti-frosting performance (PUBMED:36296847). Similarly, the use of silver iodide patterns via photolithography has been applied to in-situ observation of condensation frosting, providing a method for modifying functional surfaces to resist frosting (PUBMED:38063731).
The infusion of silicone and camellia seed oils into micro-/nanostructures has been demonstrated to develop novel anti-icing/frosting surfaces for food freezing applications, showing great potential for robust anti-icing/frosting materials in the freezing industry (PUBMED:36897285). Additionally, the coating of glass surfaces with polyethylene glycol and polypeptides mimicking antifreeze proteins has been found to improve the anti-icing and anti-frosting properties of glasses (PUBMED:32050479). Furthermore, the study of frosting behavior on superhydrophobic nanoarrays under ultralow temperatures has revealed that while frosting is inevitable at extremely low temperatures, the frost accumulation on superhydrophobic surfaces is less than on untreated surfaces, and the frost layer can detach more easily during the defrosting process (PUBMED:28829603). Overall, the abstracts suggest that frosting can be effectively mitigated or delayed through the use of engineered surfaces and coatings, which can have significant implications for applications where frost formation is a concern. However, the effectiveness of these approaches can vary based on environmental conditions and the specific design of the anti-frosting features.
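The desublimation study (PUBMED:30001108) attributes frost growth on nanotexture to capillary condensation when feature radii come within an order of magnitude of the critical nucleation radius, consistent with Kelvin's equation. A worked sketch of that estimate, r* = 2*gamma*v_m / (k_B*T*ln(S)), follows; the property values (surface tension gamma, molecular volume v_m) are rough textbook figures for supercooled water, and the supersaturation S is a made-up input, so the output is only an order-of-magnitude illustration rather than a value from the paper.

import math

def kelvin_critical_radius(T, S, gamma=0.076, v_m=3.0e-29):
    """Critical radius r* = 2*gamma*v_m / (k_B*T*ln(S)).
    T: temperature in K; S: supersaturation ratio (>1);
    gamma: surface tension in N/m (rough value for supercooled water);
    v_m: molecular volume in m^3 (approx. for water)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return 2.0 * gamma * v_m / (k_B * T * math.log(S))

# Example: -50 C (223 K) at an assumed supersaturation of 3 gives r* of
# order 1 nm, i.e., comparable to the nanoscale roughness the abstract
# reports as favoring frost growth.
print(kelvin_critical_radius(223.0, 3.0))

This back-of-the-envelope figure is consistent with the study's observation that nanorough silicon promotes frosting at low temperature, whereas smoother surfaces suppress it.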
Instruction: Timing in hip arthroscopy: does surgical timing change clinical results? Abstracts: abstract_id: PUBMED:25360345 Timing of hip fracture surgery in the elderly. The effect of preoperative wait time for surgery is a long-standing subject of debate. Although there is disagreement among clinicians on whether early surgery confers a survival benefit per se, most reports agree that early surgery improves other outcomes such as length of stay, the incidence of pressure sores, and return to independent living. Therefore, it would seem prudent to surgically treat elderly patients with hip fractures within the first 48 hours of admission. However, the current body of evidence is observational in nature and carries the potential for bias inherent in such analyses. Evidence in the form of a large randomized controlled trial may ultimately be required to fully evaluate the impact of surgical timing on patients with fractures of the hip. abstract_id: PUBMED:32158724 Timing of Hip-fracture Surgery in Elderly Patients: Literature Review and Recommendations. The incidence of hip fractures is rapidly increasing with an aging population and is now one of the most important health concerns worldwide due to a high mortality rate. The effect of delayed surgery on postoperative outcomes has been widely discussed. Although various treatment guidelines for hip fractures in the elderly exist, most institutions recommend that operations be conducted as soon as possible to help achieve the most favorable outcomes. While opinions differ on the relationship between delayed surgery and postoperative mortality, a strong association between earlier surgery and improvement in postoperative outcomes (e.g., length of hospital stay, bedsore occurrence, return to an independent lifestyle) has been reported. Taken together, performing operations for hip fractures in the elderly within 48 hours of admission appears to be best practice. Importantly, however, existing evidence is based primarily on observational studies which are susceptible to inherent bias. Here, we share the results of a literature search to summarize data that helps inform the most appropriate surgical timing for hip fractures in the elderly and the effects of delayed surgery on postoperative outcome. In addition, we expect to be able to provide a more accurate basis for these correlations through a large-scale randomized controlled trial in the future and to present data supporting recommendations for appropriate surgical timing. abstract_id: PUBMED:25306929 Relationship between admission day and timing of surgery for patients with hip fracture. Aim: This study investigated: (i) the relationship between admission day of the week and the timing of surgery; (ii) whether the admission day of the week predicted length of stay or patients' outcomes; and (iii) the relationship between the timing of surgery and mortality. Methods: This was a retrospective, observational study of two community general hospitals in Japan. The inclusion criteria were patients aged 65 years or older who had experienced a hip fracture and undergone surgery between April 2007 and March 2011. Data on demographics, care processes, and health outcomes during hospital stays were collected from hospital records. A questionnaire was sent to patients and/or their family members about the patients' health outcomes after discharge from hospital for hip fracture surgery. Results: Data were collected from a total of 714 patients.
In both hospitals, orthopedic surgery was not scheduled every day, and the admission day was significantly related to the timing of surgery. In hospital 1, the admission day explained 38.1% of the variance in the timing of surgery, and in hospital 2, it explained 8.3%. The admission day with early surgery predicted an early discharge. The admission day with delayed surgery predicted better survival. There was no significant relationship between the timing of surgery and mortality in either hospital. Conclusion: Earlier surgery, by daily operations, may reduce the length of hospital stays, but its effect on patient outcome remains unclear. It is necessary to carefully determine which patients will benefit from earlier surgery. abstract_id: PUBMED:29214329 Time to surgery after hip fracture across Canada by timing of admission. The extent of Canadian provincial variation in hip fracture surgical timing is unclear. Provinces performed a similar proportion of surgeries within three inpatient days after adjustment. Time to surgery varied by timing of admission across provinces. This may reflect different approaches to providing access to hip fracture surgery. Introduction: The aim of this study was to compare whether time to surgery after hip fracture varies across Canadian provinces for surgically fit patients and their subgroups defined by timing of admission. Methods: We retrieved hospitalization records for 140,235 patients 65 years and older, treated surgically for hip fracture between 2004 and 2012 in Canada (excluding Quebec). We studied the proportion of surgeries on admission day and within 3 inpatient days, and times required for 33%, 66%, and 90% of surgeries across provinces and by subgroups defined by timing of admission. Differences were adjusted for patient, injury, and care characteristics. Results: Overall, provinces performed similar proportions of surgeries within the recommended three inpatient days, with all provinces requiring one additional day to perform the recommended 90% of surgeries. Prince Edward Island performed 7.0% more surgeries on admission day than Ontario irrespective of timing of admission (difference = 7.0; 95% CI 4.0, 9.9). The proportion of surgeries on admission day was 6.3% lower in Manitoba (difference = -6.3; 95% CI -12.1, -0.6), and 7.7% lower in Saskatchewan (difference = -7.7; 95% CI -12.7, -2.8) compared to Ontario. These differences persisted for late weekday and weekend admissions. The time required for 33%, 66%, and 90% of surgeries ranged from 1-2, 2-3, and 3-4 days, respectively, across provinces by timing of admission. Conclusions: Provinces performed similarly with respect to recommended time for hip fracture surgery. The proportion of surgeries on admission day, and time required to complete 33% and 66% of surgeries, varied across provinces and by timing of admission. This may reflect different provincial approaches to providing access to hip fracture surgery.
Materials And Methods: The 2015-2016 American College of Surgeons - National Surgical Quality Improvement Program database was queried for patients ≥65 years of age undergoing hip fracture surgery, due to trauma, using CPT codes for total hip arthroplasty (27130), hemiarthroplasty (27125) and open reduction/internal fixation (ORIF) (27236, 27244, 27245). For each complication being studied, the median time to diagnosis was determined along with the interquartile range (IQR). Cox-regression analyses were used to assess complication timings between various surgeries. Results: A total of 31,738 patients were included in the final cohort. The median time of occurrence (days) for myocardial infarction was 2 [IQR 1-6], pneumonia 4 [IQR 2-12], stroke/CVA 3 [IQR 1-10], pulmonary embolism 5 [IQR 2-14], urinary tract infection (UTI) 8 [IQR 2-15], deep venous thrombosis (DVT) 9 [IQR 4-17], sepsis 11 [IQR 5-19], death 12 [IQR 6-20], superficial surgical site infection (SSI) 16 [IQR 12-22], deep SSI 23 [IQR 15-24] and organ/space SSI 19 [IQR 15-23]. Undergoing a THA vs. ORIF for hip fracture was associated with a relatively early occurrence of pneumonia (day 3 [IQR 1-5.25]; p = 0.029) and urinary tract infection (day 4 [IQR 1-13]; p = 0.035) and a later occurrence of organ/space SSI (day 23.5 [IQR 19.5-26.75]; p = 0.002). Conclusion: Orthopaedic trauma surgeons can utilize these data to optimize care strategies during the time periods of highest risk to prevent complications from occurring early on in the course of post-operative care. abstract_id: PUBMED:26659463 Effect of Preoperative Transthoracic Echocardiogram on Mortality and Surgical Timing in Elderly Adults with Hip Fracture. Objectives: To evaluate the effect of preoperative transthoracic echocardiogram (TTE) on mortality, postoperative complications, surgical timing, and length of stay in individuals with surgically treated hip fracture. Design: Retrospective chart review of hospital records. Setting: Level I and II trauma centers. Participants: Individuals consecutively surgically treated for hip fracture (N = 694). Measurements: Demographic and injury characteristics, operative timing, preoperative echocardiogram, complications, mortality. Primary outcome measure was in-hospital, 30-day, and 1-year mortality. Secondary outcome measures were complications (particularly cardiovascular) and time required for medical clearance and operative treatment. Results: Preoperative TTE was performed on 131 individuals (18.9%). There was no difference between the TTE group and the control group in hospital (3.8% vs 1.8%, P = .18), 30-day (6.9% vs 6.6%, P = .90), or 1-year (20.6% versus 20.1%, P = .89) mortality. There was no significant difference in major cardiac complications. Average time from admission to operative treatment was 66.5 hours in the TTE group and 34.8 hours in the control group (P < .001). Average time from admission to medical clearance was 43.2 hours in the TTE group and 12.4 hours in the control group (P < .001). The TTE group also had a significantly longer length of stay (8.68 vs 6.44 days, P < .001). Conclusion: Preoperative TTE was not associated with lower mortality in elderly adults with hip fracture in the short- or long-term postoperative period. TTE was associated with delayed surgical treatment and longer length of stay and resulted in no cardiac intervention (e.g., cardiac catheterization, stent, stress test). abstract_id: PUBMED:26284021 Change blindness in pigeons (Columba livia): the effects of change salience and timing.
Change blindness is a well-established phenomenon in humans, in which plainly visible changes in the environment go unnoticed. Recently, a parallel change blindness phenomenon has been demonstrated in pigeons. The reported experiment follows up on this finding by investigating whether change salience affects change blindness in pigeons the same way it affects change blindness in humans. Birds viewed alternating displays of randomly generated lines back-projected onto three response keys, with one or more line features on a single key differing between consecutive displays. Change salience was manipulated by varying the number of line features that changed on the critical response key. Results indicated that change blindness is reduced if a change is made more salient, and this matches previous human results. Furthermore, accuracy patterns indicate that pigeons' effective search area expanded over the course of a trial to encompass a larger portion of the stimulus environment. Thus, the data indicate two important aspects of temporal cognition. First, the timing of a change has a profound influence on whether or not that change will be perceived. Second, pigeons appear to engage in a serial search for changes, in which additional time is required to search additional locations. abstract_id: PUBMED:37287528 The Surgical Timing and Prognoses of Elderly Patients with Hip Fractures: A Retrospective Analysis. Background: Guidelines exist for the surgical treatment of hip fractures, but the association between the surgical timing and the incidence of postoperative complications and other important outcomes in elderly patients with hip fracture remains controversial. Objective: This study aims to explore the association between the surgical timing and the prognoses in elderly patients with hip fracture. Methods: A total of 701 elderly patients (age ≥ 65 years) with hip fractures who were treated in our hospital from June 2020 to June 2021 were selected. Patients who underwent surgery within 2 d of admission were assigned to the early surgery group, and those who underwent surgery after 2 d of admission were assigned to the delayed surgery group. The prognosis indices of the patients in the two groups were recorded and compared. Results: The length of postoperative hospitalisation in the early surgery group was significantly lower than that in the delayed surgery group (P < 0.001). The European quality of life questionnaire (EQ-5D) utility in the delayed surgery group was significantly lower than that in the early surgery group at 30 days and 6 months after operation (P < 0.05). Compared with the delayed surgery group, the incidence of pulmonary infection, urinary tract infection (UTI) and deep vein thrombosis (DVT) in the early surgery group was significantly lower. There were no significant differences between the two groups in terms of mortality and excellent rates of the HHS at six months after the operation. In addition, the early surgery group had a lower readmission rate than the delayed surgery group [34 (9.5%) vs 56 (16.3%), P = 0.008]. Conclusion: Earlier surgery can reduce the incidence of pulmonary infections, UTI, DVT and the readmission rate among elderly patients with hip fractures, and shorten postoperative hospitalisation.
Background: The comprehension and utilization of timing theory and behavior change can offer a more extensive and individualized provision of support and treatment alternatives for primiparas. This has the potential to enhance the psychological well-being and overall quality of life for primiparas, while also furnishing healthcare providers with efficacious interventions to tackle the psychological and physiological obstacles encountered during the stages of pregnancy and postpartum. Aim: To explore the effect of timing theory combined with behavior change on self-efficacy, negative emotions and quality of life in primiparous patients. Methods: A total of 80 primiparas admitted to our hospital between August 2020 and May 2022 were selected. These cases were divided into two groups, namely the observation group and the control group, with 40 cases in each group. The nursing interventions differed between the two groups, with the control group receiving routine nursing and the observation group receiving integrated nursing based on the timing theory and behavior change. The study aimed to compare the pre- and post-nursing scores of Chinese Perceived Stress Scale (CPSS), Edinburgh Postpartum Depression Scale (EPDS), Self-rating Anxiety Scale (SAS), breast milk knowledge, self-efficacy, and SF-36 quality of life in both groups. Results: After nursing, the CPSS, EPDS, and SAS scores of the two groups were significantly lower than before nursing, and the CPSS, EPDS, and SAS scores of the observation group were significantly lower than those of the control group (P = 0.002, P = 0.011, and P = 0.001 respectively). After nursing, the breastfeeding knowledge mastery, self-efficacy, and SF-36 quality of life scores were significantly higher than before nursing, and the breastfeeding knowledge mastery (P = 0.013), self-efficacy (P = 0.008), and SF-36 quality of life (P = 0.011) scores of the observation group were significantly higher than those of the control group. Conclusion: The integration of timing theory and behavior change theory has been found to be an effective approach in alleviating negative mood and stress experienced by primiparas, while also enhancing their self-efficacy and overall quality of life. This study focuses on the key concepts of timing theory, behavior change, primiparas, negative mood, and quality of life. abstract_id: PUBMED:24454545 Climate change and timing of avian breeding and migration: evolutionary versus plastic changes. There are multiple observations around the globe showing that in many avian species, both the timing of migration and breeding have advanced, due to warmer springs. Here, we review the literature to disentangle the actions of evolutionary changes in response to selection induced by climate change versus changes due to individual plasticity, that is, the capacity of an individual to adjust its phenology to environmental variables. Within the abundant literature on climate change effects on bird phenology, only a small fraction of studies are based on individual data, yet individual data are required to quantify the relative importance of plastic versus evolutionary responses. While plasticity seems common and often adaptive, no study so far has provided direct evidence for an evolutionary response of bird phenology to current climate change.
This assessment leads us to notice the alarming lack of tests for microevolutionary changes in bird phenology in response to climate change, in contrast with the abundant claims on this issue. In short, at present we cannot draw reliable conclusions on the processes underlying the observed patterns of advanced phenology in birds. Rapid improvements in techniques for gathering and analysing individual data offer exciting possibilities that should encourage research activity to fill this knowledge gap. Answer: The provided abstracts do not specifically address the question of surgical timing in hip arthroscopy and its impact on clinical results. The abstracts focus on hip fracture surgery in elderly patients and the timing of such surgeries (PUBMED:25360345, PUBMED:32158724, PUBMED:25306929, PUBMED:29214329, PUBMED:31528066, PUBMED:26659463, PUBMED:37287528), as well as other unrelated topics such as change blindness in pigeons (PUBMED:26284021), timing theory in nursing (PUBMED:38313635), and climate change effects on avian breeding and migration (PUBMED:24454545). However, some of the abstracts on hip fracture surgery in the elderly do provide insights that could be considered when discussing the timing of surgeries in general. For instance, it is suggested that early surgery (within 48 hours of admission) for hip fractures in the elderly may improve outcomes such as length of stay, incidence of pressure sores, and return to independent living (PUBMED:25360345, PUBMED:32158724). Additionally, one study found that earlier surgery may reduce the length of hospital stays, but its effect on patient outcome remains unclear (PUBMED:25306929). Another study indicated that earlier surgery can reduce the incidence of certain complications and readmission rates among elderly patients with hip fractures (PUBMED:37287528). While these findings relate to hip fracture surgery rather than hip arthroscopy, they highlight the potential importance of surgical timing in patient outcomes. To answer the question about the impact of surgical timing on clinical results in hip arthroscopy specifically, one would need to review literature that directly addresses this procedure.
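Several of the abstracts above summarize event timing as a median day with an interquartile range (the NSQIP complication timings in PUBMED:31528066) and dichotomize cohorts by an early-versus-delayed cutoff of 2 days from admission (PUBMED:37287528). A minimal Python sketch of both computations follows; the sample data and function names are invented for illustration and do not reproduce the studies' datasets.

from statistics import median

def median_iqr(days):
    """Median and interquartile range of event times, in the style of
    the NSQIP complication-timing analysis (PUBMED:31528066)."""
    s = sorted(days)
    n = len(s)
    q1 = median(s[: n // 2])          # lower half
    q3 = median(s[(n + 1) // 2 :])    # upper half
    return median(s), (q1, q3)

def surgery_group(hours_to_surgery, cutoff_days=2):
    """Early vs. delayed grouping as used in PUBMED:37287528."""
    return "early" if hours_to_surgery <= cutoff_days * 24 else "delayed"

# Invented post-operative days on which a complication was diagnosed:
uti_days = [2, 3, 5, 8, 9, 12, 15, 15, 20]
print(median_iqr(uti_days))    # (9, (4.0, 15.0))
print(surgery_group(36))       # "early": 36 h is within 48 h of admission

The same grouping logic underlies the 48-hour best-practice recommendation repeated across the hip-fracture literature cited above.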
Instruction: Does rheumatic valvular heart disease affect right ventricular performance? Abstracts: abstract_id: PUBMED:23550424 Does rheumatic valvular heart disease affect right ventricular performance? Aim: Right ventricular (RV) function often determines clinical outcome in patients with valvular heart disease. Though RV function is difficult to assess by echocardiography, the Tei index is useful in its assessment. The aims of the study were to evaluate global RV function using the Tei index in patients with rheumatic heart disease and to observe if such abnormalities in RV function were reversible post-operatively. Method: The study included patients with atrial septal defect (ASD, Group I, n = 15) and rheumatic valvular heart disease (RVHD, Group II, n = 18). Patients with atrial fibrillation were excluded from the study. Conventional 2-D echocardiography was performed preoperatively, immediately postoperatively, and at last follow-up. Result: The ASD group had lower LVES and LVED dimensions as compared to the RVHD group (p = 0.001) and a better ejection fraction (EF) than the RVHD group (p = 0.02). LV Tei in the ASD group was above the normal limit (>0.5), while RV Tei was increased in the RVHD group. The median RVSP was similar in the two groups (p = 0.9). The impaired LVMPI in the ASD group improved as early as 2 weeks following surgery (p = 0.09), while in patients with RVHD it deteriorated, which mirrored the reduction in median LVEF (p = 0.04). Group II, which had an abnormal RV Tei pre-operatively, demonstrated improvement following surgery (p = 0.03). Conclusion: RVHD is associated with impairment of RV function. Volume overload of the RV in patients with ASD is associated with normal MPI. The abnormalities in RVMPI improved as early as 2 weeks after valve surgery with sustained improvement noted at follow-up. abstract_id: PUBMED:34341208 Right ventricular dysfunction in rheumatic heart valve disease: A clinicopathological evaluation. Background: Dysfunction of the right ventricle (RV) in rheumatic heart disease (RHD) is a poor prognostic factor. We planned to observe the clinicopathological changes in the RV of patients with RHD. Methods: We defined RV dysfunction by a myocardial performance index value of >0.4 on transthoracic echocardiography and included patients with isolated severe mitral stenosis in sinus rhythm with normal left ventricular (LV) function from April 2014 to April 2016. The patients were divided into two groups based on the absence (group I, n=21) and presence (group II, n=22) of RV dysfunction. RV muscle biopsy was evaluated for the presence of apoptosis, fibrosis and fat deposition apart from other clinical and echocardiography parameters. Results: Patients in both the groups had a similar demographic profile and LV dimensions and function. The age of the patients in the two groups was the only clinical parameter that was significantly different; older patients were in group II. A higher value for RV systolic pressure (RVSP) and the grade of tricuspid regurgitation was seen in group II. Though there was no significant difference in the presence of fibrosis and intensity of apoptosis in the RV biopsy samples, the deposition of fat in the interstitial spaces was decreased in group II. Age at presentation had no significant difference or correlation with the deposition of fibrosis or fat in the RV myocardial biopsy. Conclusions: Patients with RV dysfunction were older in age and their RVSP was raised at operation, suggesting that earlier intervention may help in preserving RV function.
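Both abstracts above rest on the myocardial performance (Tei) index, with RV dysfunction defined as a value above 0.4 (PUBMED:34341208). The standard Doppler-derived definition is MPI = (IVCT + IVRT) / ET, i.e., the isovolumic contraction plus relaxation times divided by the ejection time; the sketch below assumes that textbook formula and uses made-up interval values, not measurements from either study.

def tei_index(ivct_ms, ivrt_ms, et_ms):
    """Myocardial performance (Tei) index:
    (isovolumic contraction time + isovolumic relaxation time) / ejection time."""
    return (ivct_ms + ivrt_ms) / et_ms

def rv_dysfunction(mpi, threshold=0.4):
    """RV dysfunction cutoff used in PUBMED:34341208."""
    return mpi > threshold

mpi = tei_index(ivct_ms=65, ivrt_ms=75, et_ms=280)  # hypothetical intervals
print(round(mpi, 2), rv_dysfunction(mpi))           # 0.5 True

Because the index is a ratio of time intervals, it is relatively independent of heart rate and loading geometry, which is why these studies favor it for the difficult-to-image right ventricle.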
abstract_id: PUBMED:19497952 Pulmonary arterial hypertension in rheumatic mitral stenosis: does it affect right ventricular function and outcome after mitral valve replacement? Right ventricular function affects the outcome in valvular heart disease but less is known about the relation between indices of dysfunction and outcome. Seventy patients undergoing mitral valve replacement between April 2007 and April 2008 for predominant rheumatic mitral stenosis were included in the study. Two groups were formed based on right ventricular systolic pressure (RVSP), ≤40 mmHg (group I, n=16) and >41 mmHg (group II, n=54). Right ventricular (RV) function indices were studied by echocardiography. RVSP reduced significantly in group II (P=0.0001) but not in group I. Brain natriuretic peptide (BNP) was raised in all cases and reduced significantly postoperatively. Tricuspid annular plane excursion, myocardial performance index, RV descent and tricuspid valve annular shortening (TV shortening) conformed to RV dysfunction in both groups, and did not change significantly postoperatively. Regression analysis for outcome revealed TV shortening as the only significant factor (P=0.03). Receiver operating characteristic analysis of TV shortening and adverse outcome showed worse outcome with TV shortening of <11%. RV dysfunction was observed in all cases irrespective of RVSP. TV shortening of <11% was associated with adverse outcome. Postoperative fall in BNP levels may indicate a trend towards recovery. abstract_id: PUBMED:28361825 Evidence of apoptosis in right ventricular dysfunction in rheumatic mitral valve stenosis. Background & Objectives: Right ventricular (RV) dysfunction is one of the causes of morbidity and mortality in valvular heart disease. The phenomenon of apoptosis, though rare in cardiac muscle, may contribute to loss of its function. The role of apoptosis in the RV of patients with rheumatic valvular heart disease is investigated in this study. Methods: Patients with rheumatic mitral valve stenosis formed two groups based on RV systolic pressure (RVSP) as RVSP <40 mmHg (group I, n=9) and RVSP ≥40 mmHg (group II, n=30). Patients having atrial septal defect (ASD) with RVSP <40 mmHg served as controls (group III, n=15). The myocardial performance index was assessed for RV function. Real-time polymerase chain reaction was performed on muscle biopsy procured from the RV to assess expression of pro-apoptotic genes (Bax, cytochrome c, caspase 3 and Fas) and anti-apoptotic genes (Bcl-2). Apoptosis was confirmed by histopathology and terminal deoxynucleotidyl transferase-mediated dUTP nick end labelling. Results: Group II had significant RV dysfunction compared to group I (P=0.05), while caspase 3 (P=0.01) and cytochrome c (P=0.03) were expressed excessively in group I. When group I was compared to group III (control), though there was no difference in RV function, a highly significant expression of pro-apoptotic genes was observed in group I (Bax, P=0.02; cytochrome c, P=0.001; and caspase 3, P=0.01). There was a positive correlation between pro-apoptotic genes. Nuclear degeneration consistent with apoptosis was present in valve disease patients (groups I and II) while it was absent in patients with ASD. Interpretation & Conclusion: Our findings showed evidence of apoptosis in the RV of patients with valvular heart disease. Apoptosis was set early in the course of rheumatic valve disease even with lower RVSP, followed by RV dysfunction; however, expression of pro-apoptotic genes regressed.
abstract_id: PUBMED:7990282 Left ventricular-right atrial shunt with bacterial and rheumatic endocarditis of the tricuspid valve. A 14-year-old boy who was operated on for left ventricular-right atrial shunt with bacterial and rheumatic endocarditis of the tricuspid valve (TV) is reported. A perimembranous ventricular septal defect (VSD) had been noted at 4 months of age, and the patient complained of occasional high fever for half a year. Echocardiography showed a vegetation on the TV. The vegetation did not disappear despite administration of antibiotics for 2 weeks, and pulmonary embolism occurred. The patient then underwent resection of the vegetation and direct closure of the VSD. The left ventricular-right atrial shunt was formed by the direct communication between the VSD below the TV annulus and the perforation of the septal leaflet of the TV, which had adhered to the ventricular septum due to bacterial and rheumatic endocarditis. Earlier operation is needed for this type of case because antibiotics are not effective for the valvular disease. abstract_id: PUBMED:15615247 Tricuspid aseptic endocarditis revealing right endomyocardial fibrosis during an unrecognized Behçet's disease. A case report. Introduction: Aseptic endocarditis and/or endomyocardial fibrosis are rarely reported in Behçet's disease. Observation: We report the case of a 21-year-old man living in Algeria, revealed by verrucous tricuspid valvulitis extending to the ventricular endomyocardium and complicated by right heart failure, initially misdiagnosed and treated as infective endocarditis occurring on rheumatic cardiac after-effects. Discussion: We discuss the lack of specificity of Jones criteria and emphasize the need to include cardiac involvement in Behçet's disease in the differential diagnosis of rheumatic fever carditis. This message is notably important in some countries where the prevalence of these two entities is among the highest in the world. abstract_id: PUBMED:28465949 Giant Left and Right Atrium in Rheumatic Mitral Stenosis and Tricuspid Regurgitation. Dilation of atria occurs in patients with valvular heart disease, especially in rheumatic mitral regurgitation, mitral stenosis, or tricuspid valve abnormalities. We report a case of giant left and right atrium in the context of rheumatic mitral stenosis and severe tricuspid regurgitation in a 68-year-old woman. abstract_id: PUBMED:9021423 Right ventricular function before and after percutaneous balloon mitral valvuloplasty. The aim of this study was to evaluate right ventricular performance in patients with mitral stenosis and its modification by balloon valvuloplasty. Right ventricular volumes of 24 patients with postrheumatic mitral stenosis were determined by thermodilution 1 or 2 days before and 1 or 2 days after valvuloplasty. Right ventricular ejection fraction at rest was 43 (36-47)% (median and interquartile range). Right ventricular end-diastolic volume was 100 (86-119) ml/m2. Supine bicycle exercise (50 W) reduced right ventricular ejection fraction to 30 (29-37)% (P < 0.0001) and increased right ventricular end-diastolic volume to 124 (112-141) ml/m2 (P < 0.0001). At rest, right ventricular ejection fraction correlated inversely with pulmonary vascular resistance (r = -0.64, P < 0.0001), while no significant correlation with mitral valve area was found. Valvuloplasty increased right ventricular ejection fraction at rest to 48 (44-50)% (P < 0.005), and during exercise to 42 (38-45)% (P < 0.0001).
This improvement of right ventricular ejection fraction correlated inversely with the value of this parameter before valvuloplasty (r = -0.88, P < 0.0001) and with the gain in stroke volume (r = 0.57, P < 0.01). The right ventricular function curve, disturbed before commissurotomy, was reestablished by the procedure. In conclusion, at the stage of mitral stenosis investigated here, right ventricular function is reversibly impaired. This is predominantly caused by the hemodynamic consequences of the valvular defect and not by an impairment of right ventricular myocardial function. abstract_id: PUBMED:22548812 Right ventricular performance in congenital heart disease: a physiologic and pathophysiologic perspective. Underappreciated is the fact that the right ventricle is often the primary determinant of long-term morbidity and mortality in patients with congenital heart disease. Right ventricular performance in these patients depends on a unique set of physiologic and pathophysiologic factors that are rarely considered in acquired heart disease. This article explores this unique physiology and pathophysiology in the hope that it will enhance understanding of a wide variety of congenital cardiac anomalies. abstract_id: PUBMED:19480833 Right ventricular assessment with echocardiography. Right ventricular (RV) function is essential in cardiopulmonary physiology and physiopathology. RV dysfunction has prognostic impact in inferior myocardial infarction, significant valvulopathies, congenital cardiopathies, arterial pulmonary hypertension and in patients suffering from acute or chronic heart failure. RV analysis relies on noninvasive (echocardiography-Doppler, isotopic technology, cardiac magnetic resonance imaging) and/or invasive approaches (right cardiac catheterization). Until recently neglected, RV assessment has regained interest with tissue Doppler imaging, strain imaging and 2D speckle tracking. We review echocardiography and Doppler parameters used for right ventricular assessment. Answer: Yes, rheumatic valvular heart disease does affect right ventricular (RV) performance. Studies have shown that RV dysfunction is a poor prognostic factor in rheumatic heart disease (RHD) and is associated with impairment of RV function (PUBMED:23550424; PUBMED:34341208). The myocardial performance index (MPI), which is used to define RV dysfunction, has been found to be greater than 0.4 in patients with severe mitral stenosis and normal left ventricular function, indicating RV dysfunction (PUBMED:34341208). Additionally, evidence of apoptosis in the RV has been observed in patients with rheumatic mitral valve stenosis, which may contribute to the loss of RV function (PUBMED:28361825). Patients with RV dysfunction due to RHD were found to be older in age and had raised RV systolic pressure (RVSP) at operation, suggesting that earlier intervention may help in preserving RV function (PUBMED:34341208). Furthermore, RV dysfunction was observed in all cases of a study irrespective of RVSP, and tricuspid valve annular shortening was identified as a significant factor associated with adverse outcomes (PUBMED:19497952).
This suggests that RV dysfunction in RHD can be reversible post-operatively. In summary, rheumatic valvular heart disease has a significant impact on right ventricular performance, leading to RV dysfunction, which can be assessed by various echocardiographic measures and may improve with appropriate surgical intervention.
Instruction: Endothelial dysfunction of central retinal vessels: a prognostic parameter for diabetic retinopathy? Abstracts: abstract_id: PUBMED:18034395 Endothelial dysfunction of central retinal vessels: a prognostic parameter for diabetic retinopathy? Background: Endothelial dysfunction as a possible prognostic parameter seems to play a role in the course of diabetic retinopathy. Flicker-induced endothelial NO release may be used as an indicator for endothelial functionality of the central retinal vessels. Methods: Flicker-induced arterial vasodilation as well as complete internal medicine status were determined in 65 type 1 and 170 type 2 diabetics. Diabetic retinopathy was classified according ETDRS criteria. Furthermore, a group of 55 healthy subjects was used as control group. Results: Diabetic subjects showed with 2.1+/-2.2 (type 1) and 2.2+/-2.4 (type 2) a significantly decreased percent arterial vasodilation in comparison to healthy subjects (3.6+/-2.1; p&lt;or=0.001). With increasing stage of the diabetic retinopathy dilation of the retinal arterioles decreased significantly (p=0.002) while static arterial measurements before flicker testing did not show significant differences in the different stages of diabetic retinopathy. Diabetic patients without retinopathy already showed a noticeably reduced arterial dilation in comparison to healthy controls. These changes could be seen both in type 1 and type 2 diabetics. Patients with type 1 diabetes with proliferative diabetic retinopathy showed a mean percent dilation of 1.80+/-2.11, while these reactions had nearly disappeared in patients with type 2 diabetes (0.31+/-1.08). Conclusions: Both type 1 and type 2 diabetics showed significantly decreased flicker-induced arterial dilation as a sign of endothelial dysfunction in comparison to healthy controls. With increasing stage of the diabetic retinopathy dilation of the retinal arterioles decreased significantly. Diabetics without retinopathy already showed decreased flicker-induced reactions in comparison to healthy controls. Measurement of arterial flicker response may be useful for prognostic approaches in the case diabetes care. abstract_id: PUBMED:16890449 Retinal vascular manifestations of metabolic disorders. Metabolic diseases have profound effects on the structure and function of the retinal circulation. The recent development of retinal photography and digital imaging has enabled more precise documentation of diabetic retinopathy, as well as other retinal microvascular changes, such as retinal arteriolar narrowing, venular dilation and isolated retinopathy signs in nondiabetic individuals. These retinal microvascular signs have been shown to be associated with long-term risks of type 2 diabetes and hypertension, components of the metabolic syndrome (e.g. obesity, dyslipidemia), and a range of macro- and micro-vascular conditions (e.g. stroke, cardiovascular mortality). There is evidence that endothelial dysfunction and inflammation might be possible mechanisms involved in the development of various retinal microvascular changes in patients with diabetes, hypertension and other metabolic disorders. Further understanding of how these processes influence the retinal vasculature might help to elucidate the diverse vascular manifestations of metabolic diseases. abstract_id: PUBMED:25669631 The clinical implications of recent studies on the structure and function of the retinal microvasculature in diabetes. 
The retinal blood vessels provide the opportunity to study early structural and functional changes in the microvasculature prior to clinically significant microvascular and macrovascular complications of diabetes. Advances in digital retinal photography and computerised assessment of the retinal vasculature have provided more objective and precise measurements of retinal vascular changes. Clinic- and population-based studies have reported that these quantitatively measured retinal vascular changes (e.g. retinal arteriolar narrowing and venular widening) are associated with preclinical structural changes in other microvascular systems (e.g. infarct in the cerebral microcirculation), as well as diabetes and diabetic complications, suggesting that they are markers of early microvascular dysfunction. In addition, there are new retinal imaging techniques to further assess alterations in retinal vascular function (e.g. flicker-induced vasodilatory response, blood flow and oxygen saturation) in diabetes and complications that result from the effects of chronic hyperglycaemia, inflammation and endothelial dysfunction. In this review, we summarise the latest findings on the relationships between quantitatively measured structural and functional retinal vascular changes with diabetes and diabetic complications. We also discuss clinical implications and future research to evaluate whether detection of retinal vascular changes has additional value beyond that achieved with methods currently used to stratify the risk of diabetes and its complications. abstract_id: PUBMED:28149006 Role of Lipids in Retinal Vascular and Macular Disorders. Retinal diseases are a significant and increasing problem in every part of the world. While excellent treatment has emerged for various retinal diseases, treatment for early disease is lacking due to an incomplete understanding of all molecular events. With aging, there is a striking accumulation of neutral lipids in Bruch's membrane. These neutral lipids lead to the creation of a lipid wall at the same locations where drusen and basal linear deposit, pathognomonic lesions of age-related macular degeneration, subsequently form. High lipid levels are also known to cause endothelial dysfunction, an important factor in the pathogenesis of diabetic retinopathy. Various studies suggest that 20% of retinal vascular occlusion is connected to hyperlipidemia. Biochemical studies have implicated mutations in the gene encoding ABCA4, a lipid transporter, in the pathogenesis of Stargardt disease. This article reviews how systemic and local production of lipids might contribute to the pathogenesis of the above retinal disorders. abstract_id: PUBMED:19643973 Correlation of light-flicker-induced retinal vasodilation and retinal vascular caliber measurements in diabetes. Purpose: Subtle changes in retinal vascular caliber have been shown to predict diabetic retinopathy and other diabetic complications. This study was undertaken to investigate whether retinal vascular caliber correlates with light-flicker-induced retinal vasodilation, a measure of endothelial function. Methods: The participants were 224 persons with diabetes (85 type 1 and 139 type 2) and 103 persons without diabetes (controls). Flicker-induced retinal vasodilation (percentage increase over baseline diameter) was measured with a vessel analyzer. Retinal vascular caliber was measured from digital retinal photographs according to a standardized, validated protocol.
Data from both right and left eyes were used and modeled with generalized estimating equations to account for correlation between eyes. Results: In persons with diabetes, after adjustment for age and sex, reduced flicker-induced vasodilation was associated with wider retinal vascular caliber. Eyes with the lowest tertiles of flicker-induced arteriolar dilation had wider arteriolar caliber (5.40 μm; 95% confidence interval [CI], 1.76-9.05) and eyes with the lowest tertiles of flicker-induced venular dilation had corresponding wider venular caliber (12.4 μm; 95% CI, 6.48-18.2), respectively, than eyes with the highest tertile of vasodilation. These associations persisted after further adjusting for diabetes duration, systolic blood pressure, fasting glucose, lipids, body mass index, current smoking, and presence of diabetic retinopathy. No associations were evident in persons without diabetes. Conclusions: Changes in retinal vascular caliber (wider arterioles and venules) are associated with impaired flicker-induced vasodilation in persons with diabetes. Determining whether endothelial dysfunction explains the link between retinal vascular caliber and risks of diabetic microvascular complications calls for further study. abstract_id: PUBMED:26604209 Age- and diabetes-related changes of the retinal capillaries: An ultrastructural and immunohistochemical study. Normal human aging and diabetes are associated with a gradual decrease of cerebral flow in the brain with changes in vascular architecture. Thickening of the capillary basement membrane and microvascular fibrosis are evident in the central nervous system of elderly and diabetic patients. Current findings assign a primary role to endothelial dysfunction as a cause of basement membrane (BM) thickening, while retinal alterations are considered to be a secondary cause of either ischemia or exudation. The aim of this study was to reveal any initial retinal alterations and variations in the BM of retinal capillaries during diabetes and aging as compared to healthy controls. Moreover, we investigated the potential role of vascular endothelial growth factor (VEGF) and pro-inflammatory cytokines in diabetic retina. Transmission electron microscopy (TEM) was performed on 46 enucleated human eyes with particular attention to alterations of the retinal capillary wall and Müller glial cells. Inflammatory cytokine expression in the retina was investigated by immunohistochemistry. Our electron microscopy findings demonstrated that thickening of the BM begins primarily at the level of the glial side of the retina during aging and diabetes. The Müller cells showed numerous cytoplasmic endosomes and highly electron-dense lysosomes which surrounded the retinal capillaries. Our study is the first to present morphological evidence that Müller cells start to deposit excessive BM material in retinal capillaries during aging and diabetes. Our results confirm the induction of pro-inflammatory cytokines TNF-α and IL-1β within the retina as a result of diabetes. These observations strongly suggest that inflammatory cytokines and changes in the metabolism of Müller glial cells, rather than changes in endothelial cells, may play a primary role in the alteration of the retinal capillary BM during aging and diabetes.
Purpose: Retinal oxygen supply is a critical requirement in ocular function, and when it is inadequate it can lead to retinopathy. Endothelial dysfunction is a leading pathophysiology in diabetes and cardiovascular disease and may be assessed by endothelial microparticles (EMPs). We hypothesised links between retinal vessel oxygenation and EMPs, expecting these indices to be more adverse in those with both DM and CVD. Methods: Plasma from 34 patients with diabetes mellitus alone (DM), 40 with cardiovascular disease (CVD) alone and 36 with DM plus CVD was probed for EMPs by flow cytometry, and also for the vascular markers soluble E-selectin (sEsel) and von Willebrand factor (vWf) (both ELISA). Retinal vessel fractal dimension, lacunarity and calibres were assessed from monochromatic imaging, oxygen saturation from dual-wavelength imaging, and intra-ocular pressure was measured by rebound tonometry (I-CARE). Results: There was no difference in oxygenation (arterial p = 0.725, venous p = 0.264, arterio-venous difference p = 0.375) between the groups, but there were differences in EMPs (p = 0.049), vWf (p = 0.004) and sEsel (p = 0.032). In the entire cohort, and in diabetes alone, EMPs correlated with venous oxygenation (r = 0.24, p = 0.009 and r = 0.43, p = 0.011 respectively), while in DM + CVD, sEsel correlated with venous oxygenation (r = 0.55, p = 0.002) and with the arterial-venous difference (r = -0.63, p = 0.001). In multivariate regression analysis of vascular markers against retinal oximetry indices in the entire group, EMPs were positively linked to venous oxygenation (p = 0.037). Conclusions: Despite differences in systemic markers of vascular function between DM, CVD and DM + CVD, there was no difference in arterial or venous retinal oxygenation, or their difference. However, EMPs were linked to venous oximetry, and may provide further insight into the mechanisms underlying diabetes and diabetic retinopathy. abstract_id: PUBMED:23742315 Impaired retinal vasodilator responses in prediabetes and type 2 diabetes. Purpose: In diabetes, endothelial dysfunction and subsequent structural damage to blood vessels can lead to heart attacks, retinopathy and strokes. However, it is unclear whether prediabetic subjects exhibit microvascular dysfunction indicating early stages of arteriosclerosis and vascular risk. The purpose of this study was to examine whether retinal reactivity may be impaired early in the hyperglycaemic continuum and may be associated with markers of inflammation. Methods: Individuals with prediabetes (n = 22), type 2 diabetes (n = 25) and healthy age- and body-composition-matched controls (n = 19) were studied. We used the Dynamic Vessel Analyzer to assess retinal vasoreactivity (percentage change in vessel diameter) during a flickering light stimulation. Fasting highly sensitive C-reactive protein (hs-CRP), a marker of inflammation, was measured in blood plasma. Results: Prediabetic and diabetic individuals had attenuated peak vasodilator and relative amplitude changes in retinal vein diameters to the flickering light stimulus compared with healthy controls (peak dilation: prediabetic subjects 3.3 ± 1.8%, diabetic subjects 3.3 ± 2.1% and controls 5.6 ± 2.6%, p = 0.001; relative amplitude: prediabetic subjects 4.3 ± 2.2%, diabetic subjects 5.0 ± 2.6% and control subjects 7.2 ± 3.2%, p = 0.003). Similar findings were observed in retinal arteries. Levels of hs-CRP were not associated with either retinal vessel response parameter.
Conclusion: Retinal reactivity was impaired in prediabetic and type 2 diabetic individuals in parallel with reduced insulin sensitivity but was not associated with levels of hs-CRP. Retinal vasoreactivity measurements may be a sensitive tool to assess early vascular risk. abstract_id: PUBMED:16396626 Are retinal microvascular abnormalities associated with large artery endothelial dysfunction and intima-media thickness? The Hoorn Study. It has been hypothesized that microvascular dysfunction contributes to endothelial dysfunction of the large arteries, which may explain the relationship of microvascular disease with macrovascular disease. The aim of the present study was to investigate the relationship of retinal microvascular disorders with endothelium-dependent FMD (flow-mediated vasodilatation) and carotid IMT (intima-media thickness). A total of 256 participants, aged 60-85 years, 70 with normal glucose metabolism, 69 with impaired glucose metabolism and 109 with Type II diabetes, were included in this study. All participants were ophthalmologically examined, including funduscopy and two-field 45 degrees fundus photography, and were graded for retinal sclerotic vessel abnormalities and retinopathy. Retinal arteriolar and venular diameters were measured with a computer-assisted method. Brachial artery endothelium-dependent FMD and carotid IMT were assessed ultrasonically as measures of endothelial function and early atherosclerosis, respectively. After adjustment for age, sex and glucose tolerance status, retinal vessel diameters, retinal sclerotic vessel abnormalities and retinopathy were not significantly associated with FMD. In contrast with other retinal microvascular abnormalities, retinal venular dilatation was associated with increased IMT [standardized beta value (95% confidence interval), 0.14 (0.005-0.25)]. This association was attenuated and lost statistical significance after adjustment for cardiovascular risk factors, in particular after correction for fasting insulin. In the present study, retinal microvascular disorders are not independently associated with impaired FMD. In addition, retinal venular dilatation is associated with increased IMT, although non-significantly after multivariable adjustment for cardiovascular risk factors. Therefore our data provide evidence that retinal microvascular disease is of limited value in risk stratification for future cardiovascular events. abstract_id: PUBMED:19404666 Diabetic patients with retinopathy show increased retinal venous oxygen saturation. Background: Longstanding diabetes mellitus results in a disturbed microcirculation. A new imaging oximeter was used to investigate the effect of this disturbance on retinal vessel oxygen saturation. Methods: The haemoglobin oxygen saturation was measured in the retinal arterioles and venules of 41 diabetic patients (65 +/- 12.3 years) with mild non-proliferative through proliferative diabetic retinopathy (DR). Twelve individuals (61.3 +/- 6.2 years, mean +/- standard deviation) without systemic or ocular disease were investigated as controls. Measurements were taken by an imaging oximeter (oxygen module by Imedos GmbH, Jena). This technique is based on the proportionality between oxygen saturation and the ratio of the optical densities of the vessel at two wavelengths (548 nm and 610 nm).
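To make the dual-wavelength principle just described concrete, the following minimal sketch derives optical densities from vessel and fundus-background brightness and maps the 610/548 nm OD ratio to a saturation estimate. It assumes a linear calibration; the constants a and b are hypothetical placeholders, not the values used by the Imedos oxygen module.

```python
import math

def optical_density(i_vessel: float, i_background: float) -> float:
    """OD = log10(I_background / I_vessel) for a retinal vessel segment."""
    return math.log10(i_background / i_vessel)

def oxygen_saturation(od_610: float, od_548: float, a: float = 1.17, b: float = -1.34) -> float:
    """Estimate SO2 (0..1) from the ratio of optical densities at an
    oxygen-sensitive (610 nm) and an isosbestic (548 nm) wavelength.
    a and b are device-specific calibration constants; the values here
    are illustrative placeholders only."""
    ratio = od_610 / od_548
    return max(0.0, min(1.0, a + b * ratio))

# Example with made-up brightness values: little absorption at 610 nm -> high saturation
od610 = optical_density(i_vessel=180.0, i_background=220.0)
od548 = optical_density(i_vessel=90.0, i_background=220.0)
print(f"estimated SO2: {oxygen_saturation(od610, od548):.0%}")  # ~87%
```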
Results: Whereas there were no significant differences in the arterial oxygen saturation between controls and diabetic retinopathy at any stage, the venous oxygen saturation increased in diabetic patients with the severity of the retinopathy: controls 63 +/- 5%, mild non-proliferative DR 69 +/- 7%, moderate non-proliferative DR 70 +/- 5%, severe non-proliferative DR 75 +/- 5%, and proliferative DR 75 +/- 8%. Conclusions: The increase of retinal vessel oxygen saturation in diabetic retinopathy points to a diabetic microvascular alteration. This may be due to occlusions and obliterations in the capillary bed and the formation of arterio-venous shunt vessels. On the other hand, hyperglycaemia-induced endothelial dysfunction, with subsequent suppression of the endothelial NO-synthase and disturbance of the vascular auto-regulation, may contribute to retinal tissue hypoxia. Answer: Endothelial dysfunction of central retinal vessels may indeed serve as a prognostic parameter for diabetic retinopathy. Diabetic patients, both type 1 and type 2, exhibit significantly decreased flicker-induced arterial dilation compared with healthy controls, which is indicative of endothelial dysfunction (PUBMED:18034395). This reduction in dilation of retinal arterioles correlates with the increasing stage of diabetic retinopathy (PUBMED:18034395). The measurement of arterial flicker response could be useful for prognostic approaches in diabetes care, as diabetics without retinopathy already show decreased flicker-induced reactions compared to healthy individuals. Furthermore, retinal vascular manifestations of metabolic disorders, including diabetes, are associated with long-term risks of type 2 diabetes and hypertension. Endothelial dysfunction and inflammation are potential mechanisms involved in the development of retinal microvascular changes in patients with metabolic diseases (PUBMED:16890449). Retinal vasodilator responses are also impaired in prediabetes and type 2 diabetes, which may indicate early stages of arteriosclerosis and vascular risk, even before significant microvascular and macrovascular complications become clinically evident (PUBMED:23742315). Additionally, studies have shown that changes in retinal vascular caliber, such as wider arterioles and venules, are associated with impaired flicker-induced vasodilation in persons with diabetes (PUBMED:19643973). This suggests that endothelial dysfunction could explain the link between retinal vascular caliber and risks of diabetic microvascular complications.
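As an aside on the metric itself: the flicker responses cited in this answer are plain percentage changes of vessel diameter over baseline. A one-function sketch with hypothetical diameters:

```python
def flicker_dilation_percent(baseline_um: float, peak_um: float) -> float:
    """Percentage increase of vessel diameter over baseline during flicker stimulation."""
    return (peak_um - baseline_um) / baseline_um * 100.0

# Hypothetical arteriole: 105 um at baseline dilating to 110 um under flicker
print(f"{flicker_dilation_percent(105.0, 110.0):.1f}%")  # ~4.8%
```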
Instruction: Is there any role for community involvement in the community-based health planning and services skilled delivery program in rural Ghana? Abstracts: abstract_id: PUBMED:25113017 Is there any role for community involvement in the community-based health planning and services skilled delivery program in rural Ghana? Background: In Ghana, between 1,400 and 3,900 women and girls die annually due to pregnancy related complications and an estimated two-thirds of these deaths occur in late pregnancy through to 48 hours after delivery. The Ghana Health Service piloted a strategy that involved training Community Health Officers (CHOs) as midwives to address the gap in skilled attendance in rural Upper East Region (UER). CHO-midwives collaborated with community members to provide skilled delivery services in rural areas. This paper presents findings from a study designed to assess the extent to which community residents and leaders participated in the skilled delivery program and the specific roles they played in its implementation and effectiveness. Methods: We employed an intrinsic case study design with a qualitative methodology. We conducted 29 in-depth interviews with health professionals and community stakeholders. We used a random sampling technique to select the CHO-midwives in three Community-based Health Planning and Services (CHPS) zones for the interviews and a purposive sampling technique to identify and interview District Directors of Health Services from the three districts, the Regional Coordinator of the CHPS program and community stakeholders. Results: Community members play a significant role in promoting skilled delivery care in CHPS zones in Ghana. We found that community health volunteers and traditional birth attendants (TBAs) helped to provide health education on skilled delivery care, and they also referred or accompanied their clients for skilled attendants at birth. The political authorities, traditional leaders, and community members provide resources to promote the skilled delivery program. Both volunteers and TBAs are given financial and non-financial incentives for referring their clients for skilled delivery. However, inadequate transportation, infrequent supply of drugs, attitude of nurses remains as challenges, hindering women accessing maternity services in rural areas. Conclusions: Mutual collaboration and engagement is possible between health professionals and community members for the skilled delivery program. Community leaders, traditional and political leaders, volunteers, and TBAs have all been instrumental to the success of the CHPS program in the UER, each in their unique way. However, there are problems confronting the program and we have provided recommendations to address these challenges. abstract_id: PUBMED:25518900 Can community health officer-midwives effectively integrate skilled birth attendance in the community-based health planning and services program in rural Ghana? Background: The burden of maternal mortality in sub-Saharan Africa is very high. In Ghana maternal mortality ratio was 380 deaths per 100,000 live births in 2013. Skilled birth attendance has been shown to reduce maternal mortality and morbidity, yet in 2010 only 68 percent of mothers in Ghana gave birth with the assistance of skilled birth attendants. 
In 2005, the Ghana Health Service piloted a strategy that involved using the integrated Community-based Health Planning and Services (CHPS) program and training Community Health Officers (CHOs) as midwives to address the gap in skilled attendance in the rural Upper East Region (UER). The study assesses the feasibility of, and the extent to which, the skilled delivery program has been implemented as an integrated component of the existing CHPS, and documents the benefits and challenges of the integrated program. Methods: We employed an intrinsic case study design with a qualitative methodology. We conducted 41 in-depth interviews with health professionals and community stakeholders. We used a purposive sampling technique to identify and interview our respondents. Results: The CHO-midwives provide integrated services that include skilled delivery in CHPS zones. The midwives collaborate with District Assemblies, Non-Governmental Organizations (NGOs) and communities to offer skilled delivery services in rural communities. They refer pregnant women with complications to district hospitals and health centers for care, and there has been observed improvement in the referral system. Stakeholders reported community members' access to skilled attendants at birth, health education, antenatal attendance and postnatal care in rural communities. The CHO-midwives are provided with financial and non-financial incentives to motivate them for optimal work performance. The primary challenges that remain include inadequate numbers of CHO-midwives, insufficient transportation, and infrastructure weaknesses. Conclusions: Our study demonstrates that CHOs can successfully be trained as midwives and deployed to provide skilled delivery services at the doorsteps of rural households. The integration of the skilled delivery program with the CHPS program appears to be an effective model for improving access to skilled birth attendance in rural communities of the UER of Ghana. abstract_id: PUBMED:28874157 Male involvement in maternal healthcare through Community-based Health Planning and Services: the views of the men in rural Ghana. Background: The need to promote maternal health in Ghana has committed the government to extend maternal healthcare services to the doorsteps of rural families through the Community-based Health Planning and Services. Based on the concerns raised in previous studies that male spouses were indifferent towards maternal healthcare, this study sought the views of men on their involvement in maternal healthcare in their respective communities and at the household level in the various Community-based Health Planning and Services zones in the Awutu-Senya West District in the Central Region of Ghana. Methods: A qualitative method was employed. Focus groups and individual interviews were conducted with married men, community health officers, community health volunteers and community leaders. The participants were selected using purposive, quota and snowball sampling techniques. The study used thematic analysis for analysing the data. Results: The study shows varying involvement of men: some were directly involved in feminine gender roles, while others used their female relatives and co-wives to perform the women's roles that did not have space for them. They were not necessarily indifferent towards maternal healthcare; rather, they were involved in the spaces provided by the traditional gender division of labour.
Amongst other things, the perpetuation and reinforcement of traditional gender norms around pregnancy and childbirth influenced the nature and level of male involvement. Conclusions: Sustained male involvement, especially of husbands and CHVs, is required at the household and community levels for positive maternal outcomes. The Ghana Health Service, health professionals and policy makers should take traditional gender role expectations into consideration in the planning and implementation of maternal health promotion programmes. abstract_id: PUBMED:24721385 Using the community-based health planning and services program to promote skilled delivery in rural Ghana: socio-demographic factors that influence women's utilization of skilled attendants at birth in northern Ghana. Background: The burden of maternal mortality in sub-Saharan Africa is enormous. In Ghana, the maternal mortality ratio was 350 per 100,000 live births in 2010. Skilled birth attendance has been shown to reduce maternal deaths and disabilities, yet in 2010 only 68% of mothers in Ghana gave birth with skilled birth attendants. In 2005, the Ghana Health Service piloted an enhancement of its Community-Based Health Planning and Services (CHPS) program, training Community Health Officers (CHOs) as midwives, to address the gap in skilled attendance in the rural Upper East Region (UER). The study determined the extent to which the CHO-midwives' skilled delivery program achieved its desired outcomes in the UER among birthing women. Methods: We conducted a cross-sectional household survey with women who had given birth in the three years prior to the survey. We employed a two-stage sampling technique: in the first stage we proportionally selected enumeration areas, and the second stage involved random selection of households. In each household where there was more than one woman with a child within the age limit, we interviewed the woman with the youngest child. We collected data on awareness of the program, use of the services and factors that are associated with skilled attendance at birth. Results: A total of 407 households/women were interviewed. Eighty-three percent of respondents knew that CHO-midwives provided delivery services in CHPS zones. Seventy-nine percent of the deliveries were with skilled attendants, and over half of these skilled births (42% of the total) were by CHO-midwives. Multivariate analyses showed that women of the Nankana ethnic group and those with uneducated husbands were less likely to access skilled attendants at birth in rural settings. Conclusions: The implementation of the CHO-midwife program in the UER appeared to have contributed to expanded skilled delivery care access and utilization for rural women. However, women of the Nankana ethnic group and uneducated men must be targeted with health education to improve women's utilization of skilled delivery services in rural communities of the region. abstract_id: PUBMED:33951049 Assessing selection procedures and roles of Community Health Volunteers and Community Health Management Committees in Ghana's Community-based Health Planning and Services program. Background: Community participation in health care delivery will ensure service availability and accessibility and guarantee community ownership of the program.
Community-based strategies such as the involvement of Community Health Volunteers (CHVs) and Community Health Management Committees (CHMCs) are likely to advance primary healthcare in general, but the criteria for selecting CHVs and CHMCs and the efforts to sustain these roles are not clear 20 years after implementing the Community-based Health Planning and Services program. We examined the process of selecting these cadres of community health workers and their current role within Ghana's flagship program for primary care: the Community-based Health Planning and Services program. Methods: This was an exploratory study design using qualitative methods to appraise the health system and stakeholder participation in Community-based Health Planning and Services program implementation in the Upper East region of Ghana. We conducted 51 in-depth interviews and 33 focus group discussions with health professionals and community members. Results: Community Health Volunteers and Community Health Management Committees are the representatives of the community in the routine implementation of the Community-based Health Planning and Services program. They are selected, appointed, or nominated by their communities. Some inherit the position through apprenticeship and others are recruited through advertisement. The selection is mostly initiated by the health providers and carried out by community members. Community Health Volunteers lead community mobilization efforts, support health providers in health promotion activities, manage minor illnesses, and encourage pregnant women to use maternal health services. Community Health Volunteers also translate health messages delivered by health providers to the people in their local languages. Community Health Management Committees mobilize resources for the development of Community-based Health Planning and Services program compounds. They play a mediatory role between health providers in the health compounds and the community members. Volunteers are sometimes given non-financial incentives, but there are suggestions to include financial incentives. Conclusion: Community Health Volunteers and Community Health Management Committees play a critical role in primary health care. The criteria for selecting Community Health Volunteers and Community Health Management Committees vary but need to be standardized to ensure that only self-motivated individuals are selected. Thus, CHVs and CHMCs should contest for their positions, be endorsed by their community members and be assigned roles by health professionals in the CHPS zones. Efforts to sustain them within the health system should include the provision of financial incentives. abstract_id: PUBMED:17401450 Strategies of immunization in Ghana: the experience of a "community-based" health planning in a rural country Access to immunization of children and to prevention services is a relevant issue in resource-poor settings like the rural areas of Western Africa. The Ghanaian government has launched the "Community-based Health Planning and Services" initiative (CHPS), a programme that, through the institution of local clinics in small villages, the activity of their nurses, and the involvement of local communities and traditional institutions, improves the population's access to primary care and prevention. Our survey in the Jomoro district confirmed that this model is effective in achieving higher coverage rates for all childhood immunizations.
abstract_id: PUBMED:38504682 Community-based Health Planning and Services programme in Ghana: a systematic review. Introduction: Ghana established Community-based Health Planning and Services (CHPS) as the primary point of contact for primary healthcare in 1999. CHPS has since emerged as the country's primary strategy for providing close-to-client healthcare delivery, with numerous positive health outcomes recorded as a result of its implementation. There is, however, currently a paucity of systematic reviews of the literature on CHPS. The purpose of this study was not only to investigate dominant trends and research themes in Community-based Health Planning and Services, but also to track the evolution of the CHPS intervention from its inception to the present. Method: We adopted a systematic review approach for selected articles that were searched on the Google Scholar, PubMed, and Scopus databases. The study was conducted and guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. We then applied a reflexive thematic analysis approach in synthesizing the results. Results: The search resulted in 127 articles, of which 59 were included in the final review. Twenty (20) papers targeted the national level, eighteen (18) the regional level, sixteen (16) the district level, two (2) the sub-district level, and three (3) the community level. The years 2017 and 2019 had the highest numbers of publications on CHPS in Ghana. Conclusion: Community-based Health Planning and Services (CHPS) is an effective tool in addressing barriers and challenges to accessing quality and affordable health care, with significant effects on health. It provides close-to-client healthcare delivery in the community. abstract_id: PUBMED:30443928 The influence of the Community-based Health Planning and Services (CHPS) program on community health sustainability in the Upper West Region of Ghana. Ghana introduced Community-based Health Planning and Services (CHPS) to improve primary health care in rural areas. The extension of health care services to rural areas has the potential to increase the sustainability of community health. Drawing on the capitals framework, this study aims to understand the contribution of CHPS to the sustainability of community health in the Upper West Region of Ghana, the poorest region in the country. We conducted in-depth interviews with community members (n = 25), key informant interviews with health officials (n = 8), and focus group discussions (n = 12: made up of six to eight participants per group) in six communities from two districts. Findings show that through their mandate of primary health care provision, CHPS contributed directly to improvement in community health (e.g., access to family planning services) and indirectly through strengthening social, human, and economic capital, thereby improving social cohesion, awareness of health care needs, and willingness to take action at the community level. Despite the current contributions of CHPS to improving the sustainability of community health, there are several challenges, based on which we recommend that the government increase staffing and infrastructure in order to strengthen and maintain the functionality of CHPS. abstract_id: PUBMED:32396035 'I have a lot of faith in her': Value of community health workers in addressing family planning in rural Ghana.
In rural settings with shortages of trained health care workers, community health workers (CHWs) play an important role in the delivery of health care services. The Ghana Health Service initiated a national CHW programme in 2016 to expand health services to rural populations. This study explored the perceived role and value of CHWs in addressing family planning issues in the Amansie West district of Ghana. The study included in-depth interviews (IDIs) with 28 women in the community, ages 18-49, and 30 CHWs. Using inductive thematic analysis, IDIs were coded to explore opinions on the CHWs' role and perceived value in the delivery of family planning. Participants explained that CHWs provided family planning as part of a healthcare package through household visits and referrals to government services. The value of CHWs in delivering family planning was seen in confidentiality, accessibility, and comfort. Participants recommended an enlarged CHW workforce with a range of commodities and programmatic support. The findings suggest CHWs play an important role in promoting family planning by serving as a bridge between the community and clinics. In rural communities where resources are scarce, CHWs are an invaluable part of the broader healthcare system. abstract_id: PUBMED:32799990 Access to primary health care among women: the role of Ghana's community-based health planning and services policy. Background: Ghana in 1999 adopted the Community-based Health Planning and Services (CHPS) policy to enhance access to primary health care (PHC) services. After two decades of implementation, there remains a considerable proportion of the country's population, especially women, who lack access to basic health care services. Aim: The aim of this paper is to understand the contribution of Ghana's CHPS policy to women's access to PHC services in the Upper West Region (UWR) of Ghana. Methods: A logistic regression technique was employed to analyse cross-sectional data collected among 805 women from the UWR. Findings: We found that women who resided in CHPS zones (OR = 1.612; P ≤ 0.01) were more likely to have access to health care compared with their counterparts who resided in non-CHPS zones. Also, rural-urban residence, distance to the health facility, household wealth status and marital status predicted access to health care among women in the region. Our findings underscore the need to expand the CHPS policy to cover many areas in the country, especially rural communities and other deprived localities in urban settings. Answer: Yes, there is a significant role for community involvement in the community-based health planning and services (CHPS) skilled delivery program in rural Ghana. Community members, including community health volunteers and traditional birth attendants (TBAs), actively participate in promoting skilled delivery care in CHPS zones. They help provide health education on skilled delivery care and refer or accompany clients for skilled birth attendants. Political authorities, traditional leaders, and community members contribute resources to support the skilled delivery program, and both volunteers and TBAs receive financial and non-financial incentives for referring clients for skilled delivery (PUBMED:25113017). The integration of skilled delivery services with the CHPS program has been shown to be an effective model for improving access to skilled birth attendance in rural communities.
CHO-midwives collaborate with District Assemblies, Non-Governmental Organizations (NGOs), and communities to offer skilled delivery services in rural areas, and there has been an observed improvement in the referral system for pregnant women with complications (PUBMED:25518900). Men's involvement in maternal healthcare is also recognized, with varying levels of engagement. Some men are directly involved in roles traditionally associated with women, while others support maternal healthcare within the spaces provided by traditional gender division of labor (PUBMED:28874157). The implementation of the CHO-midwife program in the Upper East Region (UER) has contributed to expanded skilled delivery care access and utilization for rural women. However, certain socio-demographic factors, such as ethnicity and husband's education level, influence the utilization of skilled attendants at birth (PUBMED:24721385). Community Health Volunteers (CHVs) and Community Health Management Committees (CHMCs) are integral to the routine implementation of the CHPS program. They are selected by their communities and play a critical role in mobilizing resources, supporting health promotion activities, and encouraging the use of maternal health services (PUBMED:33951049). In summary, community involvement is crucial to the success of the CHPS skilled delivery program in rural Ghana, with community members, leaders, and health volunteers all playing instrumental roles in its implementation and effectiveness.
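One quantitative detail from the abstracts above is worth unpacking: the access estimate in PUBMED:32799990 (OR = 1.612) comes from logistic regression and therefore scales odds, not probabilities. Below is a small sketch of the conversion, using a hypothetical baseline access probability since the abstract does not report one:

```python
def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Convert a baseline probability into the probability implied by an odds ratio."""
    odds = p_baseline / (1.0 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1.0 + new_odds)

# Hypothetical baseline: if 50% of women outside CHPS zones had access to care,
# an OR of 1.612 would imply roughly 61.7% access inside CHPS zones.
print(f"{apply_odds_ratio(0.50, 1.612):.1%}")
```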
Instruction: Are temporary inferior vena cava filters really temporary? Abstracts: abstract_id: PUBMED:10091301 Thrombosed temporary vena cava filter The application of temporary vena cava filters for the treatment of deep venous thrombosis of the lower extremity has become increasingly important in recent years. The filters are supposed to guarantee temporary protection from more extensive pulmonary embolism. Occlusion of the filter system by a larger embolus, as well as vena cava thrombosis involving the filter struts, presents major therapeutic problems. We report on one patient in whom the temporarily inserted filter was trapped in a large vena cava thrombus and had to be removed surgically by caval thrombectomy. Because of possible complications such as the above, the indication for insertion of temporary vena cava filters requires thorough consideration. Their duration of stay should be as short as possible and should be limited to the high-risk phase, not exceeding ten days. abstract_id: PUBMED:11586454 Clinical experience with temporary vena cava filters. An experience with temporary filter placement, which seems to be safe and effective for temporarily preventing pulmonary embolism, is reported. Since October 1997, six patients had temporary filters. There were two men and four women, with a mean age of 37 years. Three filters were placed at the infrarenal inferior vena cava, two at the suprarenal inferior vena cava, and one at the superior vena cava. All filters were placed before various surgical interventions. During filter placement, anticoagulation therapy was routinely performed. There were no complications at or during filter placement. No pulmonary emboli occurred during surgical intervention. All filters were successfully removed, two of which were exchanged for permanent filters. All patients are alive and well without recurrent deep vein thrombosis and/or pulmonary emboli during a follow-up period of 11 to 25 months. Although this experience is small, temporary filter placement is safe and effective for short-term prevention of pulmonary emboli even in older patients or those with malignant disease. Veins of the upper part of the body may be more favorable than the femoral vein for insertion of a temporary filter. Temporary filters can be safely placed not only at the infrarenal inferior vena cava, but also at the suprarenal inferior vena cava or superior vena cava. abstract_id: PUBMED:14753313 Temporary and permanent inferior vena cava filter combination in a young patient: to implant or not to implant? The decision to implant vena cava filters, either temporary or permanent, is difficult in young patients. We present the case of a young man with pulmonary embolism in whom temporary and permanent inferior vena cava filters were implanted. The decision process is discussed in relation to the current literature. abstract_id: PUBMED:16307934 Are temporary inferior vena cava filters really temporary? Background: Despite significant risk for venous thromboembolism, severely injured trauma patients often are not candidates for prophylaxis or treatment with anticoagulation. Long-term inferior vena cava (IVC) filters are associated with an increased risk of postphlebitic syndrome. Retrievable IVC filters potentially offer a better solution, but only if the filter is removed; our hypothesis is that most of them are not. Methods: This retrospective study queried a level I trauma registry for IVC filter insertion from September 1997 through June 2004.
Results: One IVC filter was placed before the availability of retrievable filters in 2001. Since 2001, 27 filters have been placed, indicating a change in practice patterns. Filters were placed for prophylaxis (n = 11) or for therapy in patients with pulmonary embolism or deep vein thrombosis (n = 17). Of 23 temporary filters, only 8 (35%) were removed. Conclusions: Surgeons must critically evaluate indications for IVC filter insertion, develop standard criteria for placement, and implement protocols to ensure timely removal of temporary IVC filters. abstract_id: PUBMED:28470480 Bedside implantation of a new temporary vena cava inferior filter: German results from the European ANGEL registry Background: Pulmonary embolism (PE) is a frequently occurring complication in critically ill patients, and the simultaneous occurrence of PE and life-threatening bleeding is a therapeutic dilemma. Inferior vena cava filters (IVCF) may represent an important therapeutic alternative in these cases. The Angel® catheter (Bio2 Medical Inc., San Antonio, TX, USA) is a novel IVCF that provides temporary protection from PE and is implanted at the bedside without fluoroscopy. Material And Methods: The European Angel® Catheter Registry is an observational, multicenter study. In our German substudy, we investigated patients from three German hospitals and four intensive care units who underwent Angel® catheter implantation between February 2016 and December 2016. Results: A total of 23 critically ill patients (68 ± 9 years, 43% male) were included. The main indication for implantation was a high risk for or an established PE, combined with contraindications for prophylactic or therapeutic anticoagulation due to either an increased risk of bleeding (81%) or active bleeding (13%). The Angel® catheter was successfully inserted in all patients at the bedside. No PE occurred in patients with an indwelling Angel® catheter. Clots with a diameter larger than 20 mm, indicating clot migration, were detected in 5% of the patients by cavography before filter retrieval. Filter retrieval was uneventful in all of our cases, while filter dislocation occurred in 3% of the patients. Conclusion: The German data from the multicenter European Angel® Catheter Registry show that the Angel® catheter is a safe and effective approach for critically ill patients with a high risk for the development of PE or an established PE, when anticoagulation therapy is contraindicated. abstract_id: PUBMED:30524576 Successful catheter intervention for deep vein thrombosis due to inferior vena cava stenosis after retrieval of a temporary inferior vena cava filter. Inferior vena cava (IVC) stenosis is a well-known complication of the IVC filter. However, there are no previous reports of IVC stenosis caused by a temporary IVC filter. In this case report, we describe the case of a 35-year-old man who was referred to our center for the treatment of recurrent proximal deep vein thrombosis (DVT) and severe IVC stenosis that occurred after retrieval of a temporary IVC filter. We performed catheter-directed thrombolysis and balloon angioplasty. The DVT resolved effectively, and his leg symptoms resolved as well. Learning objective: Although IVC filter-related stenosis is not common, it should be managed, even when a temporary IVC filter is used. The combination of catheter-directed thrombolysis and balloon angioplasty may be considered for a proximal deep vein thrombosis complicated with IVC stenosis.
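The retrieval figures in this entry are simple proportions, e.g., 8 of 23 filters removed above, and the next abstract reports its rates with 95% confidence intervals. Here is a minimal sketch of how such an interval can be computed; the Wilson score method is an assumption, as the cited studies do not state which method they used:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and Wilson score 95% CI for a proportion."""
    p = successes / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, center - half, center + half

p, lo, hi = wilson_ci(8, 23)  # 8 of 23 temporary filters removed (PUBMED:16307934)
print(f"removal rate {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```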
abstract_id: PUBMED:36517606 Temporary inferior vena cava filters: factors associated with non-removal. Objectives: Inferior vena cava filter (IVCF) placement is indicated when there is a deep vein thrombosis and/or a pulmonary embolism and a contraindication to anticoagulation. Due to the increased risk of recurrent deep venous thrombosis when the filter is left in place, IVCF removal is indicated once anticoagulant treatment can be reintroduced. However, many temporary IVCF are not removed. We aimed to analyze the removal rate and the predictors of filter non-removal in a university hospital setting. Methods: We collected the data of all consecutive patients who had a retrievable IVCF inserted at the Saint-Etienne University Hospital (France) between April 2012 and November 2019. Rates of filter removal were calculated. We analyzed patient characteristics to assess factors associated with filter non-removal, particularly in patients without a definitive filter indication. The exclusion of this last category of patients allowed us to calculate an adjusted removal rate. Results: The overall removal rate of IVCF was 40.5% (95% CI 35.6-45.6%), and the adjusted removal rate was 62.9% (95% CI 56.6-69.2%). No major complications were noted. Advanced age (p < 0.0001) and cancer presence (p < 0.003) were statistically significant predictors of patients not being requested to make a removal attempt. Conclusions: Although most of the filters placed are for therapeutic indications validated by scientific societies, the removal rate in this setting remains suboptimal. The major factors influencing the IVCF removal rate are advanced age and cancer presence. Key Points: • Most vena cava filters are placed for therapeutic indications validated by scientific societies. • Vena cava filter removal rates in this setting remain suboptimal. • Major factors influencing the IVCF removal rate are advanced age and cancer presence. abstract_id: PUBMED:7670020 First experiences with temporary vena cava filters Aim: For prophylaxis we used 3 different temporarily insertable vena cava filters. Material And Methods: In 49 patients we inserted 12 Cook filters (6 transjugular, 6 transfemoral), 11 Angio-cor filters (1 transjugular, 10 transbrachial), and 26 Antheor filters (1 transjugular, 1 transfemoral, 24 transbrachial). 35 patients underwent lysis therapy, 11 a major operation of the pelvis, and 3 patients a Caesarean section. Results: No patient suffered from a clinically significant pulmonary embolism after filter insertion, but complications occurred that were caused either by the underlying therapy (1 lethal abdominal aortic aneurysm operation, 1 cerebral bleeding, 2 retroperitoneal haematomas, 2 streptokinase fever reactions, 1 compartment syndrome, 1 macrohaematuria) or by the filter insertion itself (2 groin haematomas, 2 haematomas of the bend of the elbow, 2 subclavian vein thromboses, 1 catheter dislocation, 1 infection, 1 air embolism, 1 fracture of a filter-basket leg). Conclusion: Temporary vena cava filters are highly efficient in preventing pulmonary embolism, but the side effects show that they should only be inserted in patients with known deep vein thrombosis and high-risk treatment of the underlying disease. abstract_id: PUBMED:16500541 Clinical experience with Günther temporary inferior vena cava filters. This retrospective study was performed to evaluate the safety and effectiveness of Günther temporary inferior vena cava (IVC) filters.
Fifteen Günther temporary filters were placed in 13 patients because of deep vein thrombosis (DVT) with pulmonary embolism (PE) despite DVT prophylaxis (9/13) or temporary contraindications for anticoagulants as well as recent or pending surgery (4/13). No clinical manifestation of PE was observed during the filtration or during the removal. Günther temporary IVC filters are easy and safe to use, and are effective in clot trapping, protecting patients at high risk for PE in whom anticoagulation treatment failed or is contraindicated. abstract_id: PUBMED:8547623 Temporary cava filter: effective prophylaxis of pulmonary embolism in venous thrombosis in the region of the pelvic vascular system and the inferior vena cava? Purpose: To assess the temporary Günther filter system concerning handling, complication rate and efficacy in respect to prophylaxis of pulmonary embolism. Patients And Methods: 45 temporary Günther filters were placed in 44 patients (mean age 46 years) with deep venous thrombosis of the iliac and/or inferior vena cava. Results: The mean time until filter retrieval was 6.1 +/- 4.1 days (maximum 14 days) dependent on the success of either thrombolytic therapy or thrombus consolidation. 27 patients underwent systemic thrombolytic therapy (61%), three patients (7%) received local thrombolysis. 14 patients (32%) were treated with heparin in a therapeutic dosage (PTT &gt; 60 sec). We observed four complications (8.8%): one accidental arterial puncture (carotid artery on attempting an internal jugular vein approach), one infection located at the puncture site and one caval vein thrombosis in a patient with known heparin-associated thrombocytopenia. One patient died of pulmonary embolism despite correct filter positioning. Conclusion: Percutaneous placement of inferior vena cava filters is therefore an easy, safe and effective prophylaxis in respect of pulmonary embolism if combined with thrombolytic therapy or therapeutic heparinisation. Answer: Temporary inferior vena cava (IVC) filters are designed to be temporary solutions for preventing pulmonary embolism in patients with deep vein thrombosis or other conditions where there is a risk of blood clots traveling to the lungs. However, the actual practice of removing these filters seems to be inconsistent. A retrospective study found that of 23 temporary filters placed, only 8 (35%) were removed, suggesting that in practice, many temporary IVC filters are not retrieved as intended (PUBMED:16307934). This indicates that while the filters are designed to be temporary, they may often become permanent by default due to non-removal. The reasons for non-removal of temporary IVC filters can vary. Factors such as advanced age and the presence of cancer have been identified as statistically significant predictors of patients not being requested to make a removal attempt (PUBMED:36517606). Additionally, the indication for the insertion of temporary IVC filters requires careful consideration, and their duration of stay should be as short as possible, limited to the high-risk phase and not exceeding ten days (PUBMED:10091301). Despite these concerns, clinical experiences with temporary IVC filters have reported them to be safe and effective for short-term prevention of pulmonary emboli, even in older patients or those with malignant disease (PUBMED:11586454). 
Moreover, successful interventions such as catheter-directed thrombolysis and balloon angioplasty have been performed for complications like IVC stenosis after retrieval of a temporary IVC filter (PUBMED:30524576). In conclusion, while temporary IVC filters are intended to be temporary, the reality is that they are not always removed as planned, which can effectively make them permanent in some patients. This underscores the importance of developing protocols to ensure timely removal and critically evaluating indications for IVC filter insertion (PUBMED:16307934).
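The "adjusted removal rate" reported in PUBMED:36517606 is worth making explicit: filters placed for a definitive (permanent) indication are excluded from the denominator before the rate is recalculated. A back-of-the-envelope sketch with hypothetical counts, chosen only to roughly reproduce the reported 40.5% overall and 62.9% adjusted rates:

```python
def removal_rates(removed: int, total: int, definitive_indication: int) -> tuple[float, float]:
    """Overall rate uses all filters placed; the adjusted rate excludes filters
    that were never candidates for retrieval (definitive indication)."""
    overall = removed / total
    adjusted = removed / (total - definitive_indication)
    return overall, adjusted

# Hypothetical counts (not the study's raw data): 162 of 400 filters removed,
# with 142 filters placed for a definitive indication.
overall, adjusted = removal_rates(162, 400, 142)
print(f"overall {overall:.1%}, adjusted {adjusted:.1%}")  # overall 40.5%, adjusted ~62.8%
```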
Instruction: Is the guideline for work-related medical rehabilitation successfully implemented? Abstracts: abstract_id: PUBMED:24390869 Is the guideline for work-related medical rehabilitation successfully implemented? Background: The guideline for work-related medical rehabilitation (WMR) under the responsibility of the German Pension Insurance describes standards for work-related measures in medical rehabilitation. We investigated whether the contents and the recommended amount of treatment were successfully implemented and which improvements were associated with the implementation. Methods: Implementation of the WMR guideline was evaluated at 7 inpatient orthopaedic rehabilitation centres. Patients completed questionnaires at the beginning of rehabilitation, at discharge and 3 months after discharge. Details regarding the treatments provided were extracted from the standardised discharge report. Results: The recommended amounts of social counselling and work-related psychosocial therapy measures were appropriate. However, there were discrepancies regarding the recommended amount of functional capacity training. The standardised mean difference (SMD) between baseline and 3-month follow-up sick leave duration indicated an almost medium-sized effect (SMD=0.47; 95% CI: 0.28-0.66). An additional 5 h of work-related therapy was associated with a 1.2-week decrease in sick leave duration (95% CI: -2.38 to -0.03). Conclusion: The guideline was for the most part successfully implemented and sets important standards for the roll-out of WMR. The nationwide implementation of the WMR guideline requires continuous quality assurance that enables prompt feedback about the achieved implementation level. abstract_id: PUBMED:33941123 Work-related medical rehabilitation in patients with mental disorders: the protocol of a randomized controlled trial (WMR-P, DRKS00023175). Background: Various rehabilitation services and return-to-work programs have been developed in order to reduce sickness absence and increase sustainable return-to-work. To ensure that people with a high risk of not returning to work can participate in working life, the model of work-related medical rehabilitation was developed in Germany. The efficacy of these programs in patients with mental disorders has been tested in only a few trials with very specific intervention approaches. To date, there is no clear evidence of the effectiveness of work-related medical rehabilitation implemented in real-care practice. Methods/design: Our randomized controlled trial will be conducted in six rehabilitation centers across Germany. Within 15 months, 1800 patients with mental disorders (300 per rehabilitation center) will be recruited and assigned one-to-one either to a work-related medical rehabilitation program or to a conventional psychosomatic rehabilitation program. Participants will be aged 18-60 years. The control group will receive a conventional psychosomatic rehabilitation program without additional work-related components. The intervention group will receive a work-related medical rehabilitation program that contains at least 11 h of work-related treatment modules. Follow-up data will be assessed at the end of the rehabilitation and 3 and 12 months after completing the rehabilitation program. The primary outcome is a stable return to work. Secondary outcomes cover several dimensions of health, functioning and coping strategies. Focus groups and individual interviews supplement our study with qualitative data.
Discussion: This study will determine the relative effectiveness of a complex and newly implemented work-related rehabilitation strategy for patients with mental disorders. Trial Registration: German Clinical Trials Register (DRKS00023175, September 29, 2020). abstract_id: PUBMED:31451849 Work-related medical rehabilitation in neurology: Effective on the basis of individualized rehabilitant identification Background: Evidence for the effectiveness of work-related medical rehabilitation (WMR) for a successful return to work (RTW) is lacking for neurological diseases. The aim of this study was therefore to correlate the cross-indication screening instrument for the identification of the need for work-related medical rehabilitation (SIMBO-C) with the individualized clinical anamnestic determination of severe restrictions of work ability (SRWA) as a required access criterion for admittance to neurological WMR. A further aim was to compare the rate of successful RTW in rehabilitants with and without WMR measures 6 months after inpatient rehabilitation. Methods: On admission, SRWA were routinely screened by an individualized clinical anamnestic determination with subsequent assignment to WMR or conventional rehabilitation. At the beginning of rehabilitation the SIMBO-C was applied, and 6 months after the rehabilitation the RTW status was surveyed. Results: Of the 80 rehabilitants, 44 (55%) received WMR. On admission they showed a higher SIMBO-C score (41.3 ± 15.7 vs. 26.2 ± 18.6 points, p = 0.002), on discharge more often locomotor and psychomental disorders (55% vs. 36%, p = 0.10 and 46% vs. 22%, p = 0.03, respectively) and longer incapacitation times after rehabilitation of > 4 weeks (66% vs. 33%, p = 0.02) compared to those without WMR. At the 6-month follow-up after discharge, the 2 groups did not significantly differ with respect to successful RTW (61% vs. 66%, p = 0.69). The SIMBO-C (cut-off ≥ 30 points) showed a medium correlation with the individualized clinical anamnestic determination of SRWA (r = 0.33, p = 0.01). Conclusion: The applied neurological WMR concept achieved a comparable RTW rate between rehabilitants with SRWA who received WMR and those without SRWA who received conventional rehabilitation. The SIMBO-C should only be used in combination with the individualized anamnesis to identify SRWA. abstract_id: PUBMED:28219096 Work-Related Medical Rehabilitation Work-related medical rehabilitation (WMR) is a strategy to improve work participation in patients with poor work ability. This review summarizes the state of knowledge on WMR. The prevalence of poor work ability and the corresponding need for WMR is high (musculoskeletal disorders: 43%; mental disorders: 57%). The meta-analysis of randomized controlled trials in patients with musculoskeletal disorders shows better return-to-work outcomes after one year for WMR patients compared with patients participating in usual medical rehabilitation. The amount of work-related measures in rehabilitation has clearly increased in recent years. A direct involvement of the workplace and a closer cooperation with employers and occupational health physicians may further improve the outcomes of WMR. abstract_id: PUBMED:31594839 Effects of nationwide implementation of work-related medical rehabilitation in Germany: propensity score matched analysis. Objectives: Since 2014, the Federal German Pension Insurance has approved several departments to implement work-related medical rehabilitation programmes across Germany.
Our cohort study was launched to assess the effects of work-related medical rehabilitation under real-life conditions. Methods: Participants received either a common or a work-related medical rehabilitation programme. Propensity score matching was used to identify controls that were comparable to work-related medical rehabilitation patients. The effects were assessed by patient-reported outcome measures 10 months after completing the rehabilitation programme. Results: We compared 641 patients who were treated in work-related medical rehabilitation with 641 matched controls. Only half of the treated patients had high initial work disability risk scores and were intended to be reached by the new programmes. The dose of work-related components was on average in accordance with the guideline; however, the heterogeneity was high. Work-related medical rehabilitation increased the proportion of patients returning to work by 5.8 percentage points (95% CI 0.005 to 0.110), decreased the median time to return to work by 9.46 days (95% CI -18.14 to -0.79), and improved self-rated work ability by 0.38 points (95% CI 0.05 to 0.72) compared with common medical rehabilitation. A per-protocol analysis revealed that work-related medical rehabilitation was more effective if patients were assigned according to the guideline and the minimal mandatory treatment dose was actually delivered. Conclusions: The implementation of work-related medical rehabilitation in German rehabilitation centres affected work participation outcomes. Improving guideline fidelity (reach and dose delivered) will probably improve the outcomes in real-world care. Trial Registration Number: DRKS00009780. abstract_id: PUBMED:32252122 Work-Related Medical Rehabilitation in Patients with Musculoskeletal Disorders: a Propensity-Score-Analysis Purpose: Work-related medical rehabilitation is a multimodal interdisciplinary approach to reduce health-related discrepancies between work capacity and job demands in order to achieve work participation, especially for patients with severely restricted work ability. The study tested the effects of a work-related medical rehabilitation program, implemented in routine care, compared with common medical rehabilitation in patients with musculoskeletal disorders. Methods: Data were assessed in 2014 and 2015 and were analyzed by an as-treated analysis. By means of propensity score matching, participants in work-related medical rehabilitation (intervention group, IG) were compared with similar participants in common medical rehabilitation (control group, CG). The primary outcome was a positive work status one year after discharge from rehabilitation. Treatment effects were analyzed by logistic regressions, and absolute risk reductions (ARR) were calculated. Results: 312 patients (156 in the IG) were included in the analysis one year after rehabilitation. Propensity score matching achieved balanced sample characteristics. Work-related medical rehabilitation increased the probability of a positive work status by 11 percentage points (ARR=0.11; 95% CI: 0.02, 0.20; p=0.020) compared to common medical rehabilitation. Conclusion: Work-related medical rehabilitation leads to better work participation outcomes after one year compared with common medical rehabilitation. abstract_id: PUBMED:27465148 Effectiveness of work-related medical rehabilitation in cancer patients: study protocol of a cluster-randomized multicenter trial.
Background: Work is a central resource for cancer survivors as it not only provides income but also impacts health and quality of life. Additionally, work helps survivors to cope with the perceived critical life event. The German Pension Insurance provides medical rehabilitation for working-age patients with chronic diseases to improve and restore their work ability and support returning to or staying at work, and thus tries to sustainably avoid health-related early retirement. Past research showed that conventional medical rehabilitation programs do not sufficiently support returning to work and that work-related medical rehabilitation programs report higher return-to-work rates across several health conditions when compared to medical rehabilitation. Therefore, the current study protocol outlines an effectiveness study of such a program for cancer survivors. Methods: To evaluate the effectiveness of work-related medical rehabilitation in cancer patients we conduct a cluster-randomized multicenter trial. In total, 504 rehabilitation patients between 18 and 60 years with a Karnofsky Performance Status of ≥70%, a preliminary positive social-medical prognosis of employability for at least 3 h/day within the next 6 months and an elevated risk of not returning to work will be recruited in four inpatient rehabilitation centers. Patients are randomized to the work-related medical rehabilitation program or the conventional medical rehabilitation program based on their week of arrival at each rehabilitation center. The work-related medical rehabilitation program comprises additional work-related diagnostics, multi-professional team meetings, an introductory session as well as work-related functional capacity training, work-related psychological groups, and social counseling. All additional components are aimed at the adjustment of the patients' capacity in relation to their individual job demands. Role functioning is the main study outcome and will be assessed with the EORTC-QLQ30. Secondary outcome measures are the remaining scales of the EORTC-QLQ30, fatigue, self-rated work ability, disease coping, participation in working life, realization of work-related goals and therapies during rehabilitation, and treatment satisfaction. Discussion: A positive evaluation of work-related medical rehabilitation in cancer patients is expected due to the promising findings on the effectiveness of such programs for patients with other health conditions. Results may support the dissemination of work-related medical rehabilitation programs in German cancer rehabilitation. Trial Registration: German Clinical Trials Register DRKS00007770. Registered 13 May 2015. abstract_id: PUBMED:30473020 Implementing the German Model of Work-Related Medical Rehabilitation: Did the Delivered Dose of Work-Related Treatment Components Increase? Objectives: Work-related components are an essential part of rehabilitation programs to support the return to work of patients with musculoskeletal disorders. In Germany, a guideline for work-related medical rehabilitation was developed to increase work-related treatment components. In addition, new departments were approved to implement work-related medical rehabilitation programs. The aim of our study was to explore the state of implementation of the guideline's recommendations by describing the change in the delivered dose of work-related treatments. Design: Nonrandomized controlled trial (cohort study). Setting: Fifty-nine German rehabilitation centers.
Participants: Patients (N=9046) with musculoskeletal disorders were treated in work-related medical rehabilitation or common medical rehabilitation. Patients were matched one-to-one by propensity scores. Interventions: Work-related medical rehabilitation in 2014 and medical rehabilitation in 2011. Main Outcome Measures: Treatment dose of work-related therapies. Results: The mean dose of work-related therapies increased from 2.2 hours (95% confidence interval [CI], 1.6-2.8) to 8.9 hours (95% CI, 7.7-10.1). The mean dose of social counseling increased from 51 to 84 minutes, the mean dose of psychosocial work-related groups from 39 to 216 minutes, and the mean dose of functional capacity training from 39 to 234 minutes. The intraclass correlation of 0.67 (95% CI, 0.58-0.75) for the total dose of work-related therapies indicated that the variance explained by centers was high. Conclusions: The delivered dose of work-related components was increased. However, there were discrepancies between the guideline's recommendations and the actual dose delivered in at least half of the centers. It is very likely that this will affect the effectiveness of work-related medical rehabilitation in practice. abstract_id: PUBMED:28197661 Rehabilitation and work participation Work participation is increasingly seen as a primary outcome of rehabilitation measures. Randomised controlled trials from several different countries and the reviews and meta-analyses based on them show that multidisciplinary rehabilitation programmes improve work participation and return-to-work rates, and reduce sickness absence, in patients with back pain, depression, and cancer. In Germany, such programmes were implemented as work-related medical rehabilitation. This intervention targets patients with poor work ability and an increased risk of permanent work disability. Randomised controlled trials have confirmed a reduction of sickness absence and increased rates of sustainable work participation in favour of work-related medical rehabilitation as compared to common medical rehabilitation. Dissemination of these programmes and translation of research evidence into practice progress. An additional important strategy to support returning to work following rehabilitation is graded return to work. There is emerging evidence of sustainable employment effects in favour of graded return to work. A direct involvement of the workplace and a closer cooperation with employers and occupational health physicians may further improve the outcomes of rehabilitation programmes. Strategies that synergistically integrate safety, health promotion and rehabilitation may achieve more favourable outcomes than separate actions. abstract_id: PUBMED:27534527 Work-related medical rehabilitation in patients with musculoskeletal disorders: the protocol of a propensity score matched effectiveness study (EVA-WMR, DRKS00009780). Background: Musculoskeletal disorders are one of the most important causes of work disability. Various rehabilitation services and return-to-work programs have been developed in order to reduce sickness absence and increase sustainable return-to-work. As the effects of conventional medical rehabilitation programs on sickness absence duration were shown to be slight, work-related medical rehabilitation programs have been developed and tested.
While such studies proved the efficacy of work-related medical rehabilitation compared with conventional medical rehabilitation in well-conducted randomized controlled trials, its effectiveness under real-life conditions has yet to be proved. Methods/design: The cohort study will be performed under real-life conditions with two parallel groups. Participants will receive either a conventional or a work-related medical rehabilitation program. Propensity score matching will be used to identify controls that are comparable to treated work-related medical rehabilitation patients. Over a period of three months, about 18,000 insured patients with permission to undergo a musculoskeletal rehabilitation program will be contacted. Of these, 15,000 will receive a conventional and 3,000 a work-related medical rehabilitation. We expect a participation rate of 40% at baseline. Patients will be aged 18 to 65 years and have chronic musculoskeletal disorders, usually back pain. The control group will receive a conventional medical rehabilitation program without any explicit focus on work, work ability and return to work in diagnostics and therapy. The intervention group will receive a work-related medical rehabilitation program that in addition to common rehabilitation treatments contains 11 to 25 h of work-related treatment modules. Follow-up data will be assessed three and ten months after patients' discharge from the rehabilitation center. Additionally, department characteristics will be assessed and administrative data records used. The primary outcomes are sick leave duration, stable return to work and subjective work ability. Secondary outcomes cover several dimensions of health, functioning and coping strategies. Discussion: This study will determine the relative effectiveness of a complex, newly implemented work-related rehabilitation strategy for patients with musculoskeletal disorders. Trial Registration: German Clinical Trials Register (DRKS00009780, February 10, 2016). Answer: The guideline for work-related medical rehabilitation (WMR) in Germany has been implemented with partial success. According to the evaluation conducted at seven inpatient orthopaedic rehabilitation centres, the recommended amounts of social counselling and work-related psychosocial therapy measures were found to be appropriate. However, there were discrepancies regarding the recommended amount of functional capacity training. The implementation of the guideline showed an almost medium-sized effect on reducing sick leave duration, with an additional 5 hours of work-related therapy associated with a 1.2-week decrease in sick leave duration (PUBMED:24390869). Furthermore, the nationwide implementation of WMR has been shown to affect work participation outcomes positively. A study found that WMR increased the proportion of patients returning to work, decreased the median time to return to work, and improved self-rated work ability compared with common medical rehabilitation. However, the study also highlighted the importance of improving guideline fidelity, including reach and dose delivered, to enhance outcomes in real-world care (PUBMED:31594839). Another study focusing on patients with musculoskeletal disorders found that work-related medical rehabilitation led to better work participation outcomes after one year compared to common medical rehabilitation (PUBMED:32252122).
Additionally, a nonrandomized controlled trial indicated that the delivered dose of work-related components had increased, but there were still discrepancies between the guideline's recommendations and the actual dose delivered in at least half of the centers, which could affect the effectiveness of WMR in practice (PUBMED:30473020). Overall, while the guideline for WMR has been implemented to some extent and has shown positive effects on work participation outcomes, there is still room for improvement in terms of fully adhering to the guideline's recommendations and ensuring consistent delivery of the recommended treatment components across rehabilitation centers.
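Several of the rehabilitation studies above rest on propensity score matching to build comparable control groups. As a rough, purely illustrative sketch of that design (not the authors' actual pipeline; the covariates and data below are synthetic), propensity scores can be estimated with a logistic regression and controls selected by 1:1 nearest-neighbour matching:

    # Minimal 1:1 nearest-neighbour propensity score matching sketch (Python).
    # Covariates and data are synthetic, not the published studies' pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.normal(size=(n, 3))        # e.g. age, baseline work ability, risk score
    treated = rng.integers(0, 2, size=n).astype(bool)  # True = work-related rehabilitation

    # Propensity score: P(treatment | covariates).
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    # Greedy 1:1 matching without replacement on the propensity score.
    controls = list(np.where(~treated)[0])
    pairs = []
    for t in np.where(treated)[0]:
        j = min(controls, key=lambda c: abs(ps[c] - ps[t]))
        pairs.append((t, j))
        controls.remove(j)
    print(f"matched {len(pairs)} treated/control pairs")

After matching, covariate balance would normally be checked (for example via standardized mean differences) before estimating treatment effects, as the studies above do when they report "balanced sample characteristics".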
Instruction: Increased short- and long-term mortality at low-volume pediatric heart transplant centers: should minimum standards be set? Abstracts: abstract_id: PUBMED:21183849 Increased short- and long-term mortality at low-volume pediatric heart transplant centers: should minimum standards be set? Retrospective data analysis. Objective: The relationship between volume and outcome in many complex surgical procedures is well established. Background: No published data have examined this relationship in pediatric cardiac transplantation, but low-volume adult heart transplant programs seem to have higher early mortality. Methods: The United Network for Organ Sharing (UNOS) provided center-specific data for the 4647 transplants performed on patients younger than 19 years old, 1992 to 2007. Patients were stratified into 3 groups based on the volume of transplants performed in the previous 5 years at that center: low [<19 transplants, n = 1135 (24.4%)], medium [19–62 transplants, n = 2321 (50.0%)], and high [≥63 transplants, n = 1191 (25.6%)]. A logistic regression model for postoperative mortality was developed and observed-to-expected (O:E) mortality rates calculated for each group. Results: Unadjusted long-term survival decreased with decreasing center volume (P<0.0001). Observed postoperative mortality was higher than expected at low-volume centers [O:E ratio 1.39, 95% confidence interval (CI) 1.05–1.83]. At low-volume centers, high-risk patients (1.34, 0.85–2.12), especially patients 1 year old or younger (1.60, 1.07–2.40) or those with congenital heart disease (1.36, 0.94–1.96), did poorly, but those at high-volume centers did well (congenital heart disease: 0.90, 0.36–1.26; age <1 year: 0.75, 0.51–1.09). Similar results were obtained in the subset of patients transplanted after 1996. In multivariate logistic regression modeling, transplantation at a low-volume center was associated with an odds ratio for postoperative mortality of 1.60 (95% CI, 1.14–2.24); transplantation at a medium-volume center had an odds ratio of 1.24 (95% CI, 0.92–1.66). Conclusion: The volume of transplants performed at any one center has a significant impact on outcomes. Regionalization of care is one option for improving outcomes in pediatric cardiac transplantation. abstract_id: PUBMED:36591862 Long-term mortality in pediatric solid organ recipients-A nationwide study. Background: The present study aimed at investigating long-term mortality of patients who underwent solid organ transplantation during childhood and at identifying their causes of death. Methods: A cohort of 233 pediatric solid organ transplant recipients who had a kidney, liver, or heart transplantation between 1982 and 2015 in Finland were studied. Year of birth-, sex-, and hometown-matched controls (n = 1157) were identified using the Population Register Center registry. The Causes of Death Registry was utilized to identify the causes of death. Results: Among the transplant recipients, there were 60 (25.8%) deaths (median follow-up 18.0 years, interquartile range of 11.0-23.0 years). Transplant recipients' risk of death was nearly 130-fold higher than that of the controls (95% CI 51.9-1784.6). The 20-year survival rates for kidney, liver, and heart recipients were 86.1% (95% CI 79.9%-92.3%), 58.5% (95% CI 46.2%-74.1%), and 61.4% (95% CI 48.1%-78.4%), respectively. The most common causes of death were cardiovascular diseases (23%), infections (22%), and malignancies (17%).
There were no significant differences in survival based on sex or transplantation era. Conclusion: Late mortality remains significantly higher among pediatric solid organ recipients in comparison with controls. Cardiovascular complications, infections, and cancers are the main causes of late mortality for all studied transplant groups. These findings underscore the importance of careful monitoring of pediatric transplant recipients in order to reduce long-term mortality. abstract_id: PUBMED:26298167 The Effect of Institutional Volume on Complications and Their Impact on Mortality After Pediatric Heart Transplantation. Background: This study evaluated the potential association of institutional volume with survival and mortality subsequent to major complications in a modern cohort of pediatric patients after orthotopic heart transplantation (OHT). Methods: The United Network of Organ Sharing database was queried for pediatric patients (aged ≤18 years) undergoing OHT between 2000 and 2010. Institutional volume was defined as the average number of transplants completed annually during each institution's active period and was evaluated as categoric and as a continuous variable. Logistic regression models were used to determine the effect of institutional volumes on postoperative outcomes, which included renal failure, stroke, rejection, reoperation, infection, and a composite complication outcome. Cox modeling was used to analyze the risk-adjusted effect of institutional volume on 30-day, 1-year, and 5-year mortality. Kaplan-Meier estimates were used to compare differences in unconditional survival. Results: A total of 3,562 patients (111 institutions) were included and stratified into low-volume (<6.5 transplants/year, 91 institutions), intermediate-volume (6.5 to 12.5 transplants/year, 12 institutions), and high-volume (>12.5 transplants/year, 8 institutions) tertiles. Unadjusted survival was significantly different at 30 days (p = 0.0087) in the low-volume tertile (94.2%; 95% confidence interval, 92.7% to 95.4%) compared with the high-volume tertile (96.8%; 95% confidence interval, 95.7% to 97.7%). No difference was observed at 1 or 5 years. Risk-adjusted Cox modeling demonstrated that low-volume institutions had an increased rate of mortality at 30 days (hazard ratio, 1.91; 95% confidence interval, 1.02 to 3.59; p = 0.044), but not at 1 or 5 years. High-volume institutions had lower incidences of postoperative complications than low-volume institutions (30.3% vs 38.4%, p < 0.001). Despite this difference in the rate of complications, survival in patients with a postoperative complication was similar across the volume tertiles. Conclusions: No association was observed between institutional volume and adjusted or unadjusted long-term survival. High-volume institutions have a significantly lower rate of postoperative complications after pediatric OHT. This association does not correlate with increased subsequent mortality in low-volume institutions. Given these findings, strategies integral to the allocation of allografts in adult transplantation, such as regionalization of care, may not be as relevant to pediatric OHT. abstract_id: PUBMED:18805171 Increased mortality at low-volume orthotopic heart transplantation centers: should current standards change? Background: The Centers for Medicare and Medicaid Services (CMS) mandate that orthotopic heart transplantation (OHT) centers perform 10 transplants per year to qualify for funding.
We sought to determine whether this cutoff is meaningful and establish recommendations for optimal center volume using the United Network for Organ Sharing (UNOS) registry. Methods: We reviewed UNOS data (years 1999 to 2006) identifying 14,401 first-time adult OHTs conducted at 143 centers. Stratification was by mean annual institution volume. Primary outcomes of 30-day and 1-year mortality were assessed by multivariable logistic regression (adjusted for comorbidities and risk factors for death). Sequential volume cutoffs were examined to determine if current CMS standards are optimal. Pseudo-R² and the area under the receiver operating characteristic curve assessed goodness of fit. Results: Mean annual volume ranged from 1 to 90. One-year mortality was 12.6% (n = 1,800). Increased center volume was associated with decreased 30-day mortality (p < 0.001). Decreased center volume was associated with increases in 30-day (odds ratio [OR] 1.03, 95% confidence interval [CI]: 1.02 to 1.03, p < 0.001) and 1-year mortality (OR 1.01, 95% CI: 1.01 to 1.02, p = 0.03; censored for 30-day death). The greatest mortality risk occurred at very low-volume centers (≤2 cases: 2.15-fold increase in death, p = 0.03). Annual institutional volume of fewer than 10 cases per year increased 30-day mortality by more than 100% (OR 2.02, 95% CI: 1.46 to 2.80, p < 0.001) and each decrease in mean center volume by one case per year increased the odds of 30-day mortality by 2% (OR 1.02, 95% CI: 1.01 to 1.03, p < 0.001). Additionally, centers performing fewer than 10 OHTs per year had increased cumulative mortality by Cox proportional hazards regression (hazard ratio 1.35, 95% CI: 1.14 to 1.60, p < 0.001). Sequential multivariable analyses suggested that current CMS standards may not be optimal, as all centers performing more than 40 transplants per year demonstrated less than 5% 30-day mortality. Conclusions: Annual center volume is an independent predictor of short-term mortality in OHT. These data support reevaluation of the current CMS volume cutoff for OHT, as high-volume centers achieve lower mortality. abstract_id: PUBMED:33233874 Pediatric heart transplantation: how to manage problems affecting long-term outcomes? Since the initial International Society of Heart Lung Transplantation registry was published in 1982, the number of pediatric heart transplantations has increased markedly, reaching a steady state of 500-550 transplantations annually and occupying up to 10% of total heart transplantations. Heart transplantation is considered an established therapeutic option for patients with end-stage heart disease. The long-term outcomes of pediatric heart transplantations were comparable to those of adults. Issues affecting long-term outcomes include acute cellular rejection, antibody-mediated rejection, cardiac allograft vasculopathy, infection, prolonged renal dysfunction, and malignancies such as posttransplant lymphoproliferative disorder. This article focuses on medical issues before pediatric heart transplantation, according to the Korean Network of Organ Sharing registry, as well as major problems such as graft rejection and cardiac allograft vasculopathy. To reduce graft failure rate and improve long-term outcomes, meticulous monitoring for rejection and medication compliance are also important, especially in adolescents. abstract_id: PUBMED:36345684 Impact of occurrence of cardiac arrest in the donor on long-term outcomes of pediatric heart transplantation.
Objective: The impact of cardiac arrest in the donor on long-term outcomes of pediatric heart transplantation has not been studied. Methods: The UNOS database was queried for primary pediatric heart transplantation (1999-2020). The cohort was divided into recipients who received a cardiac allograft from a donor who had a cardiac arrest (CA) versus a donor who did not (NCA). Univariable and multivariable analyses were done to compare recipient outcomes, followed by survival analysis using the Kaplan-Meier method. Results: A total of 7300 patients underwent heart transplantation, of which 579 (7.9%) patients belonged to the CA group. The CA group was younger (median 3 vs. 5 years, p < .001), more often male (51% vs. 47%, p = .03), and smaller in weight (13 vs 17 kg, p < .001) and height (101 vs 109 cm, p < .001) than the NCA group. The groups were similar in recipient heart failure diagnosis and blood type. The CA donors were younger (3 vs 6 years, p < .001), more often nonwhite (48% vs 45%, p = .003), and died from drowning and asphyxiation compared to blunt injury and intracranial hemorrhage in the NCA group. The left-ventricular ejection fraction was similar between the groups. There was no difference in VAD and ECMO use before the transplant. The listing status, waitlist days, and allograft ischemic times were similar. Posttransplant morbidity such as stroke, dialysis, pacemaker implantation, and treated rejection were similar. Donor cardiac arrest (hazard ratio = 0.93, p = .5) was not an independent predictor of mortality on multivariable analysis. There was no survival difference even beyond 20 years of follow-up between the groups (p = .88). Conclusion: The occurrence of donor cardiac arrest has no impact on long-term survival in pediatric heart transplant recipients. abstract_id: PUBMED:31123765 Impact of Long-Term Support with Berlin Heart EXCOR® in Pediatric Patients with Severe Heart Failure. Berlin Heart EXCOR® (BHE) ventricular assist device (VAD; Berlin Heart AG, Berlin, Germany) implantation is prevalent in patients with severe heart failure. However, clinical outcomes of pediatric patients on long-term BHE support remain largely unknown. This study aimed to report our clinical experience with long-term support of pediatric patients with severe heart failure supported by BHE VAD. Clinical outcomes of 11 patients (median age 8.4 months; two male) who underwent LVAD implantation of the Berlin Heart EXCOR® (BHE) VAD between 2013 and 2017 at our institution were reviewed. The median support period was 312 (range 45-661) days and five patients were supported for more than 1 year. The longest support duration was 661 days. No mortality occurred, and six patients were successfully bridged to heart transplantation, while three patients were successfully weaned off the device. Two patients are currently on BHE support while they await heart transplantation. Four patients had cerebral bleeding or infarction, but only one case of persistent neurological deficit occurred. No fatal device-related infection occurred during LVAD support. BHE VAD can provide long-term support for pediatric patients with severe heart failure with acceptable mortality and morbidity rates. abstract_id: PUBMED:27009672 Supporting pediatric patients with short-term continuous-flow devices. Background: Short-term continuous-flow ventricular assist devices (STCF-VADs) are increasingly being used in the pediatric population.
However, little is known about the outcomes in patients supported with these devices. Methods: All pediatric patients supported with a STCF-VAD, including the Thoratec PediMag or CentriMag, or the Maquet RotaFlow, between January 2005 and May 2014, were included in this retrospective single-center study. Results: Twenty-seven patients (15 girls [56%]) underwent 33 STCF-VAD runs in 28 separate hospital admissions. The STCF-VAD was implanted 1 time in 23 patients (85%), 2 times in 2 patients (7%), and 3 times in 2 patients (7%). Implantation occurred most commonly in the context of congenital heart disease in 14 runs (42.2%), cardiomyopathy in 11 (33%), and after transplant in 6 (18%). The median age at implantation was 1.7 (interquartile range [IQR] 0.1, 4.1) years, and median weight was 8.9 kg (IQR 3.7, 18 kg). Patients were supported for a median duration of 12 days (IQR 6, 23 days) per run; the longest duration was 75 days. Before implantation, 15 runs (45%) were supported by extracorporeal membrane oxygenation (ECMO). After implantation, an oxygenator was required in 20 runs (61%) and continuous renal replacement therapy in 21 (64%). Overall, 7 runs (21%) resulted in weaning for recovery, 14 (42%) converted to a long-term VAD, 4 (12%) resulted in direct transplantation, 3 (9%) were converted to ECMO, and 5 (15%) runs resulted in death on the device or within 1 month after decannulation. The most common complication was bleeding requiring reoperation in 24% of runs. In addition, 18% of runs were associated with neurologic events and 15% with a culture-positive infection. Hospital discharge occurred in 19 of 28 STCF-VAD admissions (67%). In follow-up, with a median duration of 9.2 months (IQR 2.3, 38.3 months), 17 patients (63%) survived. Conclusions: STCF-VADs can successfully bridge most pediatric patients to recovery, long-term device, or transplant, with an acceptable complication profile. Although these devices are designed for short-term support, longer support is possible and may serve as an alternative approach for patients not suitable for the current long-term devices. abstract_id: PUBMED:36150994 Comparing donor and recipient total cardiac volume predicts risk of short-term adverse outcomes following heart transplantation. Introduction: In pediatric heart transplantation, donor:recipient weight ratio (DRWR) has long been the sole metric for size matching. Total cardiac volume (TCV)-based size matching has emerged as a novel method to precisely identify an upper limit of donor organ size for a heart transplant recipient while minimizing the risk of complications from oversizing. The clinical adoption of donor:recipient volume ratio (DRVR) to prevent short-term adverse outcomes of oversizing is unknown. The purpose of this single-center study is to determine the relationship of DRWR and DRVR to the risk of post-operative complications from allograft oversizing. Methods: Recipient TCV was measured from imaging studies and donor TCV was calculated from published TCV prediction models. DRVR was defined as donor TCV divided by recipient TCV. The primary outcome was short-term post-transplant complications (SPTC), a composite outcome of delayed chest closure and prolonged intubation >7 days. A multivariable logistic regression model of DRWR (cubic spline), DRVR (linear) and linear interaction between DRWR and DRVR was used to examine the probability of experiencing a SPTC over follow-up as a function of DRWR and DRVR.
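Before the Results below, it may help to make the two size-matching ratios concrete. The following sketch restates DRWR and DRVR as defined in this abstract and screens a hypothetical donor-recipient pair against the "low-risk zone" the study goes on to report (weight ratio 0.8-2.0 with a TCV ratio below 1.0); the function name and example values are illustrative only:

    # Donor-recipient size-match screen using the ratios defined above;
    # the "low-risk zone" thresholds are those reported in the Results below.
    def size_match_flags(donor_weight_kg: float, recipient_weight_kg: float,
                         donor_tcv_ml: float, recipient_tcv_ml: float) -> dict:
        drwr = donor_weight_kg / recipient_weight_kg   # weight ratio
        drvr = donor_tcv_ml / recipient_tcv_ml         # total cardiac volume ratio
        return {
            "DRWR": round(drwr, 2),
            "DRVR": round(drvr, 2),
            "low_risk_zone": 0.8 <= drwr <= 2.0 and drvr < 1.0,
        }

    # Hypothetical pair: oversized by weight, still acceptable by volume.
    print(size_match_flags(30.0, 16.0, 140.0, 150.0))
    # {'DRWR': 1.88, 'DRVR': 0.93, 'low_risk_zone': True}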
Results: A total of 106 transplant patients' records were reviewed. Risk of the SPTC increased as DRVR increased. Both low and high DRWR were associated with the SPTC. A logistic regression model including DRWR and DRVR predicted SPTC with an area under the ROC curve (AUROC) of 0.74 [95% CI 0.62-0.85]. The predictive model identified a "low-risk zone" of donor-recipient size match between a weight ratio of 0.8 and 2.0 and a TCV ratio less than 1.0. Conclusion: DRVR in combination with DRWR predicts short-term post-transplant adverse events. Donors with a high DRWR may be safely accepted when DRVR is considered. abstract_id: PUBMED:31019175 Institutional Case-Volume and Mortality After Heart Transplantation. The relationship between lower institutional case-volume and higher mortality after complex high-risk procedures has been shown. The aim of this study is to examine the effect of institutional volume on patient outcome after heart transplantation (HT) in the entire Korean population. We analyzed all adult HTs performed in Korea between 2007 and 2016 using data from the National Health Insurance Service. The association between case-volume and in-hospital mortality after HT was analyzed after categorizing hospitals performing HT into low-, medium-, or high-volume centers depending on the number of HTs performed. The effect of case-volume on long-term mortality was also assessed. A total of 833 adult HTs were performed in 17 centers. In-hospital mortality was 3.7% (13/356), 10.1% (38/375), and 18.6% (19/102) in high-, medium-, and low-volume centers, respectively. Medium- and low-volume centers showed increased risk of in-hospital mortality (odds ratio [95% confidence interval]; 2.11 [1.42-3.13] and 3.68 [2.16-2.27], respectively). Long-term survival of up to 10 years was worse in lower-volume centers compared to high-volume centers (P < 0.001). In conclusion, lower case-volume was associated with increased in-hospital mortality and long-term mortality after HT. A minimum case-volume mandate may be required for hospitals performing HT to ensure the best patient outcome and effective resource allocation. Answer: The relationship between the volume of pediatric heart transplants performed at a center and patient outcomes has been a subject of investigation, with several studies suggesting that lower-volume centers may have higher mortality rates. A retrospective data analysis using the United Network for Organ Sharing (UNOS) database for transplants performed on patients younger than 19 years old from 1992 to 2007 found that unadjusted long-term survival decreased with decreasing center volume, and observed postoperative mortality was higher than expected at low-volume centers. High-risk patients, especially those 1 year old or younger or those with congenital heart disease, did poorly at low-volume centers, while similar patients at high-volume centers fared better. The study concluded that the volume of transplants performed at a center significantly impacts outcomes and suggested that regionalization of care could be an option for improving outcomes in pediatric cardiac transplantation (PUBMED:21183849). Another study evaluating the potential association of institutional volume with survival and mortality subsequent to major complications in a modern cohort of pediatric patients after orthotopic heart transplantation (OHT) found that high-volume institutions had a significantly lower rate of postoperative complications than low-volume institutions.
However, this association did not correlate with increased subsequent mortality in low-volume institutions. The study concluded that strategies such as regionalization of care, which are integral to the allocation of allografts in adult transplantation, may not be as relevant to pediatric OHT (PUBMED:26298167). Furthermore, a study examining the effect of institutional volume on patient outcome after heart transplantation in the entire Korean population found that lower case-volume was associated with increased in-hospital mortality and long-term mortality after heart transplantation. The study suggested that a minimum case-volume mandate may be required for hospitals performing heart transplantation to ensure the best patient outcome and effective resource allocation (PUBMED:31019175). In light of these findings, it appears that there is a correlation between lower institutional volume and higher mortality rates in pediatric heart transplantation. Therefore, setting minimum standards for the volume of pediatric heart transplants performed at a center could potentially improve short- and long-term mortality outcomes.
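The volume-outcome analyses in this record revolve around observed-to-expected (O:E) mortality ratios derived from a risk-adjustment model. A compact sketch of that calculation, with a synthetic logistic risk model standing in for the registry models the studies actually fit:

    # Observed-to-expected (O:E) mortality by center-volume tier (sketch).
    # All data and the risk model here are synthetic stand-ins.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000
    X = rng.normal(size=(n, 4))              # hypothetical recipient risk factors
    died = rng.random(n) < 0.08              # observed postoperative deaths
    volume = rng.choice(["low", "medium", "high"], size=n)

    # Expected death probability per patient from the risk model.
    expected = LogisticRegression().fit(X, died).predict_proba(X)[:, 1]

    for tier in ("low", "medium", "high"):
        m = volume == tier
        oe = died[m].sum() / expected[m].sum()   # O:E ratio for the tier
        print(f"{tier:6s} volume: O:E = {oe:.2f}")
    # An O:E ratio above 1 means more deaths than the risk model predicts.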
Instruction: Can we use pulsed fluoroscopy to decrease the radiation dose during video fluoroscopic feeding studies in children? Abstracts: abstract_id: PUBMED:19070700 Can we use pulsed fluoroscopy to decrease the radiation dose during video fluoroscopic feeding studies in children? Aim: To investigate whether it is possible to reduce the radiation dose during video fluoroscopic feeding studies below the current 30 frames/s (continuous fluoroscopy). Methods: Ten consecutive children who had supraglottic penetration while swallowing barium were evaluated as part of a video fluoroscopic feeding study. All fluoroscopic studies were performed with a pulse rate of 30 frames/s. Frame-by-frame analysis was performed of the first episode of penetration in each patient to determine on how many image frames the penetration could be detected. Results: Supraglottic penetration occurred very rapidly. In seven of the 10 patients, full-depth penetration was only seen on one image frame. In no patient was the full-depth penetration seen in more than two imaging frames. Conclusion: Decreasing the fluoroscopic pulse rate cannot be used as a method of decreasing radiation dose during performance of video fluoroscopic studies because it will potentially result in non-detection of episodes of supraglottic penetration of liquid barium. abstract_id: PUBMED:18312969 Optimizing the use of pulsed fluoroscopy to reduce radiation exposure to children. Radiologists desire to keep radiation dose as low as possible. Pulsed fluoroscopy provides an opportunity to lower radiation exposure to children undergoing fluoroscopic studies. To optimize the ability of pulsed fluoroscopy to decrease radiation dose to patients during fluoroscopic studies, radiologists need to understand how pulsed fluoroscopy operates. This report reviews the basic physics knowledge needed by radiologists to best use pulsed fluoroscopy to minimize radiation dose. It explains the paradox that the best video frame-grabbed images are obtained when using the lowest fluoroscopy pulse rate and therefore the lowest fluoroscopy radiation dose. abstract_id: PUBMED:28084814 Feasibility of low-dose digital pulsed video-fluoroscopic swallow exams (VFSE): effects on radiation dose and image quality. Background Fluoroscopy is a frequently used examination in clinical routine without appropriate research evaluation of the latest hardware and software equipment. Purpose To evaluate the feasibility of low-dose pulsed video-fluoroscopic swallowing exams (pVFSE) to reduce dose exposure in patients with swallowing disorders compared to high-resolution video-fluoroscopic swallowing exams (hrVFSE) serving as standard of reference. Material and Methods A phantom study (Alderson-Rando Phantom, 60 thermoluminescent dosimeters [TLD]) was performed for dose measurements. Acquisition parameters were as follows: (i) pVFSE: 76.7 kV, 57 mA, 0.9 Cu mm, pulse rate/s 30; (ii) hrVFSE: 68.0 kV, 362 mA, 0.2 Cu mm, pictures 30/s. The dose area product (DAP) indicated by the detector system and the radiation dose derived from the TLD measurements were analyzed. In a patient study, image quality was assessed qualitatively (5-point Likert scale, 5 = hrVFSE; two independent readers) and quantitatively (SNR) in 35 patients who subsequently underwent contrast-enhanced pVFSE and hrVFSE. Results Phantom measurements showed a per-picture dose reduction by a factor of 25 for pVFSE versus hrVFSE images (0.0025 mGy versus 0.062 mGy).
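As a quick arithmetic check, the two per-picture doses quoted above do reproduce roughly the reported 25-fold reduction:

    # Per-picture dose-reduction factor from the phantom values quoted above.
    hr_dose_mgy = 0.062     # hrVFSE dose per picture (mGy)
    p_dose_mgy = 0.0025     # pVFSE dose per picture (mGy)
    print(hr_dose_mgy / p_dose_mgy)   # 24.8, i.e. roughly a factor of 25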
The DAP (µGy·m²) was 28.0 versus 810.5 (pVFSE versus hrVFSE) for an average examination time of 30 s. Direct and scattered organ doses were significantly lower for pVFSE as compared to hrVFSE (P < 0.05). Image quality was rated 3.9 ± 0.5 for pVFSE versus the hrVFSE standard; depiction of the contrast agent 4.8 ± 0.3; noise 3.6 ± 0.5 (P < 0.05); SNR calculations revealed a relative decrease of 43.9% for pVFSE as compared to hrVFSE. Conclusion Pulsed VFSE is feasible, providing diagnostic image quality at a significant dose reduction as compared to hrVFSE. abstract_id: PUBMED:26482817 Reducing Radiation Dose in Pediatric Diagnostic Fluoroscopy. Purpose: To assess radiation dose in common pediatric diagnostic fluoroscopy procedures and determine the efficacy of dose tracking and dose reduction training to reduce radiation use. Methods: Fluoroscopy time and radiation dose area product (DAP) were recorded for upper GIs (UGI), voiding cystourethrograms (VCUGs), and barium enemas (BEs) during a six-month period. The results were presented to radiologists followed by a 1-hour training session on radiation dose reduction methods. Data were recorded for an additional six months. DAP was normalized to fluoroscopy time, and Wilcoxon testing was used to assess for differences between groups. Results: Data from 1,479 cases (945 pretraining and 530 post-training) from 9 radiologists were collected. No statistically significant difference was found in patient age, proportion of examination types, or fluoroscopy time between the pre- and post-training groups (P ≥ .1), with the exception of a small decrease in median fluoroscopy time for VCUGs (1.0 vs 0.9 minutes, P = .04). For all examination types, a statistically significant decrease was found in the median normalized DAP (P < .05) between pre- and post-training groups. The median (quartiles) for pretraining and post-training normalized DAPs (μGy·m² per minute) were 14.36 (5.00, 38.95) and 6.67 (2.67, 17.09) for UGIs; 13.00 (5.34, 32.71) and 7.16 (2.73, 19.85) for VCUGs; and 33.14 (9.80, 85.26) and 17.55 (7.96, 46.31) for BEs. Conclusions: Radiation dose tracking with feedback, paired with dose reduction training, can reduce radiation dose during diagnostic pediatric fluoroscopic procedures by nearly 50%. abstract_id: PUBMED:8911190 Reduction of radiation dose in pediatric patients using pulsed fluoroscopy. Objective: The purpose of this study was to determine if pulsed fluoroscopy reduces radiation exposure to pediatric patients undergoing conventional fluoroscopy. Subjects And Methods: Four hundred one consecutive patients were nonrandomly divided into pulsed fluoroscopy and conventional fluoroscopy study groups. Two control groups were also assembled: 474 patients evaluated with conventional fluoroscopy before the study and 138 patients evaluated with pulsed fluoroscopy after the study. Results: We found no difference in fluoroscopy times across the groups. Although the number of digital spot films was slightly higher for the pulsed fluoroscopy study group than for the conventional fluoroscopy study group, we found no difference in the number of digital spot films for the pulsed fluoroscopy study group and for the conventional fluoroscopy control group. Furthermore, the difference in the number of digital spot films was also insignificant for the pulsed fluoroscopy control group and the conventional fluoroscopy study group.
The radiation exposure in the pulsed fluoroscopy study group was 50% lower (mean, 0.6 R) than in the conventional fluoroscopy study group. When using pulsed fluoroscopy in the 7.5 pulses-per-second mode, we were able to reduce radiation exposure by 75% relative to conventional fluoroscopy. Conclusion: Pulsed fluoroscopy reduces fluoroscopic radiation exposure to pediatric patients undergoing conventional fluoroscopy. Despite minor image degradation, pulsed fluoroscopy is the technique of choice at our institution. abstract_id: PUBMED:22220241 A Study to Compare the Radiation Absorbed Dose of the C-arm Fluoroscopic Modes. Background: Although many clinicians know that the pulsed and low-dose modes reduce fluoroscopic radiation when performing interventional procedures, few studies have quantified the reduction of radiation-absorbed doses (RADs). The aim of this study is to compare how much the RADs from fluoroscopy are reduced according to the C-arm fluoroscopic modes used. Methods: We measured the RADs in the C-arm fluoroscopic modes including 'conventional mode', 'pulsed mode', 'low-dose mode', and 'pulsed + low-dose mode'. Clinical imaging conditions were simulated using a lead apron instead of a patient. For each mode, one experimenter radiographed the lead apron, which was on the table, 5 consecutive times in the AP view. We regarded this as one set, and a total of 10 sets were done for each mode. Cumulative exposure time, RADs, peak X-ray energy, and current, which were viewed on the monitor, were recorded. Results: Pulsed, low-dose, and pulsed + low-dose modes showed significantly decreased RADs, by 32%, 57%, and 83%, respectively, compared to the conventional mode. The mean cumulative exposure time was significantly lower in the pulsed and pulsed + low-dose modes than in the conventional mode. All modes had similar peak X-ray energy. The mean current was significantly lower in the low-dose and pulsed + low-dose modes than in the conventional mode. Conclusions: The use of the pulsed and low-dose modes together significantly reduced the RADs compared to the conventional mode. Therefore, the proper use of the fluoroscopy and its C-arm modes will reduce the radiation exposure of patients and clinicians. abstract_id: PUBMED:8998321 Initial experiences with pulsed fluoroscopy on a multifunctional fluoroscopic unit Purpose: Comparison of radiation doses in pulsed and continuous fluoroscopy to quantify the dose reduction by pulsed fluoroscopy. Further, the applicability of pulsed fluoroscopy in clinical routine has been evaluated. Materials And Methods: In a human pelvic phantom, the radiation dose (skin entry dose in cGy·cm²) was measured at two pulses per second (pps), 3 pps, 6 pps, 12 pps and continuous fluoroscopy mode, respectively, using image-intensifier input fields of 38 cm, 25 cm, and 17 cm. 300 examinations were carried out, and the results of the different fluoroscopy modes were registered. Results: Dose reduction depends on the image-intensifier input field. Compared with continuous fluoroscopy, the radiation dose can be reduced to a minimum of 51% with 12 pps fluoroscopy, to 40% with 6 pps, to 20% with 3 pps, and to a minimum of 14.5% with 2 pps. Clinical routine has shown that 78% of all examinations can be performed in the 2 or 3 pps fluoroscopy mode. In 12.7% of the cases pulsed fluoroscopy of diverse frequencies was used, in an additional 2% combined with continuous fluoroscopy.
Continuous fluoroscopy alone was employed in 2% of the cases. Conclusions: Using pulsed fluoroscopy, an 80% reduction of the radiation dose compared to continuous fluoroscopy is possible. About 96% of all examinations can be performed with pulsed fluoroscopy at various pulse rates and without using continuous fluoroscopy. abstract_id: PUBMED:32747309 Fluoroscopic imaging optimization in children during percutaneous nephrolithotripsy. Introduction And Objectives: Radiation protection management recommends radiation exposures that are as low as reasonably achievable (ALARA), while still maintaining image quality. The aim of the study is to compare radiation exposure during pediatric percutaneous nephrolithotomy (PCNL) before and after implementation of a strategy for optimization of fluoroscopic imaging, by measuring the Dose Area Product (DAP) and the Fluoroscopy time (FT), and to study its effect on surgical outcomes. Patients & Methods: We prospectively observed 56 children (group 1) undergoing PCNL for kidney stones in whom a radiation dose reduction strategy was adopted. The strategy comprised several intraoperative measures: optimizing position by keeping the fluoroscopy table as far from the X-ray tube as possible and the image intensifier close to the patient, avoiding the use of fluoroscopy for positioning, use of pulsed mode with last-image-hold technique, beam collimation, and use of a designated fluoroscopy technician. Outcomes were compared to those in 42 children (group 2) before implementing the dose reduction strategy. Results: DAP was decreased by 44%, from 2.46 mGy·m² in group 2 to 1.38 mGy·m² in group 1 (p < 0.04). Total fluoroscopy time was significantly reduced by 55%, from 100.8 s in group 2 to 45 s in group 1 (p < 0.002), after protocol implementation with very little loss of image quality. Conclusions: Radiation exposure in children undergoing PCNL can be reduced significantly after optimization of fluoroscopy imaging. A reduced radiation protocol did not increase surgical complexity, operative time, or complication rates while reducing radiation exposure in a population vulnerable to its hazardous effects. abstract_id: PUBMED:28641969 Determining 3D Kinematics of the Hip Using Video Fluoroscopy: Guidelines for Balancing Radiation Dose and Registration Accuracy. Background: Video fluoroscopy is a technique currently used to retrieve the in vivo three-dimensional kinematics of human joints during activities of daily living. Minimization of the radiation dose absorbed by the subject during the measurement is a priority and has not been thoroughly addressed so far. This issue is critical for the motion analysis of the hip joint, because of the proximity of the gonads. The aims of this study were to determine the x-ray voltage and the irradiation angle that minimize the effective dose and to achieve the best compromise between delivered dose and accuracy in motion retrieval. Methods: Effective dose for a fluoroscopic study of the hip was estimated by means of Monte Carlo simulations and dosimetry measurements. Accuracy in pose retrieval for the different viewing angles was evaluated by registration of simulated radiographs of a hip prosthesis during a prescribed virtual motion. Results: Absorbed dose can be minimized to about one-sixth of the maximum estimated values by irradiating at the optimal angle of 45° from the posterior side and by operating at 80 kV.
At this angle, accuracy in retrieval of internal-external rotation is poorer compared with the other viewing angles. Conclusion: The irradiation angle that minimizes the delivered dose does not necessarily correspond to the optimal angle for the accuracy in pose retrieval, for all rotations. For some applications, single-plane fluoroscopy may be a valid lower-dose alternative to the dual-plane methods, despite their better accuracy. abstract_id: PUBMED:27028533 Radiation exposure contribution of the scout abdomen radiograph in common pediatric fluoroscopic procedures. Background: Contrast enema, voiding cystourethrography and upper gastrointestinal studies are the most common fluoroscopic procedures in children. Scout abdomen radiographs have been routinely obtained prior to fluoroscopy and add to the radiation exposure from these procedures. Elimination of unnecessary routine scout radiographs in select studies might significantly reduce radiation exposure to children and improve the overall benefit-to-risk ratio of these fluoroscopic procedures. Objective: To determine the radiation exposure contribution of the preliminary/scout abdomen radiographs with respect to the radiation exposure of the total procedure. Materials And Methods: We retrospectively collected demographic information and radiation exposure values of dose area product (in Gy·cm²) and entrance air kerma (in mGy) - initially for the scout abdomen radiographs done prior to fluoroscopy and subsequently the total procedural radiation values (the combined values of the scout radiograph and fluoroscopic radiation exposure) - in children who underwent contrast enemas, voiding cystourethrograms and upper gastrointestinal studies in a 4-month period. The radiation parameters, including fluoroscopy time, dose area product and entrance air kerma, were available in the log book maintained in the fluoroscopy suite. Fluoroscopy procedures were performed on a single fluoroscopy machine using a four-frames-per-second pulse rate and other radiation-minimizing techniques. Usage of the grid to obtain scout radiographs was also recorded. The proportion of radiation exposure from the scout radiograph relative to that of the total procedure was calculated by dividing the individual scout parameters by the total procedural values and multiplying by 100 to express these values as a percentage. We calculated mean, median and range and performed statistical analysis of the data. Results: A total of 151 procedures performed on 71 males and 80 females qualified for the study. The age range of the patients was 2 days to 18 years, with a mean of nearly 3.5 years (40 months) and median of 15 months. There were 63 upper gastrointestinal studies, 65 voiding cystourethrography studies and 23 contrast enema studies. The fluoroscopy time for all procedures combined ranged from 0.1 min (6 s) to 2 min, with mean and median values of 0.4 min and 0.3 min, respectively. The fractional radiation exposure contribution for the dose area product of scout abdomen radiograph to the total procedure ranged from 4% to 98%, with mean and median values of 51% and 49%, respectively. The fractional contribution of the scout radiograph to the total procedure for the entrance air kerma values ranged from 6% to 97%, with mean and median values of 29% and 26%, respectively. There was a significant negative correlation (P<0.001) between fluoroscopy radiation time and the proportion of radiation parameters of scout radiograph to total procedural values.
Conclusion: Scout radiographs can contribute a significant proportion (median values of approximately 50% for the dose area product and 26% for the entrance air kerma) of radiation exposure in common fluoroscopy procedures in children. Answer: Pulsed fluoroscopy has been investigated as a method to decrease radiation exposure during fluoroscopic procedures in children. The use of pulsed fluoroscopy can significantly reduce the radiation dose compared to continuous fluoroscopy. For instance, pulsed fluoroscopy has been shown to reduce radiation exposure by 50% in one study and up to 75% when using a pulse rate of 7.5 pulses per second (PUBMED:8911190). Another study found that using pulsed and low-dose modes together could reduce the radiation-absorbed doses (RADs) by 83% compared to the conventional mode (PUBMED:22220241). Additionally, radiation dose tracking with feedback and dose reduction training can reduce radiation dose during diagnostic pediatric fluoroscopic procedures by nearly 50% (PUBMED:26482817). However, when it comes to video fluoroscopic feeding studies in children, specifically looking at supraglottic penetration during swallowing, decreasing the fluoroscopic pulse rate may not be a viable option. A study found that supraglottic penetration occurred very rapidly, and in most patients, full-depth penetration was only seen on one image frame. Therefore, decreasing the pulse rate could potentially result in non-detection of episodes of supraglottic penetration of liquid barium (PUBMED:19070700). This suggests that while pulsed fluoroscopy can be effective in reducing radiation dose in many fluoroscopic procedures, it may not be suitable for certain applications where rapid events need to be captured, such as in video fluoroscopic feeding studies. In conclusion, while pulsed fluoroscopy can be used to decrease radiation dose in many pediatric fluoroscopic procedures, it may not be appropriate for video fluoroscopic feeding studies where detecting rapid events like supraglottic penetration is crucial.
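A recurring computation in the dose audits above is normalizing the dose-area product (DAP) to fluoroscopy time before comparing groups. A minimal sketch of that normalization and the resulting percent reduction, using made-up numbers rather than the published data:

    # Normalized DAP (µGy·m² per minute) and percent reduction (sketch).
    def normalized_dap(dap_ugy_m2: float, fluoro_time_min: float) -> float:
        return dap_ugy_m2 / fluoro_time_min

    pre = normalized_dap(30.0, 2.5)    # hypothetical pre-training case: 12.0
    post = normalized_dap(12.0, 2.0)   # hypothetical post-training case: 6.0
    print(f"reduction: {100 * (pre - post) / pre:.0f}%")   # 50%

Normalizing by fluoroscopy time, as in the dose-tracking study above, separates dose-rate improvements (pulse rate, collimation, grid use) from simply fluoroscoping for less time.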
Instruction: Provision of continence pads by the continence services in Great Britain: fair all round? Abstracts: abstract_id: PUBMED:18794703 Provision of continence pads by the continence services in Great Britain: fair all round? Purpose: The UK Department of Health guidelines for continence services recommended that maintenance products should be available to anyone in quantities appropriate to the individuals' needs and to children above the age of 4 years. Despite this, there is much anecdotal evidence of rationing products. The aim of this study was to examine to what extent services limited pad supplies and what criteria were in operation to govern the supply. Methods: A questionnaire exploring the current practice in conferring eligibility and prescribing and providing continence products was developed and distributed to all continence services in Great Britain, using a database of services. Data from all questionnaires returned by continence services on the Continence Foundation database were analyzed, and data were also extracted from the 2006 National Audit of Continence Care for Older People. Results: Few continence services employed clear and detailed criteria for issuing continence pads and, when present, written criteria used arbitrary cutoffs for measuring incontinence severity. Rationing was widespread, and the most common adult pad allowance was 4 per day. In addition, 59% of continence services provided pads to children below the recommended age of 4 years. Conclusions: Distribution of continence pads was based upon arbitrary criteria. We recommend the development of a single assessment tool with clear criteria for provision of continence products throughout the United Kingdom. We also recommend that criteria limiting the number of continence pads supplied on a daily basis are transparent and explicit. abstract_id: PUBMED:31597068 Meeting report: optimising procurement of continence pads. A group of continence care experts attended a round table to identify best practice for awarding a contract for disposable continence products. Here, Tracy Cowan, JWC Consultant Editor, describes the outcomes. abstract_id: PUBMED:34336238 Prospective evaluation of urinary continence after laparoscopic radical prostatectomy using a validated questionnaire and daily pad use assessment: which definition is more relevant to the patient's perception of recovery? Introduction: No standard definition for urinary continence after radical prostatectomy exists, and there are discrepancies in continence rates reported in the literature, as well as rates reported by physicians and patients. Therefore, we used two tools, a validated questionnaire and daily pad use, to identify the criteria that best reflect patients' perceptions of continence recovery. Material And Methods: This is a prospective study of 74 patients who underwent nerve-sparing laparoscopic radical prostatectomy. Continence was assessed monthly for 3 months following catheter removal using the International Consultation on Incontinence Questionnaire Short Form (ICIQ-UI SF) and by recording the number of pads the patients used on a daily basis. According to daily pad use, patients were categorized as either dry (no-pads), socially continent (0-1 pad) or incontinent (≥2 pads). Results: Seventy-four patients were enrolled with a mean age of 64.3 (±5.6) years.
There were no significant differences in continence rates using scores from the International Consultation on Incontinence Questionnaire-Short Form (ICIQ-UI SF) or no-pad use (29.7% vs 32.4%, 45.9% vs 48.6% and 54.1% vs 54.1%, at the 1-, 2- and 3-month follow-ups, respectively). However, the number of socially continent patients was significantly higher (59.5%, 70.3% and 81.1%, at the 1-, 2- and 3-month follow-ups, respectively [p < 0.001]). Conclusions: The totally dry definition better reflected patients' perceptions than the socially continent definition for the evaluation of continence recovery following laparoscopic radical prostatectomy. To avoid discrepancies, we recommend the use of a validated questionnaire as well as the no-pad definition to standardize the reporting of continence rates after radical prostatectomy. abstract_id: PUBMED:37655850 Holistically sustainable continence care: A working definition, the case of single-use absorbent hygiene products (AHPs) and the need for ecosystems thinking. Incontinence is a common health issue that affects hundreds of millions of people across the world. The solution is often to manage the condition with different kinds of single-use continence technologies, such as incontinence pads and other absorbent hygiene products (AHPs). Throughout their life cycle, these fossil-based products form a remarkable yet inadequately addressed ecological burden in society, contributing to global warming and other environmental degradation. The products are a necessity for their users' wellbeing. When looking for sustainability transitions in this field, a focus on individual consumer choice is thus inadequate, and unfair to the users. The industry is already seeking to decrease its carbon footprint. Yet, to tackle the environmental impact of single-use continence products, societies and health systems at large must also start taking continence seriously. Arguing that continence-aware societies are more sustainable societies, we devise in this article a society-wide working definition for holistically sustainable continence care. Involving dimensions of social, ecological and economic sustainability, the concept draws attention to the wide range of technologies, infrastructures and care practices that emerge around populations' continence needs. Holistically sustainable continence care is thus not only about AHPs. However, in this article, we examine holistically sustainable continence care through the case of AHPs. We review what is known about the environmental impact of AHPs, and discuss the impact of care practices on aggregate material usage, the future of biobased and degradable incontinence pads, and questions of waste management and the circular economy. The case of AHPs shows how holistically sustainable continence care is a wider question than technological product development. At the end of the article, we envision an ecosystem where technologies, infrastructures and practices of holistically sustainable continence care can flourish, beyond the focus on singular technologies. abstract_id: PUBMED:16052946 Choosing and using disposable body-worn continence pads. Disposable, body-worn pads are the product most commonly chosen to contain and absorb urine and faeces (Pomfret, 2000). The cost to the NHS of supplying continence pads has been estimated at 80 million pounds per annum (Euromonitor, 1999) and is a huge financial burden on local services.
abstract_id: PUBMED:28660675 Objectively improving appropriateness of absorbent products provision to patients with urinary incontinence: The DIAPPER study. Aims: To objectively assess and enhance the appropriateness of continence product provision to sufferers from urinary incontinence (UI) managed with containment strategies. Methods: Incontinent patients of five Italian continence care services were included in this industry-supported study from 01/2012 to 03/2016. All patients/carers were invited to perform a 48-h home-based pad test and to fill in a diary. The primary outcome was product appropriateness, defined as the use of a pad with maximum absorbent capacity (MAC) from 30% to 50% higher than the individually measured urine load. Pad provision was corrected accordingly. Meaningful factors affecting product appropriateness and patients' satisfaction with the new products were also assessed. Results: The study included 14,493 subjects (mean age 78 years; 26% males, 74% females) who used a total of 98,362 pads during the study days. Sixty percent of the products were found to be inappropriate. In most cases (75%), products were inappropriate because they were too large. Age and pad weight gain, followed by gender, body weight, waist circumference, level of autonomy and mobility, pad wearing time, skin health status, and health district, were independently associated with the propensity for inappropriateness. After correction of the product prescription, a significant reduction (-31%) in the use of the largest products was observed. At the 6-month evaluation, 88% of evaluable participants were satisfied with the new prescription. Conclusions: Most patients are provided with inappropriate containment products. The use of the 48-h pad test allows the appropriateness of product provision to be improved on an individual basis. abstract_id: PUBMED:33356929 Addressing and acting on individual ideas on continence care. Continence care should be individually delivered with dignity, decorum and distinction in all diverse contexts and circumstances. From the dependency of childhood to ultimately the end of life, continence care is essential for all, no matter what the setting is: at home, sheltered structures, community care, residential settings and nursing homes. Person-centred care is central to healthcare policies and procedures, to the provision of personalised consultation, and to developing a collaborative partnership approach to continence assessment, promotion, and management. abstract_id: PUBMED:36245199 Age-stratified continence outcomes of robotic-assisted radical prostatectomy. Introduction: Incontinence after robot-assisted radical prostatectomy (RARP) significantly impacts quality of life. This study aims to compare the age-stratified continence outcomes in Canadian men undergoing RARP. Materials And Methods: A retrospective review was performed on a prospectively maintained database of 1737 patients who underwent RARP for localized prostate cancer between 2007 and 2019. Patients were stratified into five groups based on age: group 1, ≤54 years (n = 245); group 2, 55-59 years (n = 302); group 3, 60-64 years (n = 386); group 4, 65-69 years (n = 348); and group 5, ≥70 years (n = 116). Functional outcomes were assessed up to 36 months. Log-rank and multivariable Cox regression analyses were performed to compare the time to recovery of pad-free continence by age group. Results: Continence rates of groups 1 to 5 were respectively 90.2%, 79.1%, 80.4%, 71.4%, and 59.8% at 1-year follow-up (p < 0.001).
After 3 years, groups 1 through 5 had continence rates of 97%, 91.7%, 89.3%, 81.4%, and 77.6%, respectively (p < 0.001). Median time to recovery of continence was 58, 135, 140, 152 and 228 days, respectively. Among men who remained incontinent, older patients consistently required more pads. In the Cox proportional hazards model, groups 2, 3, 4 and 5 were respectively 33% (p < 0.001), 34% (p < 0.001), 33% (p = 0.001), and 41% (p = 0.005) more likely to remain incontinent compared to group 1. Conclusions: Age is associated with significantly lower rates of continence recovery, longer time to recovery of continence, and more severe cases of incontinence after RARP. abstract_id: PUBMED:35705988 Professional perspectives on impacts, benefits and disadvantages of changes made to community continence services during the COVID-19 pandemic: findings from the EPICCC-19 national survey. Background: The COVID-19 pandemic required changes to the organisation and delivery of NHS community continence services, which assess and treat adults and children experiencing bladder and bowel difficulties. Although strong evidence exists for the physical and mental health benefits, improved quality of life, and health service efficiencies resulting from optimally organised community-based continence services, recent audits identified pre-pandemic pressures on these services. The aim of this study was to explore professional perceptions of changes made to community continence services due to the COVID-19 pandemic and consequent impacts on practice, care provision and patient experience. Methods: Online survey of 65 community continence services in England. Thematic analysis using constant comparison of open-ended questions. Frequency counts of closed-ended questions. Results: Sixty-five services across 34 Sustainability and Transformation Partnership areas responded to the survey. Use of remote/virtual consultations enabled continuation of continence care, but aspects of 'usual' assessment (examinations, tests) could not be completed within a remote assessment, requiring professionals to decide which patients needed subsequent in-person appointments. Remote appointments could increase service capacity due to their time efficiency, were favoured by some patients for their convenience, and could increase access to care for others. However, the limited ability to complete aspects of usual assessment raised concerns that diagnoses could be missed or inappropriate care initiated. The format also restricted opportunities to identify non-verbal cues that could inform professional interpretation, and made building a therapeutic relationship between professional and patient more challenging. Remote appointments also posed access challenges for some patient groups. A third of participating services had experienced staff redeployment, resulting in long wait times and some patients being left without care, or reported additional caseload, which had delayed care provision for patients with continence issues. Participants perceived continence care to have been deprioritised, and more generally undervalued, and called for greater recognition of the impact of continence care. Conclusions: Remote appointments offer efficiency and convenience. However, 'in-person' approaches are highly valued for optimum quality, patient-centred continence care, and good team relationships. Failure to restore redeployed continence staff will diminish patient health and quality of life, with associated costs to the NHS.
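The appropriateness rule used in the DIAPPER study above (PUBMED:28660675) is concrete enough to state in code. The following is a minimal illustrative sketch, not code from the study itself: the function name and the example volumes are assumptions introduced here, and the 48-h pad-test measurement that produces the urine load is taken as given.

```python
def pad_is_appropriate(mac_ml: float, urine_load_ml: float) -> bool:
    """DIAPPER-style appropriateness check (illustrative sketch).

    A pad is deemed appropriate when its maximum absorbent capacity (MAC)
    is 30% to 50% higher than the urine load measured in a 48-h pad test.
    """
    return 1.30 * urine_load_ml <= mac_ml <= 1.50 * urine_load_ml

# Hypothetical example: a measured urine load of 400 mL per pad.
print(pad_is_appropriate(mac_ml=560.0, urine_load_ml=400.0))  # True: MAC is 1.4x the load
print(pad_is_appropriate(mac_ml=900.0, urine_load_ml=400.0))  # False: too large (2.25x)
```

The second call mirrors the study's main finding that 75% of inappropriate products were inappropriate because they were too large.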
abstract_id: PUBMED:30230139 24/7 usage of continence pads and quality of life impairment in women with urinary incontinence. Aims: To compare quality of life (QoL) in women with urinary incontinence (UI) using continence pads during the day versus all day and night; to identify risk factors for 24-hour pad use; and to calculate quality-adjusted life years (QALY). Methods: A cross-sectional study in 331 women with lower urinary tract symptoms referred for urogynaecologic examination was conducted. Main outcome measures were the scores of the King's Health Questionnaire (KHQ), clinical data, and KHQ-derived utility values. Results: A total of 270 women with UI were recruited: 176 (57.3%) using continence pads only during the day (group I) and 94 (30.6%) for 24 hours (group II). The groups did not differ in terms of age, menopause, parity, type of UI, stage of POP-Q, and percentage of sexually active subjects. Group II had significantly higher body mass index (BMI) and lower education than group I (P < 0.05). QoL was significantly worse in group II in the KHQ Global score and in all domains except one (General Health). Risk factors for 24-hour pad use were as follows: BMI ≥ 30 vs BMI 25-30 (OR = 2.02 (1.09-3.73), P = 0.037), higher scores in KHQ Severity measures (OR = 1.03 (1.02-1.04), P < 0.001) and KHQ Global score (OR = 1.03 (1.02-1.05), P < 0.001), and primary compared to secondary (OR = 0.4 (0.19-0.84)) or higher education (OR = 0.41 (0.18-0.94), P < 0.05). Annual QALY was significantly lower in group II (0.9288 ± 0.03 vs 0.9432 ± 0.03, P < 0.001). Conclusion: One-third of women with UI used continence pads for 24 hours. Among these patients, QoL and QALY were lower than in women using continence pads only during the day. The KHQ Severity measures domain was an independent predictor of 24-hour usage of continence pads. Answer: The provision of continence pads by continence services in Great Britain has been a subject of concern, with evidence suggesting that the distribution of these essential products is not always fair or adequate. A study examining the extent to which services limited pad supplies found that few continence services employed clear and detailed criteria for issuing continence pads. When criteria were present, they often used arbitrary cutoffs for measuring incontinence severity. Rationing was widespread, with the most common adult pad allowance being 4 per day. Additionally, despite guidelines recommending that pads be provided to children above the age of 4 years, 59% of continence services were providing pads to children below this recommended age (PUBMED:18794703). The arbitrary criteria and rationing practices suggest a need for the development of a single assessment tool with clear criteria for the provision of continence products throughout the United Kingdom. It is also recommended that criteria limiting the number of continence pads supplied on a daily basis be transparent and explicit (PUBMED:18794703). Moreover, the cost of supplying continence pads is a significant financial burden on local services, with an estimated cost to the NHS of 80 million pounds per annum (PUBMED:16052946). This financial pressure may contribute to the rationing practices observed. To address these issues, a round table of continence care experts identified best practices for awarding contracts for disposable continence products, aiming to optimize procurement (PUBMED:31597068).
Additionally, the DIAPPER study highlighted that most patients are provided with inappropriate containment products, and that the use of a 48-hour pad test can improve the appropriateness of product provision on an individual basis (PUBMED:28660675). In conclusion, while continence services in Great Britain are tasked with providing necessary products to individuals with incontinence, the fairness and adequacy of these provisions are questionable. There is a need for standardized, transparent criteria and practices to ensure that all individuals receive the continence care they require.
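As a side note on the quality-of-life figures cited above (PUBMED:30230139), the reported annual QALY values follow from a simple utility-times-time calculation. The sketch below only reproduces that arithmetic from the published group means; the variable names are introduced here, and the KHQ-to-utility mapping itself is study-specific and not shown.

```python
# Annual QALY = health-state utility (0..1) x time spent in that state (years).
utility_day_only = 0.9432  # group I: pads during the day only (KHQ-derived utility)
utility_24_hour = 0.9288   # group II: pads worn 24 hours a day
years = 1.0

qaly_day_only = utility_day_only * years
qaly_24_hour = utility_24_hour * years
print(f"QALY loss with 24-hour pad use: {qaly_day_only - qaly_24_hour:.4f}")  # 0.0144
```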
Instruction: Is prolonged survival possible for patients with supraclavicular node metastases in non-small cell lung cancer treated with chemoradiotherapy? Abstracts: abstract_id: PUBMED:28415687 The prognostic impact of supraclavicular lymph node in N3-IIIB stage non-small cell lung cancer patients treated with definitive concurrent chemo-radiotherapy. Background: This study aimed to investigate the prognostic impact of supraclavicular lymph node (SCN) metastasis in patients who were treated with definitive chemoradiotherapy for N3-IIIB stage non-small cell lung cancer (NSCLC). Results: The 2- and 5-year overall survival (OS) rates were 57.3% and 35.7% in patients without SCN metastasis and 56.4% and 26.7% in patients with SCN metastasis, respectively. The median OS was 34 months in both groups. There was no significant difference in OS between the two groups (p = 0.679). The 2- and 5-year progression-free survival (PFS) rates were 24.1% and 12.6% in patients without SCN metastasis and 18.0% and 16.0% in patients with SCN metastasis, respectively. Patients without SCN metastasis had slightly longer median PFS (10 months vs. 8 months), but the difference was not statistically significant (p = 0.223). In multivariate analysis, SCN metastasis was not a significant factor for OS (p = 0.391) and PFS (p = 0.149). Materials And Methods: This retrospective analysis included 204 consecutive patients who were treated with chemoradiotherapy for N3-IIIB stage NSCLC between May 2003 and December 2012. A median RT dose of 66 Gy was administered over 6.5 weeks. Of these, 119 patients (58.3%) had SCN metastasis and 85 (41.7%) had another type of N3 disease: mediastinal N3 nodes in 84 patients (98.8%) and contralateral hilar node in one (1.2%). The patients were divided into two groups according to SCN metastasis. Conclusions: SCN metastasis does not compromise treatment outcomes compared to other mediastinal metastases in the setting of definitive chemoradiotherapy. abstract_id: PUBMED:10386642 Is prolonged survival possible for patients with supraclavicular node metastases in non-small cell lung cancer treated with chemoradiotherapy?: Analysis of the Radiation Therapy Oncology Group experience. Purpose: To determine if patients with non-small cell lung carcinoma (NSCLC) and positive supraclavicular nodes (SN+) have a similar outcome to other patients with Stage IIIB NSCLC (SN-) when treated with modern chemoradiotherapy. Methods And Materials: Using the Radiation Therapy Oncology Group (RTOG) database, data were retrospectively analyzed from five RTOG trials studying chemoradiotherapy for NSCLC: 88-04, 88-08 (chemo-RT arm), 90-15, 91-06, 92-04. Comparisons were made between the SN+ and SN- subgroups with respect to overall survival, progression-free survival (PFS), and metastases-free survival (MFS) using the log-rank test. Cox multivariate proportional hazards regression analysis was used to determine the effect of several potential confounding variables, including histology (squamous vs. nonsquamous), age (>60 vs. ≤60), Karnofsky Performance Status (KPS) (<90 vs. ≥90), weight loss (≥5% vs. <5%), and gender. Results: A total of 256 Stage IIIB patients were identified, of whom 47 had supraclavicular nodes (SN+) and 209 did not (SN-). Statistically significantly more SN+ patients had nonsquamous histology (p = 0.05); otherwise, known prognostic factors were well balanced. The median survival for SN+ patients was 16.2 months, vs. 15.6 months for SN- patients.
The 4-year actuarial survival rates were 21% and 16% for SN+ and SN- patients respectively (p = 0.44). There was no statistically significant difference in the 4-year PFS rates (19% vs. 14%, p = 0.48). The Cox analysis did not show the presence or absence of supraclavicular nodal disease to be a prognostic factor for survival, MFS, or PFS. The only statistically significant factor on multivariate analysis was gender, with males having a 40% greater risk of mortality than females (p = 0.03). There were no clinically significant differences in toxicity when comparing SN+ vs. SN- patients. Among the 47 SN+ patients, there were no reported cases of brachial plexopathy or other ≥ Grade 2 late neurologic toxicity. Conclusions: When treated with modern chemoradiotherapy, the outcome for patients with supraclavicular metastases appears to be similar to that of other Stage IIIB patients. SN+ patients should continue to be enrolled in trials studying aggressive chemoradiotherapy regimens for locally advanced NSCLC. abstract_id: PUBMED:31329602 Significance of overall concurrent chemoradiotherapy duration on survival outcomes of stage IIIB/C non-small-cell lung carcinoma patients: Analysis of 956 patients. Background: To investigate the detrimental effects of prolonged overall radiotherapy duration (ORTD) on survival outcomes of stage IIIB/C NSCLC patients treated with concurrent chemoradiotherapy (C-CRT). Methods: The study cohort consisted of 956 patients who underwent C-CRT for stage IIIB/C NSCLC. The primary endpoint was the association between ORTD and overall survival (OS), with locoregional progression-free survival (LRPFS) and PFS comprising the secondary endpoints. Receiver operating characteristic (ROC) curve analysis was utilized to identify the ORTD cut-off associated with survival outcomes. A multivariate Cox model was utilized to identify the independent associates of survival outcomes. Results: The ROC curve analysis exhibited significance at an ORTD cut-off of 49 days, which dichotomized patients into ORTD <50 versus ORTD ≥50 days groups for OS [area under the curve (AUC): 82.8%; sensitivity: 81.1%; specificity: 74.8%], LRPFS (AUC: 91.9%; sensitivity: 90.6%; specificity: 76.3%), and PFS (AUC: 76.1%; sensitivity: 72.4%; specificity: 68.2%), respectively. Accordingly, the ORTD ≥50 days group had significantly shorter median OS (P < 0.001), LRPFS (P < 0.001), and PFS (P < 0.001), and lower 10-year actuarial locoregional control (P < 0.001) and distant metastases-free (P < 0.011) rates than the ORTD <50 days group. The ORTD retained its significant association with survival outcomes in multivariate analyses independent of the other favorable covariates (P < 0.001 for OS, LRPFS, and PFS): stage IIIB disease (versus IIIC), lymph node bulk <2 cm (versus ≥2 cm), and 2-3 chemotherapy cycles (versus 1). The higher sensitivity for LRPFS (90.6%) than PFS (72.4%) on ROC curve analysis suggested the prolonged ORTD-induced decrements in locoregional control rates as the major cause of the poor survival outcomes. Conclusions: ORTD ≥50 days was associated with significantly poorer OS, LRPFS and PFS outcomes, with reduced locoregional control rates appearing to be the main cause. abstract_id: PUBMED:25687865 Effect of Radiation Therapy Techniques on Outcome in N3-positive IIIB Non-small Cell Lung Cancer Treated with Concurrent Chemoradiotherapy.
Purpose: This study was conducted to evaluate clinical outcomes following definitive concurrent chemoradiotherapy (CCRT) for patients with N3-positive stage IIIB (N3-IIIB) non-small cell lung cancer (NSCLC), with a focus on radiation therapy (RT) techniques. Materials And Methods: From May 2010 to November 2012, 77 patients with N3-IIIB NSCLC received definitive CCRT (median, 66 Gy). RT techniques were selected individually based on estimated lung toxicity, with 3-dimensional conformal RT (3D-CRT) and intensity-modulated RT (IMRT) delivered to 48 (62.3%) and 29 (37.7%) patients, respectively. Weekly docetaxel/paclitaxel plus cisplatin (67, 87.0%) was the most common concurrent chemotherapy regimen. Results: The median age and clinical target volume (CTV) were 60 years and 288.0 cm(3), respectively. Patients receiving IMRT had greater disease extent in terms of supraclavicular lymph node (SCN) involvement and CTV ≥ 300 cm(3). The median follow-up time was 21.7 months. Forty-five patients (58.4%) experienced disease progression, most frequently distant metastasis (39, 50.6%). In-field locoregional control, progression-free survival (PFS), and overall survival (OS) rates at 2 years were 87.9%, 38.7%, and 75.2%, respectively. Although locoregional control was similar between RT techniques, patients receiving IMRT had worse PFS and OS, and SCN metastases from a lower lobe primary tumor and CTV ≥ 300 cm(3) were associated with worse OS. The incidence and severity of toxicities did not differ significantly between RT techniques. Conclusion: IMRT could lead to similar locoregional control and toxicity, while encompassing a greater disease extent than 3D-CRT. The decision to apply IMRT should be made carefully after considering oncologic outcomes associated with greater disease extent and cost. abstract_id: PUBMED:10755877 Is prolonged survival possible for patients with supraclavicular node metastases in NSCLC treated with chemoradiotherapy? IJROBP 1999;44(4):847-853. N/A abstract_id: PUBMED:32850456 Supraclavicular Recurrence in Completely Resected (y)pN2 Non-Small Cell Lung Cancer: Implications for Postoperative Radiotherapy. Background: The clinical value and delineation of clinical target volume (CTV) of postoperative radiotherapy (PORT) in completely resected (y)pN2 non-small cell lung cancer (NSCLC) remain controversial. Investigations specifically focusing on the cumulative incidence and prognostic significance of initial disease recurrence at the supraclavicular region (SCR) in this disease population are seldom reported. Methods: Consecutive patients with curatively resected (y)pN2 NSCLC who received adjuvant chemotherapy from January 2013 to December 2018 at our cancer center were retrospectively examined. Disease recurrence at the surgical margin, ipsilateral hilum, and/or mediastinum was defined as loco-regional recurrence (LRR). Disease recurrence beyond LRR and SCR was defined as distant metastasis (DM). Overall survival (OS1 and OS2) was calculated from surgery and disease recurrence to death of any cause, in the entire cohort and in patients with recurrent disease, respectively. Results: Among the 311 patients enrolled, PORT without elective supraclavicular nodal irradiation (ESRT) was performed in 94 patients and neoadjuvant chemotherapy was administered in 31 patients. With a median follow-up of 26 months, 203 patients developed recurrent disease, including 27 SCRs, among which 16 were without DM and 22 involved the ipsilateral supraclavicular region.
The 1-, 3-, and 5-year cumulative incidences of SCR were 6.53%, 13.0%, and 24.7%, respectively. With DM chosen as a competing event, cN2, ypN2, not receiving lobectomy, and negative expression of CK7 were significantly associated with SCR in the univariate competing risk analysis, while ypN2 was identified as the only independent risk factor for SCR (p = 0.012). PORT significantly reduced LRR (p = 0.031) and prolonged OS1 (p = 0.018), but did not impact SCR (p = 0.254). Pattern-of-failure analyses indicated that the majority of LRRs developed within the actuarial or virtual CTV of PORT, and 15 of the 22 ipsilateral SCRs could be covered by the virtual CTV of the proposed ESRT. In terms of OS2, patients who developed SCR but without DM had an intermediate prognosis, compared with those who had DM (p = 0.009) and those who had only LRR (p = 0.048). Conclusions: SCR is not uncommon and has important prognostic significance in completely resected (y)pN2 NSCLC. The clinical value of PORT and ESRT in such patients needs to be further investigated. abstract_id: PUBMED:28732516 Application of the new 8th TNM staging system for non-small cell lung cancer: treated with curative concurrent chemoradiotherapy. Background: The eighth tumor, node, metastasis (TNM) staging system (8-TNM) for non-small cell lung cancer (NSCLC) was newly released in 2015. This system had limitations because most patients included in the analysis were treated with surgery. Therefore, it might be difficult to reflect the prognosis of patients treated with curative concurrent chemoradiotherapy (CCRT). The purpose of this study was to investigate the clinical impact of the newly published 8-TNM compared to the current seventh TNM staging system (7-TNM) for locally advanced NSCLC patients treated with CCRT. Methods: The new 8-TNM was applied to 64 patients with locally advanced NSCLC who were treated with CCRT from 2010 to 2015. Changes in T category and stage group by 8-TNM were recorded and patterns of change were evaluated. Survival was analyzed according to T category, N category, and stage group in each staging system, respectively. Results: Among the total of 64 patients, 38 (59.4%) patients showed a change in T category while 22 (34.4%) patients showed a change in stage group using 8-TNM compared to 7-TNM. Survival curves were more clearly separated by 8-TNM stage group (p = 0.001) than by 7-TNM stage group (p > 0.05). In particular, survival of the newly introduced stage IIIC by 8-TNM was significantly lower than that of the other stages. On the other hand, there was no significant survival difference between T categories in each staging system. Conclusions: Subdivision of stage III into IIIA, IIIB, and IIIC by 8-TNM for patients treated with CCRT better reflected prognosis than 7-TNM. However, subdivision of the T category according to tumor size in 8-TNM might be less significant. abstract_id: PUBMED:30280658 Overall Survival with Durvalumab after Chemoradiotherapy in Stage III NSCLC. Background: An earlier analysis in this phase 3 trial showed that durvalumab significantly prolonged progression-free survival, as compared with placebo, among patients with stage III, unresectable non-small-cell lung cancer (NSCLC) who did not have disease progression after concurrent chemoradiotherapy. Here we report the results for the second primary end point of overall survival. Methods: We randomly assigned patients, in a 2:1 ratio, to receive durvalumab intravenously, at a dose of 10 mg per kilogram of body weight, or matching placebo every 2 weeks for up to 12 months.
Randomization occurred 1 to 42 days after the patients had received chemoradiotherapy and was stratified according to age, sex, and smoking history. The primary end points were progression-free survival (as assessed by blinded independent central review) and overall survival. Secondary end points included the time to death or distant metastasis, the time to second progression, and safety. Results: Of the 713 patients who underwent randomization, 709 received the assigned intervention (473 patients received durvalumab and 236 received placebo). As of March 22, 2018, the median follow-up was 25.2 months. The 24-month overall survival rate was 66.3% (95% confidence interval [CI], 61.7 to 70.4) in the durvalumab group, as compared with 55.6% (95% CI, 48.9 to 61.8) in the placebo group (two-sided P=0.005). Durvalumab significantly prolonged overall survival, as compared with placebo (stratified hazard ratio for death, 0.68; 99.73% CI, 0.47 to 0.997; P=0.0025). Updated analyses regarding progression-free survival were similar to those previously reported, with a median duration of 17.2 months in the durvalumab group and 5.6 months in the placebo group (stratified hazard ratio for disease progression or death, 0.51; 95% CI, 0.41 to 0.63). The median time to death or distant metastasis was 28.3 months in the durvalumab group and 16.2 months in the placebo group (stratified hazard ratio, 0.53; 95% CI, 0.41 to 0.68). A total of 30.5% of the patients in the durvalumab group and 26.1% of those in the placebo group had grade 3 or 4 adverse events of any cause; 15.4% and 9.8% of the patients, respectively, discontinued the trial regimen because of adverse events. Conclusions: Durvalumab therapy resulted in significantly longer overall survival than placebo. No new safety signals were identified. (Funded by AstraZeneca; PACIFIC ClinicalTrials.gov number, NCT02125461.) abstract_id: PUBMED:28069039 Supraclavicular lymph node incisional biopsies have no influence on the prognosis of advanced non-small cell lung cancer patients: a retrospective study. Background: Supraclavicular lymph node (SCLN) biopsies play an important role in diagnosing and staging lung cancer. However, not all patients with SCLN metastasis can have a complete resection. It is still unknown whether SCLN incisional biopsies affect the prognosis of non-small cell lung cancer (NSCLC) patients. Methods: Patients who were histologically confirmed to have NSCLC with SCLN metastasis were enrolled in the study from January 2007 to December 2012 at Guangdong Lung Cancer Institute. The primary endpoint was OS, and the secondary endpoints were complications and local recurrence/progression. Results: Two hundred two consecutive patients who had histologically confirmed NSCLC with SCLN metastasis were identified, 163 with excisional and 39 with incisional biopsies. The median OS was not significantly different between the excisional (10.9 months, 95% CI 8.7-13.2) and incisional biopsy groups (10.1 months, 95% CI 6.3-13.9), P = 0.569. Multivariable analysis showed that an Eastern Cooperative Oncology Group (ECOG) performance status (PS) ≥2 (HR = 2.75, 95% CI 1.71-4.38, P < 0.001) indicated a worse prognosis. Having an epidermal growth factor receptor (EGFR) mutation (HR = 0.58, 95% CI 0.40-0.84, P = 0.004) and receiving systemic treatment (HR = 0.36, 95% CI 0.25-0.53, P < 0.001) were associated with a favorable OS. Neither the number (multiple vs. single) nor site (bilateral vs.
unilateral) of SCLNs was associated with an unfavorable OS, and SCLN size or fixed SCLNs did not affect OS. Conclusions: SCLN incisional biopsies did not negatively influence the prognosis of NSCLC patients. It was safe and feasible to partly remove a metastatic SCLN as a last resort in advanced NSCLC. abstract_id: PUBMED:24692108 Influence of conformal radiotherapy technique on survival after chemoradiotherapy for patients with stage III non-small cell lung cancer in the National Cancer Data Base. Background: Definitive chemoradiotherapy is a core treatment modality for patients with stage III non-small cell lung cancer (NSCLC). Although radiotherapy (RT) technologies have advanced dramatically, to the authors' knowledge relatively little is known regarding the importance of irradiation technique on outcome, particularly given the competing risk of distant metastasis. The National Cancer Data Base was used to determine predictors of overall survival (OS) in patients with AJCC stage III NSCLC who were treated with chemoradiotherapy, focusing on the importance of conformal RT (CRT). Methods: Patients with stage III NSCLC who were treated with chemoradiotherapy between 2003 and 2005 in the National Cancer Data Base were included. RT technique was defined as conventional, 3-dimensional-conformal, or intensity-modulated RT (IMRT), the latter 2 combined as CRT. Cox proportional hazards regression was performed for univariable and multivariable analyses of OS. Results: The median, 3-year, and 5-year survival outcomes for the 13,292 patients were 12.9 months, 19%, and 11%, respectively. The 3-year and 5-year survival probabilities of patients receiving CRT versus no CRT were 22% versus 19% and 14% versus 11%, respectively (P < .0001). On multivariable analysis, CRT was found to be significantly associated with improved OS (hazard ratio, 0.89). This effect was confirmed on sensitivity analyses, including restricting the cohort to minimum 6-month survivors, young patients with stage IIIA disease, and propensity score matching. Institutional academic status and patient volume were not found to be associated with OS. Conclusions: CRT was found to be independently associated with a survival advantage. These results reflect the importance of optimal locoregional therapy in patients with stage III NSCLC and provide motivation for further study of advanced RT technologies in patients with NSCLC. Answer: Yes, prolonged survival is possible for patients with supraclavicular node metastases in non-small cell lung cancer (NSCLC) treated with chemoradiotherapy. Studies have shown that the presence of supraclavicular node metastases does not significantly compromise treatment outcomes compared to other mediastinal metastases in the setting of definitive chemoradiotherapy. Specifically, one study found no significant difference in overall survival (OS) or progression-free survival (PFS) between patients with and without supraclavicular lymph node (SCN) metastasis when treated with definitive chemoradiotherapy for N3-IIIB stage NSCLC (PUBMED:28415687). Another study using the Radiation Therapy Oncology Group (RTOG) database also concluded that the outcome for patients with supraclavicular metastases appears to be similar to that of other Stage IIIB patients when treated with modern chemoradiotherapy (PUBMED:10386642).
Additionally, a study on the effect of radiation therapy techniques on outcome in N3-positive IIIB NSCLC treated with concurrent chemoradiotherapy found that intensity-modulated radiation therapy (IMRT) could lead to similar locoregional control and toxicity while encompassing a greater disease extent than 3D-conformal radiation therapy (3D-CRT) (PUBMED:25687865). Furthermore, a study on the application of the new 8th TNM staging system for NSCLC treated with curative concurrent chemoradiotherapy suggested that subdivision of stage III into IIIA, IIIB, and IIIC by the 8th TNM for patients treated with CCRT better reflected prognosis than the 7th TNM (PUBMED:28732516). Lastly, a study on overall survival with durvalumab after chemoradiotherapy in Stage III NSCLC showed that durvalumab therapy resulted in significantly longer overall survival than placebo (PUBMED:30280658).
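To make the hazard ratios cited in this answer more tangible: under a proportional-hazards assumption, a treatment arm's survival curve can be approximated from the control curve via S_treated(t) = S_control(t)^HR. The snippet below is a back-of-the-envelope sketch, not an analysis from any of the trials; it simply plugs in the published PACIFIC figures (placebo 24-month OS of 55.6%, stratified HR 0.68) and lands close to the reported 66.3% durvalumab OS.

```python
s_control = 0.556  # placebo 24-month overall survival in PACIFIC (PUBMED:30280658)
hr = 0.68          # stratified hazard ratio for death, durvalumab vs. placebo

# Under proportional hazards: S_treated(t) = S_control(t) ** HR
s_treated = s_control ** hr
print(f"Implied durvalumab 24-month OS: {s_treated:.1%}")  # ~67.1% (reported: 66.3%)
```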
Instruction: Could postmortem hemorrhage occur in the brain? Abstracts: abstract_id: PUBMED:23629388 Could postmortem hemorrhage occur in the brain?: a preliminary study on the establishment and investigation of postmortem hypostatic hemorrhage using rabbit models. Objective: The aim of this study was to explore whether postmortem hemorrhage can occur in brain tissue using rabbit models. Methods: Rabbits killed by air embolism were randomly divided into a horizontal-position group and an upside-down group. Autopsy was performed after 48 hours, and the brains were investigated with macroscopic assessment and histologic examination. Results: Macroscopically, congestion of vessels on the surface of the brain was identified in all the subjects in both groups. Microscopically, multifocal extravascular red blood cell aggregation was observed in the brain parenchyma and subarachnoid space in the upside-down group. In contrast, no leakage of extravascular red blood cells was observed in the brain parenchyma and the subarachnoid space in the horizontal-position group. Conclusions: Hypostatic and leakage bleeding can occur in the subarachnoid space and brain parenchyma of rabbits after nonviolent death, given a certain postmortem interval and body position. This type of hemorrhage is challenging to differentiate from traumatic hemorrhage in pathologic practice. To avoid misdiagnosis, clinical pathologists should keep in mind that the possibility of postmortem hypostatic hemorrhage needs to be ruled out when a diagnosis of subarachnoid hemorrhage or cerebral hemorrhage is established. abstract_id: PUBMED:36763092 Air bubble artifact: why postmortem brain MRI should always be combined with postmortem CT. Forensic pathology increasingly uses postmortem magnetic resonance imaging (PMMRI), particularly in pediatric cases. It should be noted that each (sudden and unexpected) death of an infant or child should have a forensic approach as well. Current postmortem imaging protocols do not focus adequately on forensic queries. First, it is important to demonstrate or rule out bleeding, especially in the brain. Thus, when incorporating PMMRI, a blood-sensitive sequence (T2* and/or susceptibility-weighted imaging (SWI)) should always be included. Secondly, as intracranial air might mimic small focal intracerebral hemorrhages, PMMRI should be preceded by postmortem CT (PMCT), since air is easily recognizable on CT. This is illustrated by a case of a deceased 3-week-old baby. Finally, note that postmortem scans will often be interpreted by clinical radiologists, sometimes with no specific training, which makes this case report relevant for a broader audience. abstract_id: PUBMED:20177369 Brain-stem laceration and blunt rupture of thoracic aorta: is the intrapleural bleeding postmortem in origin?: an autopsy study. Some fatally injured car occupants can have both blunt rupture of the thoracic aorta with a great amount of intrapleural blood and pontomedullary laceration of the brain-stem, with both injuries being fatal. The aim of this study was to answer whether all intrapleural bleeding in these cases was antemortem, or whether the bleeding could also be partially postmortem. We observed a group of 66 cases of blunt aortic rupture: 21 cases with brain-stem laceration and 45 cases without it.
The average amount of intrapleural bleeding in cases without brain-stem laceration (1993 ± 831 mL) was significantly higher than in those with this injury (1100 ± 708 mL) (t = 4.252, df = 64, P < 0.001). According to our results, in cases of thoracic aorta rupture with concomitant brain-stem laceration, an amount of intrapleural bleeding of less than 1500 mL should be considered mostly postmortem in origin, and in such cases, only the brain-stem injury should be considered the cause of death. abstract_id: PUBMED:18977623 Reversal sign on ante- and postmortem brain imaging in a newborn: report of one case. A 16-day-old female newborn was admitted to the emergency department after cardiopulmonary arrest. Total-body radiographs and non-enhanced CT of the brain showed fracture of the right clavicle, pericerebral hemorrhage and brain damage with reversal sign. The infant died on the day of her hospital admission. Because child abuse was suspected, a medicolegal autopsy was ordered by the legal authorities. Prior to autopsy, total-body MRI and CT were performed. Results of the ante- and postmortem investigations were compared with each other and then with the autopsy findings. Postmortem brain imaging showed persistence of the reversal sign. To the best of our knowledge, this is the first case describing hypoxic-ischemic damage of the brain parenchyma on antemortem CT and persisting on postmortem imaging in a child abuse case. abstract_id: PUBMED:26132433 Deep Into the Fibers! Postmortem Diffusion Tensor Imaging in Forensic Radiology. Purpose: In traumatic brain injury, diffusion-weighted and diffusion tensor imaging of the brain are essential techniques for determining the pathology sustained and the outcome. Postmortem cross-sectional imaging is an established adjunct to forensic autopsy in death investigation. The purpose of this prospective study was to evaluate postmortem diffusion tensor imaging in forensics for its feasibility, influencing factors and correlation to the cause of death compared with autopsy. Methods: Postmortem computed tomography, magnetic resonance imaging, and diffusion tensor imaging with fiber tracking were performed in 10 deceased subjects. Likert scale grading of colored fractional anisotropy maps was correlated with body temperature and intracranial pathology to assess the diagnostic feasibility of postmortem diffusion tensor imaging and fiber tracking. Results: Optimal fiber tracking (>15,000 fiber tracts) was achieved at a body temperature of 10°C. Likert scale grading showed no linear correlation (P > 0.7) with fiber tract counts. No statistically significant correlation between total fiber count and postmortem interval could be observed (P = 0.122). Postmortem diffusion tensor imaging and fiber tracking allowed for radiological diagnosis in cases with shearing injuries but were impaired in cases with pneumencephalon and intracerebral mass hemorrhage. Conclusions: Postmortem diffusion tensor imaging with fiber tracking provides an exceptional in situ insight "deep into the fibers" of the brain, with diagnostic benefit in traumatic brain injury and axonal injuries in the assessment of the underlying cause of death, considering influencing factors for optimal imaging technique. abstract_id: PUBMED:37443680 Bleeding-Source Exploration in Subdural Hematoma: Observational Study on the Usefulness of Postmortem Computed Tomography Angiography.
In a few cases, postmortem computed tomography angiography (PMCTA) is effective in the postmortem detection of cortical artery rupture causing subdural hematoma (SDH), which is difficult to detect at autopsy. Here, we explore the usefulness and limitations of PMCTA in detecting the sites of cortical arterial rupture for SDH. In 6 of 10 cases, extravascular leakage of contrast material at nine different places enabled PMCTA to identify cortical arterial rupture. PMCTA did not induce destructive arterial artifacts, which often occur during autopsy. We found that, although not in all cases, PMCTA could show the site of cortical arterial rupture causing subdural hematoma in some cases. This technique is beneficial for cases of SDH autopsy, as it can be performed nondestructively and before destructive artifacts from the autopsy occur. abstract_id: PUBMED:33346981 Green Discoloration of Human Postmortem Brains: Etiologies and Mechanisms of Discoloration. Abstract: A variety of gross discolorations of human postmortem brains is occasionally encountered and can have diagnostic implications. We describe 3 cases of green discoloration of the human brain observed on postmortem examination. Two patients who succumbed shortly after administration of methylene blue (MB) showed diffuse green discoloration that was detectable as early as 24 hours and was seen for at least 48 hours after MB administration. Green discoloration was largely in cortical and deep gray matter structures, with relative sparing of the white matter. In contrast, a patient with severe hyperbilirubinemia who died after intracerebral hemorrhage showed localized bright green bile-stained brain parenchyma in the areas surrounding the hemorrhage. We highlight the distinct patterns of discoloration in different causes of green brain discoloration, including MB, bile staining, and hydrogen sulfide poisoning. Recognition of these patterns by practicing pathologists can be used to differentiate between these etiologies and allow correct interpretation in both the medical and forensic autopsy settings. abstract_id: PUBMED:37415801 A forensic case of hydranencephaly in a preterm neonate fully documented by postmortem imaging techniques. The authors present a medico-legal autopsy case of hydranencephaly in a male preterm newborn, fully documented by postmortem unenhanced and enhanced imaging techniques (postmortem computed tomography and postmortem magnetic resonance imaging). Hydranencephaly is a congenital anomaly of the central nervous system, consisting of almost complete absence of the cerebral hemispheres and replacement of the cerebral parenchyma by cerebrospinal fluid, rarely encountered in forensic medical practice. A premature baby was born between the supposed 22nd and 24th weeks of pregnancy in the context of a denial of pregnancy without any follow-up. The newborn died a few hours after birth, and medico-legal investigations were requested to determine the cause of death and exclude the intervention of a third person in the lethal process. The external examination revealed neither traumatic nor malformative lesions. Postmortem imaging investigations were typical of hydranencephaly, and conventional medico-legal autopsy, neuropathological examination, and histological examination confirmed a massive necrotic-haemorrhagic hydranencephaly. This case represents an association of out-of-the-ordinary elements that makes it worthy of interest.
Key Points: Postmortem unenhanced and enhanced imaging techniques (computed tomography and magnetic resonance imaging) were performed as complementary examinations to conventional medico-legal investigations. Postmortem angiography of a preterm newborn is possible with catheterization of the umbilical blood vessels. Hydranencephaly is a congenital anomaly of the central nervous system, consisting of almost complete absence of the cerebral hemispheres and replacement of the brain by cerebrospinal fluid, for which several aetiologies have been postulated. abstract_id: PUBMED:32840712 The possibility of identifying brain hemorrhage in putrefied bodies with PMCT. This paper aims to demonstrate that post-mortem CT (PMCT) can locate intracranial hemorrhages, even in decomposed cases. This is of relevance in that post-mortem decomposition is particularly damaging to the brain tissue's consistency, resulting in great difficulties to reliably diagnose and locate intracranial hemorrhages. We searched our case database of the last 11 years to find cases with decomposition of the body, where PMCT and an autopsy had been performed. We identified eleven cases according to these criteria. Postmortem interval ranged from 2 days to 2 weeks, and the post-mortem radiological alteration index (RAI) was at or above 49. Eight out of eleven cases showed an intraparenchymal hemorrhage, whereas the hemorrhage was extra-axial in the remaining three cases. Autopsy validated the presence of intracranial hemorrhage in all eleven cases, but location could not be confirmed due to the liquid state of the brain. PMCT identified and localized intracranial hemorrhages in decomposed bodies, and in all of these cases, autopsy validated their presence. The actual cause of the hemorrhage (e.g. tumor, metastasis, vascular malformation, hypertensive hemorrhage) remained obscure. From this case series, it can be concluded that PMCT may add relevant information pertaining to the localization of intracranial hemorrhages in decomposed bodies. abstract_id: PUBMED:33511828 Postmortem diagnosis of diffuse axonal injury considering the histoarchitectonics of the corpus callosum. The results of studying the morphological signs used for the postmortem diagnosis of diffuse axonal injury (DAI) in the brain are presented. The histoarchitectonics of the corpus callosum were investigated in various types of non-violent and violent death. In the indusium griseum of the corpus callosum, neurons were identified; the features of the morphology and the interposition of the neurons' processes, glia and vessels in various parts of the corpus callosum were studied. Taking into account the revealed architectonics of the corpus callosum, changes in DAI were determined. It was found that the main diagnostically significant morphological sign of DAI is hemorrhages localized in the trunk and indusium griseum on sagittal corpus callosum sections, which may be important in solving expert questions about the morphogenesis of hemorrhages in the corpus callosum. Answer: Yes, postmortem hemorrhage can occur in the brain. A study using rabbit models demonstrated that hypostatic and leakage bleeding could occur in the subarachnoid space and brain parenchyma after death, given a certain period and position of the body placement.
This type of hemorrhage can be challenging to differentiate from traumatic hemorrhage in pathological practice, and it is important for clinical pathologists to consider the possibility of postmortem hypostatic hemorrhage when diagnosing subarachnoid hemorrhage or cerebral hemorrhage (PUBMED:23629388).
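The group comparison in the aorta-rupture study above (PUBMED:20177369) can be reproduced from its summary statistics alone: 1993 ± 831 mL in 45 cases without brain-stem laceration versus 1100 ± 708 mL in 21 cases with it, t = 4.252, df = 64. The sketch below is an illustrative re-computation assuming a pooled-variance two-sample t-test, which is consistent with the reported df of 45 + 21 - 2 = 64.

```python
from scipy import stats

# Intrapleural blood volumes (mL), summary statistics from PUBMED:20177369.
t, p = stats.ttest_ind_from_stats(
    mean1=1993, std1=831, nobs1=45,  # cases without brain-stem laceration
    mean2=1100, std2=708, nobs2=21,  # cases with brain-stem laceration
    equal_var=True,                  # pooled variance -> df = 45 + 21 - 2 = 64
)
print(f"t = {t:.3f}, p = {p:.6f}")  # t = 4.252, matching the reported statistic
```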
Instruction: Are post-traumatic stress disorder symptoms and temporomandibular pain associated? Abstracts: abstract_id: PUBMED:17547122 Prevalence and impact of post-traumatic stress disorder symptoms in patients with masticatory muscle or temporomandibular joint pain: differences and similarities. Aims: To evaluate temporomandibular disorder (TMD) patients for differences between masticatory muscle (MM) and temporomandibular joint (TMJ) pain patients in the prevalence of posttraumatic stress disorder (PTSD) symptoms and evaluate the level of psychological dysfunction and its relationship to PTSD symptoms in these patients. Methods: This study included 445 patients. Psychological questionnaires included the Symptom Check List-90-Revised (SCL-90-R), the Multidimensional Pain Inventory, the Pittsburgh Sleep Quality Index, and the PTSD Check List Civilian. The total sample of patients was divided into 2 major groups: the MM group (n = 242) and the TMJ group (n = 203). Each group was divided into 3 subgroups based on the presence of a stressor and severity of PTSD symptoms. Results: Thirty-six patients (14.9%) in the MM group and 20 patients (9.9%) in the TMJ group presented with PTSD symptomatology (P = .112). Significant differences were found between the MM and the TMJ group in several psychometric domains, but when the presence of PTSD symptomatology was considered, significant differences were mostly maintained in the subgroups without PTSD. MM and TMJ pain patients in the "positive PTSD" subgroups scored higher on all SCL-90-R scales (P < .001) than patients in the other 2 subgroups and reached levels of distress indicative of psychological dysfunction. TMJ pain patients (58.3%; P = .008) in the positive-PTSD subgroups were more often classified as dysfunctional. Both positive-PTSD subgroups of the MM and TMJ groups presented with more sleep disturbance (P < .005) than patients in the other 2 subgroups. Conclusion: A somewhat elevated prevalence rate for PTSD symptomatology was found in the MM group compared to the TMJ group. Significant levels of psychological dysfunction appeared to be linked to TMD patients with PTSD symptoms. abstract_id: PUBMED:30153313 Association Between Symptoms of Posttraumatic Stress Disorder and Signs of Temporomandibular Disorders in the General Population. Aims: To estimate the association between signs of temporomandibular disorders (TMD) and symptoms of posttraumatic stress disorder (PTSD) in a representative sample from the general population of northeastern Germany. Methods: Signs of TMD were assessed with a clinical functional analysis that included palpation of the temporomandibular joints (TMJs) and masticatory muscles. PTSD was assessed with the PTSD module of the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders, ed 4. The change-in-estimate method for binary logistic regression models was used to determine the final model and control for confounders. Results: After the exclusion of subjects without prior traumatic events, the sample for joint pain consisted of 1,673 participants with a median age of 58.9 years (interquartile range 24.8), and the sample for muscle pain consisted of 1,689 participants with a median age of 59.1 years (interquartile range 24.8). Of these samples, 84 participants had pain on palpation of the TMJ, and 42 participants had pain on palpation of the masticatory muscles.
Subjects having clinical PTSD (n = 62) had a 2.56-fold increase in joint pain (odds ratio [OR] = 2.56; 95% confidence interval [CI]: 1.14 to 5.71, P = .022) and a 3.86-fold increase (OR = 3.86; 95% CI: 1.51 to 9.85, P = .005) in muscle pain compared to subjects having no clinical PTSD. Conclusion: These results should encourage general practitioners and dentists to acknowledge the role of PTSD and traumatic events in the diagnosis and therapy of TMD, especially in a period of international migration and military foreign assignments. abstract_id: PUBMED:18351033 Are post-traumatic stress disorder symptoms and temporomandibular pain associated? Findings from a community-based twin registry. Aims: To determine whether symptoms of post-traumatic stress disorder (PTSD) are related to the pain of temporomandibular disorders (TMD) in a community-based sample of female twin pairs, and if so, to ascertain whether the association is due to the presence of chronic widespread pain (CWP) and familial/genetic factors. Methods: Data were obtained from 630 monozygotic and 239 dizygotic female twin pairs participating in the University of Washington Twin Registry. PTSD symptoms were assessed with the Impact of Events Scale (IES), with scores partitioned into terciles. TMD pain was assessed with a question about persistent or recurrent pain in the face, jaw, temple; in front of the ear; or in the ear during the past 3 months. CWP was defined as pain located in 3 body regions during the past 3 months. Random-effects regression models, adjusted for demographic features, depression, CWP, and familial/genetic factors, were used to examine the relationship between the IES and TMD pain. Results: IES scores were significantly associated with TMD pain (P < .01). Twins in the highest IES tercile were almost 3 times more likely than those in the lowest tercile to report TMD pain, even after controlling for demographic factors, depression, and CWP. After adjustment for familial and genetic factors, the association of IES scores with TMD pain remained significant in dizygotic twins (Ptrend = .03) but was not significant in monozygotic twins (Ptrend = .30). Conclusion: PTSD symptoms are strongly linked to TMD pain. This association could be partially explained by genetic vulnerability to both conditions but is not related to the presence of CWP. Future research is needed to understand the temporal association of PTSD and TMD pain and the genetic and physiological underpinnings of this relationship. abstract_id: PUBMED:37300526 Post-traumatic stress, prevalence of temporomandibular disorders in war veterans: Systematic review with meta-analysis. Introduction: The physical and psychological effects of war are not always easy to detect, but they can be far-reaching and long-lasting. One of the physical effects that may result from war stress is temporomandibular disorder (TMD). Objective: To evaluate the prevalence of TMD signs and symptoms among war veterans diagnosed with PTSD. Methods: We systematically searched Web of Science, PubMed and Lilacs for articles published from inception until 30 December 2022. All documents were assessed for eligibility based on the following Population, Exposure, Comparator and Outcomes (PECO) model: (P) Participants consisted of human subjects. (E) The Exposure consisted of exposure to war. (C) The Comparison was between war veterans (subjects exposed to war) and subjects not exposed to war.
(O) The Outcome consisted of the presence of temporomandibular disorder signs or symptoms (we considered pain on muscle palpation in war veterans). Results: Forty studies were identified at the end of the search. Only four studies were included in the present systematic review. The included subjects numbered 596. Among them, 274 were exposed to war, whereas the remaining 322 were not exposed to war stress. Among those exposed to war, 154 (56.2%) presented signs/symptoms of TMD, versus only 65 (20.18%) of those not exposed to war. The overall effect revealed that subjects exposed to war and diagnosed with PTSD had a higher prevalence of TMD signs (pain at muscle palpation) than controls (RR 2.21; 95% CI: 1.13-4.34), showing an association between war-related PTSD and TMD. Conclusions: War can cause lasting physical and psychological damage that can lead to chronic diseases. Our results clearly demonstrated that war exposure, directly or indirectly, increases the risk of developing TMJ dysfunction and TMD signs/symptoms. abstract_id: PUBMED:17153558 The prevalence of temporomandibular disorders in war veterans with post-traumatic stress disorder. The purposes of this study were to assess the prevalence of temporomandibular disorders in Croatian war veterans suffering from post-traumatic stress disorder (PTSD) and to analyze the impact of the disease on mandibular function. One hundred eighty-two male subjects participated in the study. The examined group consisted of 94 subjects who had taken part in the war in Croatia and for whom PTSD had previously been diagnosed. Patients were compared with an age- and gender-matched group of subjects who had not taken part in the war and for whom PTSD was excluded by means of a psychiatric examination. The study used a clinical examination and standard questionnaire. Statistically significant differences were found in almost all measured parameters. With regard to restricted movements, overbite, and overjet, the differences obtained did not have clinical significance. The most significant differences were found in the parameters of pain. Headache was experienced by 63.83% of the subjects with PTSD, facial pain by 12.77%, and pain in the region of the jaw by 10.64%. Headache was the most intense pain, with an average intensity of 4.92 on a scale of 0 to 10. Pain on loading, temporomandibular joint clicking, and intrameatal tenderness were more prevalent in the PTSD group than in the healthy control group. The study supports the concept that PTSD patients are at increased risk for the development of temporomandibular disorder symptoms. abstract_id: PUBMED:25077153 Temporomandibular joint health status in war veterans with post-traumatic stress disorder. Background And Aim: The objective of this study was to determine the prevalence of signs and symptoms of temporomandibular joint dysfunction (TMJD) in Iran/Iraq war veterans suffering from post-traumatic stress disorder. Materials And Methods: A total of 120 subjects in the age range of 27 to 55 years were included: a case group (30 war veterans with PTSD) and three control groups (30 patients with PTSD who had not participated in the war, 30 healthy war veterans, and 30 healthy subjects who had not participated in the war). All subjects underwent a clinical TMJ examination that involved the clinical assessment of TMJ signs and symptoms. Results: The groups of veterans had a high prevalence of TMJD signs and symptoms vs.
the other groups; a history of trauma to the joint was significantly more frequent in subjects who had participated in the war compared with subjects who had not participated in the war (P = 0.0006). Furthermore, pain on palpation of the masseter, temporal, pterygoideus, digastric, and sternocleidomastoid muscles in the groups of veterans was significantly greater than in the other groups (P < 0.0001). Clicking noise during chewing was significantly different between groups (P = 0.01). There was also a significant difference in the frequencies of maximum opening of the mouth between groups (P = 0.001). Conclusion: The results of this study showed that war veterans with PTSD have significantly poorer TMJ functional status than the control subjects. abstract_id: PUBMED:36056716 Prevalence of painful temporomandibular disorders, awake bruxism and sleep bruxism among patients with severe post-traumatic stress disorder. Background: Post-traumatic stress disorder (PTSD) is associated with painful temporomandibular disorder (TMD) and may be part of the aetiology of awake bruxism (AB) and sleep bruxism (SB). Investigating the associations between PTSD symptoms on the one hand, and painful TMD, AB and SB on the other, can help tailor treatment to the needs of this patient group. Objectives: The aim of this study was to investigate the associations between PTSD symptoms and painful TMD, AB and SB among patients with PTSD, focusing on prevalence, symptom severity and the influence of trauma history on the presence of painful TMD, AB and SB. Methods: Individuals (N = 673) attending a specialised PTSD clinic were assessed (pre-treatment) for painful TMD (TMD pain screener), AB and SB (Oral Behaviours Checklist), PTSD symptoms (Clinician-Administered PTSD Scale) and type of traumatic events (Life Events Checklist). Results: Painful TMD, AB and SB were more prevalent among patients with PTSD (28.4%, 48.3% and 40.1%, respectively) than in the general population (8.0%, 31.0% and 15.3%, respectively; all p's < .001). PTSD symptom severity was found to be significantly, but poorly, associated with the severity of painful TMD (rs = .126, p = .001), AB (rs = .155, p < .001) and SB (rs = .084, p = .029). Patients who had been exposed to sexual assault were more likely to report AB than patients who had not. Similarly, exposure to physical violence was associated with increased odds for SB. Conclusion: Patients with severe PTSD are more likely to experience painful TMD, AB or SB, whereas the type of traumatic event can be of influence. These findings can contribute to selecting appropriate treatment modalities when treating patients with painful TMD, AB and SB. abstract_id: PUBMED:36579250 The Association Between Post-Traumatic Stress Disorder and Temporomandibular Disorders: A Systematic Review. The purpose of this systematic study was to discover a connection between temporomandibular joint disorders and post-traumatic stress disorder. A systematic review of observational studies on post-traumatic stress disorder and the incidence of temporomandibular joint disorders (TMD) was conducted. Electronic searches of PubMed, the Saudi Digital Library, Science Direct, the Virtual Health Library (VHL), Scopus, Web of Science, Sage, EBSCO Information Services, and Ovid were performed. There was a consensus among the reviewing examiners.
Only studies with the following Medical Subject Headings (MeSH) terms were included: "Posttraumatic stress disorder" combined with "temporomandibular joint disorder," "myofascial pain," "orofacial pain," "internal derangement," "disc displacement with reduction," or "disc displacement without reduction." Only full-text studies in the English language published between 2010 and June 2020 were considered. Of a total of 381 articles meeting the initial screening criteria, only eight were included in the qualitative analysis. Overall, pain is exacerbated in patients with PTSD; that is, their TMD is more severe in all respects: pain, chronicity, decreased response to conventional therapies, and the need for more potent treatment options, as compared with patients with TMD alone. The evidence, albeit weak, obtained from the studies included in this review suggests a relationship between PTSD and TMDs. abstract_id: PUBMED:15635556 Prevalence of traumatic stressors in patients with temporomandibular disorders. Purpose: The aim of the present study was to identify the prevalence of significant traumatic stressor(s) reported by chronic temporomandibular disorder patients, and to describe the nature of these stressors. A second aim of this study was to evaluate and compare the behavioral and psychological domains of patients who reported 1 or more significant traumatic stressors to those who did not. Patients And Methods: Twelve hundred twenty-one patients with chronic temporomandibular disorder pain completed a battery of psychometric measures including the Symptom Check List-90-Revised, Multidimensional Pain Inventory, Pittsburgh Sleep Quality Index, and a checklist of major traumatic stressors. Results: The prevalence of major traumatic stressors among our chronic pain patients was high (49.8%). Traumatic stressors were related to increased pain severity, affective distress, and disability among patients with chronic pain. Patients admitting to major traumatic stressors also scored higher on most psychometric measures and more often had pain of myogenous origin. It is argued that possible significant contributors to chronic orofacial pain may be anxiety, depression, and dysregulation of the hypothalamic-pituitary-adrenal axis. Conclusion: The prevalence of major traumatic stressors in chronic temporomandibular disorder patients is high. The greater distress on all psychological domains in patients endorsing major traumatic events may be a reflection of inadequate coping skills in these individuals. Therapy should be oriented toward the resolution of accompanying stressors such as depression, anxiety, and dysregulation of the hypothalamic-pituitary-adrenal axis. abstract_id: PUBMED:27908491 Clinical outcomes of Botox injections for chronic temporomandibular disorders: do we understand how Botox works on muscle, pain, and the brain? The main objective of this retrospective review was to analyze the clinical outcomes following the use of botulinum toxin (onabotulinumtoxinA, Botox) injections to relieve the symptoms of chronic temporomandibular disorders (TMD). Seventy-one patients with a diagnosis of TMD (according to the RDC/TMD international consortium), with or without bruxism, and refractory to conventional treatment (e.g. oral appliances, physiotherapy) received Botox injections into the temporalis and masseter muscles.
Subjective responses to Botox were categorized as 'beneficial' or 'not beneficial', patient-reported outcomes based on subjective reduction in pain and/or improvement in function. Fifty-five of the 71 subjects (77%) reported beneficial effects with Botox. Subjects with a concomitant bruxism diagnosis reported significant improvement over subjects without bruxism (87% vs. 67%; P=0.042). Subjects with stress-related psychiatric comorbidities and bruxism had a significantly higher benefit than those with stress-related psychiatric comorbidities alone (P=0.027). Patients reported less improvement if the time between the initial Botox injection and follow-up was less than an average of 5 weeks, compared to an average follow-up of 5-10 weeks (P=0.009). Subgroup TMD diagnosis and the post-injection time interval are important predictors of patient-reported beneficial outcomes. Answer: Yes, post-traumatic stress disorder (PTSD) symptoms and temporomandibular pain are associated. Several studies have found a significant relationship between the two conditions. For instance, a study found that PTSD symptoms are strongly linked to temporomandibular disorder (TMD) pain, and this association could be partially explained by genetic vulnerability to both conditions but is not related to the presence of chronic widespread pain (CWP) (PUBMED:18351033). Another study reported that subjects with clinical PTSD had a 2.56-fold increase in joint pain and a 3.86-fold increase in muscle pain compared to subjects without clinical PTSD (PUBMED:30153313). Additionally, a systematic review with meta-analysis found that war veterans diagnosed with PTSD had a higher prevalence of TMD signs and symptoms compared to those not exposed to war, indicating an association between war-related PTSD and TMD (PUBMED:37300526). Furthermore, research has shown that PTSD patients are at increased risk for the development of TMD symptoms (PUBMED:17153558), and war veterans with PTSD have significantly poorer TMJ functional status than control subjects (PUBMED:25077153). In patients with severe PTSD, painful TMD, awake bruxism (AB), and sleep bruxism (SB) were found to be more prevalent than in the general population, and PTSD symptom severity was associated with the severity of painful TMD, AB, and SB (PUBMED:36056716). Another systematic review suggested a relationship between PTSD and TMDs, with pain being exacerbated in patients with PTSD (PUBMED:36579250). Lastly, the prevalence of major traumatic stressors among chronic TMD patients was high, and these stressors were related to increased pain severity, affective distress, and disability (PUBMED:15635556).
Instruction: Are vitamin D levels affected by acute bacterial infections in children? Abstracts: abstract_id: PUBMED:24923333 Are vitamin D levels affected by acute bacterial infections in children? Aims: Vitamin D deficiency is associated with infectious diseases; however, it is not known whether vitamin D levels are affected by acute infection. Our aim was to establish whether 25-hydroxyvitamin D (25OHD) levels taken during an acute bacterial infection are representative of baseline levels. Methods: Thirty children between 6 months and 15 years of age with proven bacterial infections presenting to a tertiary paediatric referral centre had 25OHD levels taken during their acute infection and again 1 month later, provided that they had recovered from their infection, had no subsequent infections and had not been taking vitamin supplements. 25OHD levels were measured by liquid chromatography mass spectrometry. Results: Mean 25OHD at enrolment was 67.5 nmol/L (standard deviation (SD) 22.0), and mean 25OHD at the 1-month follow-up was 72.7 nmol/L (SD 25.8) (paired t-test P = 0.25). C-reactive protein levels were recorded in 29/30 patients at enrolment (mean 85.1 mg/L, SD 83.5) and 25/30 patients at follow-up (mean 4.0 mg/L, SD 3.3) (paired t-test P = 0.002). The ethnicity of the participants was New Zealand European or European Other, 26; Samoan, 2; Maori, 1; and Chinese, 1. Conclusions: In children, 25OHD levels are not affected by acute bacterial infections; 25OHD levels taken during acute bacterial infection are representative of baseline levels. 25OHD levels collected during acute bacterial infection provide reliable information for case-control studies. abstract_id: PUBMED:30264762 Serum zinc levels amongst under-five children with acute diarrhoea and bacterial pathogens. Background And Aim: Acute diarrhoea contributes significantly to morbidity and mortality in under-five children globally, with conflicting reports regarding the therapeutic benefit of zinc across the different causative pathogens. This study aimed to determine the prevalence of bacterial pathogens in children with acute diarrhoea and to compare their serum zinc levels. Methods: One hundred children aged 2-59 months with acute diarrhoea and 100 apparently healthy matched controls were recruited in Ilorin, North Central Nigeria. Stool specimens were investigated for bacterial pathogens using conventional culture techniques, while serum zinc levels were determined by a colorimetric method. Results: Bacteria were isolated in 73 (73.0%) patients and 6 (6.0%) controls. Escherichia coli was isolated in 39 (39.0%) of the patients, while Klebsiella spp., Proteus spp. and Pseudomonas aeruginosa were isolated in 28 (28.0%), 4 (4.0%) and 2 (2.0%) patients, respectively. E. coli and Klebsiella spp. were detected in 4 (4.0%) and 2 (2.0%) controls, respectively. The mean serum zinc level of 65.3 ± 7.4 μg/dl in the patients was significantly lower than 69.0 ± 6.5 μg/dl in the controls (P < 0.001). Zinc deficiency (serum zinc levels < 65 μg/dl) was detected in 47 (47.0%) patients, which was significantly higher than in 32 (32.0%) controls (P = 0.030). The mean serum zinc levels significantly differed amongst the bacteria isolated in the patients (P < 0.001). Conclusions: Bacterial pathogens contribute significantly to the aetiology of acute diarrhoea in under-five Nigerian children. The prevalence of zinc deficiency was high in the study population. The serum zinc levels also differed across the bacteria isolated.
abstract_id: PUBMED:23782208 Vitamin D deficiency and acute lung injury. Acute Lung Injury (ALI) and its more severe form, Acute Respiratory Distress Syndrome (ARDS), remain a significant cause of morbidity and mortality in the critically ill patient. The condition is characterised by a severe inflammatory process resulting in diffuse alveolar damage, an influx of neutrophils and macrophages, and a protein-rich exudate in the alveolar spaces caused by endothelial and epithelial injury. Improvements in outcomes are in part due to restrictive fluid management and protective lung ventilation; however, successful therapeutic strategies remain elusive, with promising therapies failing to translate positively in human studies. The evidence for the role of vitamin D in lung disease is growing: deficiency has been associated with impaired pulmonary function, increased incidence of viral and bacterial infections, and inflammatory diseases including asthma and COPD. Studies have also reported a high prevalence of vitamin D deficiency in the critically ill and an association with adverse outcomes. Although exact mechanisms are yet to be discerned, vitamin D appears to impact on a variety of inflammatory and structural cells within the lung, including macrophages, lymphocytes and epithelial cells. To date there are few directly supportive clinical studies in ALI; this review explores the compelling evidence suggesting a role for vitamin D in ALI and the mechanisms by which it could contribute to pathogenesis. abstract_id: PUBMED:17532827 Serum zinc levels in children with acute gastroenteritis. Background: The aim of the present study was to determine serum zinc levels on admission and 7-10 days after clinical recovery from acute gastroenteritis of <8 days' duration. Methods: This prospective study included 82 infants aged 2-24 months who had no associated bacterial infection, chronic disease, prior antibiotic use, moderate or severe malnutrition, or dysentery. Forty-one healthy children formed the control group. Results: The mean serum zinc level on admission (Zn1) was 11.85 ± 2.83 μmol/L and at 7-10 days after recovery (Zn2) was 10.92 ± 2.17 μmol/L; the mean serum zinc level of the control group was 11.81 ± 3.45 μmol/L. Zn2 was significantly lower than Zn1, but there was no statistical difference between the control group mean and either Zn1 or Zn2. When dehydrated patients were excluded from the patient group, Zn1 and Zn2 did not differ. Although asymptomatic, 39% of the control group had low zinc. Serum zinc levels were not affected by sex, age, clinical characteristics of the patients or severity of gastroenteritis. Conclusion: Serum zinc levels of patients admitted with acute gastroenteritis without any other disease and without moderate or severe malnutrition were not affected by disease state. Gastroenteritis did not further decrease serum zinc levels in patients with asymptomatic or subclinical zinc deficiency. abstract_id: PUBMED:31422183 Vitamin D supplementation could reduce the risk of acute cellular rejection and infection in vitamin D-deficient liver allograft recipients. Background: Vitamin D regulates the immune system and affects the outcome of allografts. We investigated the mechanisms underlying the preventative potential of vitamin D in acute cellular rejection (ACR) and infection, and determined its effects on the induction of both T cells and complement.
Methods: A total of 141 patients who received a liver allograft at our center between 2012 and 2016 were enrolled in the study and divided into a vitamin D supplementation group (case group, n = 71) and a non-vitamin D supplementation group (control group, n = 70). Serum was collected in the hours prior to transplantation and within the first month after transplantation. We evaluated the relationship between serum levels of 25-hydroxyvitamin D and ACR, infection, T cells, complement, and graft function. Follow-up was conducted until patient death or June 30, 2018. Results: Vitamin D deficiency was an important independent risk factor for ACR. The incidence of ACR and of bacterial and fungal infection was reduced in patients with vitamin D supplementation. The frequencies of Treg, Tmemory and naïve T cells and CD8+CD28+ T cells (CTL) and the level of complement component 3 were related to ACR in the first month after transplantation. This study showed increased numbers of Treg cells and Tmemory cells and decreased numbers of naïve T cells and CTL in the case group. Vitamin D status was significantly associated with mortality. Conclusions: Vitamin D supplementation is associated with a lower risk of ACR and infection, suggesting that it may promote immune tolerance towards liver allografts. abstract_id: PUBMED:26862046 Serum Vitamin D Levels Not Associated with Atopic Dermatitis Severity. Background/Objectives: The objective of the current study was to determine the relationship between serum vitamin D levels and the severity of atopic dermatitis (AD) in a Brazilian population. Methods: This was a cross-sectional study of patients younger than 14 years of age seen from April to November 2013. All patients fulfilled the Hanifin and Rajka Diagnostic Criteria for AD diagnosis. Disease severity was determined using the SCORing Atopic Dermatitis index and classified as mild (<25), moderate (25-50), or severe (>50). Serum vitamin D levels were classified as sufficient (≥30 ng/mL), insufficient (21-29 ng/mL), or deficient (≤20 ng/mL). Results: A total of 105 patients met the inclusion criteria. Mild AD was diagnosed in 58 (55.2%) children, moderate in 24 (22.8%), and severe in 23 (21.9%). Vitamin D deficiency was observed in 45 individuals (42.9%). Of these, 24 (53.3%) had mild AD, 13 (28.9%) moderate, and 8 (17.7%) severe. Insufficient vitamin D levels were found in 45 (42.9%) individuals; 24 (53.3%) had mild AD, 9 (20.0%) moderate, and 12 (26.7%) severe. Of the 15 individuals (14.2%) with sufficient vitamin D levels, 10 (60.7%) had mild AD, 2 (13.3%) moderate, and 3 (20.0%) severe. The mean vitamin D level was 22.1 ± 7.3 ng/mL in individuals with mild AD, 20.8 ± 6.5 ng/mL in those with moderate AD, and 21.9 ± 9.3 ng/mL in those with severe AD. Variables such as sex, age, skin phototype, season of the year, and bacterial infection were not significantly associated with vitamin D levels. Conclusion: Levels of 25-hydroxyvitamin D were deficient or insufficient in 85% of the children, but serum vitamin D concentrations were not significantly related to AD severity. abstract_id: PUBMED:30774019 Incidence of infections after therapy completion in children with acute lymphoblastic leukemia or acute myeloid leukemia: a systematic review of the literature. Infections are a common complication of treatment for pediatric acute lymphoblastic leukemia (ALL) or acute myeloid leukemia (AML). Less is known about infections occurring after treatment.
We performed a systematic review of the literature to assess the incidence of infections after therapy completion in children and young adults with ALL or AML. Twenty-eight studies, with 4138 patients, were included. Four studies reported infections in patients who did not undergo hematopoietic stem cell transplant (HSCT). Respiratory tract and urinary tract infections affected 9.9-72.5% and 2.9-19.8% of patients, respectively. Twelve studies reported infections in patients treated with HSCT. Late bacterial, viral and fungal infections affected 3.9-38.5%, 16.1-66.7%, and 0.2-41.7% of patients, respectively. Viral hepatitis affected 0.8-75.4% of patients from 12 studies. Our review suggests that infections are a frequent complication after treatment for leukemia in children, especially after HSCT, and identifies several knowledge gaps in the current literature. abstract_id: PUBMED:31063672 Enhancement of vitamin B6 levels in rice expressing Arabidopsis vitamin B6 biosynthesis de novo genes. Vitamin B6 (pyridoxine) is vital for key metabolic reactions and reported to have antioxidant properties in planta. Therefore, enhancement of vitamin B6 content has been hypothesized to be a route to improve resistance to biotic and abiotic stresses. Most of the current studies on vitamin B6 in plants are on eudicot species, with monocots remaining largely unexplored. In this study, we investigated vitamin B6 biosynthesis in rice, with a view to examining the feasibility and impact of enhancing vitamin B6 levels. Constitutive expression in rice of two Arabidopsis thaliana genes from the vitamin B6 biosynthesis de novo pathway, AtPDX1.1 and AtPDX2, resulted in a considerable increase in vitamin B6 in leaves (up to 28.3-fold) and roots (up to 12-fold), with minimal impact on general growth. Rice lines accumulating high levels of vitamin B6 did not display enhanced tolerance to abiotic stress (salt) or biotic stress (resistance to Xanthomonas oryzae infection). While a significant increase in vitamin B6 content could also be achieved in rice seeds (up to 3.1-fold), the increase was largely due to its accumulation in seed coat and embryo tissues, with little enhancement observed in the endosperm. However, seed yield was affected in some vitamin B6-enhanced lines. Notably, expression of the transgenes did not affect the expression of the endogenous rice PDX genes. Intriguingly, despite transgene expression in leaves and seeds, the corresponding proteins were only detectable in leaves and could not be observed in seeds, possibly pointing to a mode of regulation in this organ. abstract_id: PUBMED:24051165 Vitamin D and infectious diseases. Vitamin D plays a role in the synthesis of antibacterial peptides and in autophagy. Several studies have shown that low levels of vitamin D are associated, on the one hand, with susceptibility to and severity of acute infections and, on the other, with an unfavorable outcome of some chronic infections (such as HIV infection). Vitamin D supplementation improves the response to treatment of some viral (such as chronic hepatitis C) or bacterial infections (such as pulmonary tuberculosis). Vitamin D supplementation demonstrated no benefit in reducing the incidence of pulmonary infections. The target level of vitamin D to be reached after supplementation is not yet known. abstract_id: PUBMED:19190532 The vitamin D connection to pediatric infections and immune function.
Over the past 20 y, a resurgence in vitamin D deficiency and nutritional rickets has been reported throughout the world, including the United States. Inadequate serum vitamin D concentrations have also been associated with complications from other health problems, including tuberculosis, cancer (prostate, breast, and colon), multiple sclerosis, and diabetes. These findings support the concept of vitamin D possessing important pleiotropic actions outside of calcium homeostasis and bone metabolism. In children, an association of nutritional rickets with respiratory compromise has long been recognized. Recent epidemiologic studies clearly demonstrate the link between vitamin D deficiency and the increased incidence of respiratory infections. Further research has also elucidated the contribution of vitamin D to the host defense response to infection. However, the mechanism(s) by which vitamin D levels contribute to pediatric infections and immune function has yet to be determined. This knowledge is particularly relevant and timely, because infants and children seem more susceptible to viral than to bacterial infections in the face of vitamin D deficiency. The connection among vitamin D, infections, and immune function in the pediatric population indicates a possible role for vitamin D supplementation in potential interventions and adjuvant therapies. Answer: According to the study in abstract PUBMED:24923333, vitamin D levels are not affected by acute bacterial infections in children. The study aimed to establish whether 25-hydroxyvitamin D (25OHD) levels taken during an acute bacterial infection are representative of baseline levels. The results showed that the mean 25OHD levels at enrolment and at the 1-month follow-up did not significantly differ (paired t-test P = 0.25), indicating that 25OHD levels taken during acute bacterial infection are representative of baseline levels. Therefore, it can be concluded that in children, 25OHD levels collected during an acute bacterial infection provide reliable information for case-control studies and are not influenced by the infection itself.
Instruction: Antibiotics and respiratory infections: do antibiotic prescriptions improve outcomes? Abstracts: abstract_id: PUBMED:23318197 Adherence between antibiotic prescriptions and guidelines in an internal medicine ward: an evaluation of professional practices. Introduction: This is an evaluation of professional practices (EPP) concerning antibiotic therapy in an internal medicine ward. Material And Methods: A 6-month prospective review of antibiotic prescriptions, comparing them with local and national guidelines (drug, daily dose, administration, and duration), was performed. Results: Data on antibiotic therapy for 227 infectious episodes were collected. According to local guidelines, 56% of prescriptions fully complied (lower respiratory tract infections: 38%, urinary tract infections: 88%, and skin infections: 73%), 33% partially complied, and 11% were inappropriate. Using national guidelines for lower respiratory tract infections as the reference, 81% of prescriptions fully complied, 16% partially complied, and 3% were inappropriate. Conclusion: This evaluation of prescriptions enabled long-lasting measures to be put in place to improve clinical practice. This approach anticipates the EPP procedures that will be needed for hospital accreditation and highlights the importance of considering several guidelines when interpreting the results. abstract_id: PUBMED:8824043 Antibiotics and respiratory infections: do antibiotic prescriptions improve outcomes? Background And Objectives: Antibiotics are frequently prescribed for respiratory infections though most of these infections are viral. To determine whether this practice contributes to patient health and patient satisfaction, we studied the effect of antibiotic prescriptions on outcomes at 7 to 10 days. We also studied the effect of antibiotic prescriptions upon the accuracy of patients' beliefs about viruses. Methods: One hundred thirteen patients with a respiratory infection completed questionnaires before and after their visit with their primary care doctor. A phone interview was completed 7 to 10 days later. Questions elicited their expectations for antibiotics, their beliefs about the efficacy of antibiotics, and satisfaction with the doctor. The phone interview asked whether they felt better, whether they had returned to the doctor about the same illness, satisfaction, and whether they would expect antibiotics for the same disease in the future. The doctors provided information about their diagnosis and treatment. Results: No correlation was found between prescription of antibiotics and patient satisfaction, feeling better, return physician visits, or phone calls. Receiving antibiotics increased the likelihood that patients would expect antibiotics the next time they had an upper respiratory infection and made them more likely to hold the inaccurate belief that antibiotics kill viruses. Conclusions: The study found no evidence that antibiotics improve patient outcome in upper respiratory infections by making patients feel better at 7 to 10 days. Nor did it find evidence that antibiotics help physicians by reducing return visits or increasing patient satisfaction. Doctors are invited to reconsider their policies for prescribing antibiotics for upper respiratory infection. abstract_id: PUBMED:24978045 POPI (Pediatrics: Omission of Prescriptions and Inappropriate prescriptions): development of a tool to identify inappropriate prescribing.
Introduction: Rational prescribing for children is an issue for all countries and has been inadequately studied. Inappropriate prescriptions, including drug omissions, are one of the main causes of medication errors in this population. Our aim is to develop a screening tool to identify omissions and inappropriate prescriptions in pediatrics based on French and international guidelines. Methods: A selection of diseases was included in the tool using data from social security and hospital statistics. A literature review was done to obtain criteria which could be included in the tool, called POPI. A two-round Delphi consensus technique was used to establish the content validity of POPI; panelists were asked to rate their level of agreement with each proposition on a 9-point Likert scale and add suggestions if necessary. Results: 108 explicit criteria (80 inappropriate prescriptions and 28 omissions) were obtained and submitted to a 16-member expert panel (8 pharmacists and 8 pediatricians, half hospital-based and half working in the community). Criteria were categorized according to the main physiological systems (gastroenterology, respiratory infections, pain, neurology, dermatology and miscellaneous). Each criterion was accompanied by a concise explanation as to why the practice is potentially inappropriate in pediatrics (including references). Two rounds of the Delphi process were completed via an online questionnaire. 104 of the 108 criteria submitted to experts were selected after the 2 Delphi rounds (79 inappropriate prescriptions and 25 omissions). Discussion/Conclusion: POPI is the first screening tool developed to detect inappropriate prescriptions and omissions in pediatrics based on explicit criteria. An inter-user reliability study is necessary before the tool is used, and a prospective study to assess the effectiveness of POPI is also needed. abstract_id: PUBMED:33112451 Effect of antimicrobial stewardship on antimicrobial prescriptions for selected diseases of dogs in Switzerland. Background: Antimicrobial stewardship programs (ASPs) are important tools to foster prudent antimicrobial use. Objective: To evaluate antimicrobial prescriptions by Swiss veterinarians before and after introduction of the online ASP AntibioticScout.ch in December 2016. Animals: Dogs presented to 2 university hospitals and 14 private practices in 2016 or 2018 for acute diarrhea (AD; n = 779), urinary tract infection (UTI; n = 505), respiratory tract infection (RTI; n = 580), or wound infection (WI; n = 341). Methods: Retrospective study. Prescriptions of antimicrobials in 2016 and 2018 were compared and their appropriateness assessed by a justification score. Results: The proportion of dogs prescribed antimicrobials decreased significantly between 2016 and 2018 (74% vs 59%; P < .001). The proportion of prescriptions in complete agreement with guidelines increased significantly (48% vs 60%; P < .001) and those in complete disagreement significantly decreased (38% vs 24%; P < .001) during this time. Antimicrobial prescriptions for dogs with AD were significantly correlated with the presence of hemorrhagic diarrhea in both years, but a significantly lower proportion of dogs with hemorrhagic diarrhea were unnecessarily prescribed antimicrobials in 2018 (65% vs 36%; P < .001). In private practices in 2018, a bacterial etiology of UTI was confirmed in 16% of dogs. Prescriptions for fluoroquinolones significantly decreased (29% vs 14%; P = .002).
Prescriptions for antimicrobials for RTI decreased significantly in private practices (54% vs 31%; P < .001). Conclusion: Antimicrobials were used more prudently for the examined indications in 2018 compared to 2016. The study highlights the continued need for ASPs in veterinary medicine. abstract_id: PUBMED:10437289 French National Institute for observation of prescriptions and consumption of medicines. Prescription and consumption of antibiotics in ambulatory care. The National Research Institute for Prescriptions and Consumption of Medicines, which was founded under the authority of the Minister of Health, is charged with the following missions: improved evaluation of the therapeutic needs of the population; more precise knowledge of therapeutic management; the identification of possible deviations in relation to systems of reference; recommendations in favor of correct use of medicines; and the optimization of patient management. Its first report concerned the antibiotic therapy of respiratory infections. In France, the frequency of antibiotic consumption increased at an average annual rate of around 3.7% between the periods 1980-1981 and 1991-1992. This increase essentially concerned cephalosporins and quinolones. Between 1991 and 1996, antibiotic sales increased by an average of 2.1% per year in units. This increase in consumption, which was not justified by any epidemiological change, is partly explained by the high frequency of antibiotic prescriptions for respiratory or ENT conditions presumed to be of viral etiology: 40% of rhinopharyngitis cases, 80% of acute bronchitis cases, and more than 90% of tonsillitis cases, regardless of age. Moreover, antibiotic treatments were not prescribed optimally: durations were too long and dosages insufficient. Such phenomena are disturbing with regard to their consequences for the evolution of bacterial resistance. A comparison between French practices and those of Germany and the United Kingdom suggests that recourse to treatment is more frequent in France for the infectious diseases mentioned above, with more intensive utilization of antibiotics, in particular broad-spectrum penicillins. Recommendations have been made in favor of a rationalisation of practices. abstract_id: PUBMED:32487580 Variations in antibiotic prescribing among village doctors in a rural region of Shandong province, China: a cross-sectional analysis of prescriptions. Objectives: To assess variation in antibiotic prescribing practices among village doctors in a rural region of Shandong province, China. Design, Setting And Participants: Almost all outpatient encounters at village clinics result in a prescription being issued. Prescriptions were collected over a 2.5-year period from 8 primary care village clinics staffed by 24 doctors located around a town in rural Shandong province. A target of 60 prescriptions per clinic per month was sampled from an average total of around 300. Prescriptions were analysed at both aggregate and individual-prescriber levels, with a focus on diagnoses of likely viral acute upper respiratory tract infections (AURIs), defined as International Classification of Diseases, 10th Revision codes J00 and J06.9. Main Outcome Measures: Proportions of prescriptions for AURIs containing (1) at least one antibiotic, (2) multiple antibiotics, (3) at least one parenteral antibiotic; classes and agents of antibiotics prescribed. Results: In total, 14 471 prescriptions from 23 prescribers were ultimately included, of which 5833 (40.3%) contained at least 1 antibiotic.
Nearly two-thirds (62.5%; 3237/5177) of likely viral AURI prescriptions contained an antibiotic, accounting for 55.5% (3237/5833) of all antibiotic-containing prescriptions. For AURIs, there was wide variation at the individual level in antibiotic prescribing rates (33.1%-88.0%), as well as in multiple-antibiotic prescribing rates (1.3%-60.2%) and parenteral antibiotic prescribing rates (3.2%-62.1%). Each village doctor prescribed between 11 and 21 unique agents for AURIs, including many broad-spectrum antibiotics. Doctors in the highest quartile for antibiotic prescribing rates for AURI also had higher antibiotic prescribing rates than doctors in the lowest quartile for potentially bacterial upper respiratory tract infections (pharyngitis, tonsillitis, laryngopharyngitis; 89.1% vs 72.4%, p=0.002). Conclusions: All village doctors overused antibiotics for respiratory tract infections. Variations in individual prescriber practices are significant even in a small homogeneous setting and should be accounted for when developing targets and interventions to improve antibiotic use. abstract_id: PUBMED:25678982 Antibiotic repeat prescriptions: are patients not re-filling them properly? Objective: This study aimed to explore patients' utilization of repeat prescriptions for antibiotics indicated in upper respiratory tract infections (URTI). An emphasis was placed on whether the current system of repeat prescriptions contributes to patients self-diagnosing infections and, if so, to identify the common reasons for this. Methods: This is a prospective study of self-reported use of repeat antibiotic prescriptions by pharmacy consumers presenting with repeat prescriptions for antibiotics commonly indicated in URTIs. Data were collected via self-completed surveys in Perth metropolitan pharmacies. Results: A total of 123 respondents from 19 Perth metropolitan pharmacies participated in this study. Approximately a third of respondents (33.9%) presented to the pharmacy to fill their antibiotic repeat prescription one month or more from the time the original prescription was written (i.e. the time when the original diagnosis was made by a doctor). Over two-thirds of respondents (68.3%) indicated that they had not consulted their doctor before presenting to the pharmacy to have their repeat antibiotic prescription dispensed. The most common reasons for this were that their 'doctor had told them to take the second course' (38%), followed by potential self-diagnosis (29%), i.e. 'they had the same symptoms as the last time they took the antibiotics'. Approximately one third (33.1%) of respondents indicated they 'were not told what the repeat prescription was needed for' when they were originally prescribed the antibiotic. Respondents who presented to fill their repeat prescription more than 2 weeks after the original prescription was written were more likely not to have consulted their doctor (p = 0.006, 95% CI [1.16, 2.01]) and not to know why their repeat was needed (p = 0.010, 95% CI [1.07, 2.18]). Conclusions: Findings of this study suggested that the current 12-month validity of antibiotic repeat prescriptions is potentially contributing to patients' self-diagnosis of URTIs and therefore potential misuse of antibiotics. This may be contributing to the rise of antimicrobial resistance. The study also outlines some common reasons for patients potentially self-diagnosing URTIs when using repeat prescriptions. Larger Australian studies are needed to confirm these findings.
abstract_id: PUBMED:33373435 Trends in US Outpatient Antibiotic Prescriptions During the Coronavirus Disease 2019 Pandemic. Background: The objective of our study was to describe trends in US outpatient antibiotic prescriptions from January through May 2020 and compare them with trends in previous years (2017-2019). Methods: We used data from the IQVIA Total Patient Tracker to estimate the monthly number of patients dispensed antibiotic prescriptions from retail pharmacies from January 2017 through May 2020. We averaged estimates from 2017 through 2019 and defined expected seasonal change as the average percent change from January to May 2017-2019. We calculated percentage point and volume changes in the number of patients dispensed antibiotics from January to May 2020 exceeding expected seasonal changes. We also calculated the average percent change in the number of patients dispensed antibiotics per month in 2017-2019 versus 2020. Data were analyzed overall and by agent, class, patient age, state, and prescriber specialty. Results: From January to May 2020, the number of patients dispensed antibiotic prescriptions decreased from 20.3 to 9.9 million, exceeding seasonally expected decreases by 33 percentage points and 6.6 million patients. The largest changes in 2017-2019 versus 2020 were observed in April (-39%) and May (-42%). The number of patients dispensed azithromycin increased from February to March 2020 and then decreased. Overall, beyond-expected decreases were greatest among children (≤19 years) and agents used for respiratory infections, dentistry, and surgical prophylaxis. Conclusions: From January 2020 to May 2020, the number of outpatients with antibiotic prescriptions decreased substantially more than would be expected because of seasonal trends alone, possibly related to the coronavirus disease 2019 pandemic and associated mitigation measures. abstract_id: PUBMED:37691049 Interventions to improve outcomes in community-acquired pneumonia. Introduction: Community-acquired pneumonia (CAP) is a common infection associated with high morbidity and mortality and a highly deleterious impact on patients' quality of life and functionality. We comprehensively review the factors related to the host, the causative microorganism, the therapeutic approach and the organization of health systems (e.g. setting for care and systems for allocation) that might have an impact on CAP-associated outcomes. Our main aims are to discuss the most controversial points and to provide some recommendations that may guide further research and the management of patients with CAP, in order to improve their outcomes, beyond mortality. Areas Covered: In this review, we aim to provide a critical account of potential measures to improve outcomes of CAP and the supporting evidence from observational studies and clinical trials. Expert Opinion: CAP is associated with high mortality and a highly deleterious impact on patients' quality of life. To improve CAP-associated outcomes, it is important to understand the factors related to the patient, etiology, therapeutics, and the organization of health systems. abstract_id: PUBMED:20944049 Postdated versus usual delayed antibiotic prescriptions in primary care: Reduction in antibiotic use for acute respiratory infections? Objective: To determine whether postdating delayed antibiotic prescriptions results in a further decrease (over usual delayed prescriptions) in antibiotic use. Design: Randomized controlled trial. Setting: A small rural town in Newfoundland and Labrador.
Participants: A total of 149 consecutive adult primary care patients who presented with acute upper respiratory tract infections. Intervention: Delayed prescriptions for patients who might require antibiotics were randomly dated either the day of the office visit (ie, the usual group) or 2 days later (ie, the postdated group). Main Outcome Measures: Whether or not the prescriptions were filled and the time it took for the patients to fill the prescriptions were noted by the 4 local pharmacies and relayed to the investigators. Results: In total, 149 delayed antibiotic prescriptions were written, 1 per patient. Of the 74 usual delayed prescriptions given out, 32 (43.2%) were filled; of the 75 postdated delayed prescriptions given out, 33 (44.0%) were filled. Sixteen patients from each group filled their delayed prescriptions earlier than the recommended 48 hours. Statistical analyses (χ² tests comparing the rates of antibiotic use between the 2 groups and t tests comparing the mean time to fill the prescription between the 2 groups) indicated that these results were not significant (P > .05). Conclusion: Although delayed prescriptions reduce the rate of antibiotic use, postdating the delayed prescription does not seem to lead to further reduction in use. Answer: The study described in abstract PUBMED:8824043 investigated whether prescribing antibiotics for respiratory infections contributes to patient health and satisfaction. The results showed no correlation between the prescription of antibiotics and patient satisfaction, feeling better, return physician visits, or phone calls. Additionally, receiving antibiotics increased the likelihood that patients would expect antibiotics for future upper respiratory infections and made them more likely to hold the inaccurate belief that antibiotics kill viruses. The study concluded that there was no evidence that antibiotics improve patient outcomes in upper respiratory infections by making patients feel better at 7 to 10 days, nor did they reduce return visits or increase patient satisfaction. Doctors were encouraged to reconsider their policies for prescribing antibiotics for upper respiratory infections.
Instruction: Cardiac injury markers and a failed algorithm: can accurate assessment of acute myocardial infarction be cost effective? Abstracts: abstract_id: PUBMED:12188242 Cardiac injury markers and a failed algorithm: can accurate assessment of acute myocardial infarction be cost effective? Context: Most studies assessing the use of cardiac injury markers, such as cardiac troponin I (cTnI), total creatine kinase (CK Total), and the cardiac isoenzyme of CK (CK-MB), agree that cTnI is the most specific test for diagnosing acute myocardial infarction (AMI). However, throughout the literature, there are ambiguities and contradictions on assay-ordering criteria. Inconsistent ways of viewing biochemical assessment of acute chest pain lead to cardiac injury marker assay-ordering patterns that can be nonspecific, ambiguous, and costly. Objective: This study set out to design a cost-effective strategy and to outline criteria for ordering cardiac injury marker assays. This is accomplished by comparing Madigan Army Medical Center (MAMC) testing patterns to guidelines described in recently published prospective hospital studies investigating the markers. Design: This was a retrospective study analyzing the patterns of 34,412 cardiac marker assays performed on 4,861 patients during 1999 and 2000 at MAMC. A total of 5,850 assays were run on 1,223 patients during the first 6 months of 2001. Results: The MAMC chemistry section spent more than $100,000 during 1999 for the measurement of cardiac injury markers. During 2000, an algorithm was implemented to place controls on ordering; however, the same dollar amount was spent. CK Total, CK-MB, and cTnI testing represent 3.5% of the tests performed in the chemistry section, but they consumed about 20% of the supply budget. This disproportionate expenditure is attributable to numerous, dissimilar, and voluminous ordering patterns. Conclusions: Proper use of cardiac marker assays can lead to rapid and accurate diagnosis of AMI and subsequently save lives. This study demonstrates that cTnI is the only marker needed for accurate and more cost-effective assessment of AMI. abstract_id: PUBMED:27683523 Biochemical Markers of Myocardial Damage. Heart diseases, especially coronary artery diseases (CAD), are the leading causes of morbidity and mortality in developed countries. Effective therapy is available to ensure patient survival and to prevent long-term sequelae after an acute ischemic event caused by CAD, but appropriate therapy requires rapid and accurate diagnosis. Research into the pathology of CAD has demonstrated that measuring concentrations of chemicals released from injured cardiac muscle can aid the diagnosis of diseases caused by myocardial ischemia. Since the mid-1950s, successively better biochemical markers have been described in research publications and applied for the clinical diagnosis of acute ischemic myocardial injury. Aspartate aminotransferase of the 1950s was replaced by other cytosolic enzymes such as lactate dehydrogenase, creatine kinase and their isoenzymes that exhibited better cardiac specificity. With the availability of immunoassays, other muscle proteins that had no enzymatic activity were also added to the diagnostic arsenal, but their limited tissue specificity and sensitivity led to suboptimal diagnostic performance. After the discovery that cardiac troponins I and T have the desired specificity, they have replaced the cytosolic enzymes in the role of diagnosing myocardial ischemia and infarction.
The use of the troponins provided new knowledge that led to revision and redefinition of ischemic myocardial injury, as well as the introduction of biochemicals for estimating the probability of future ischemic myocardial events. These markers, known as cardiac risk markers, evolved from diagnostic markers such as CK-MB or the troponins, but markers of inflammation also belong to this group of diagnostic chemicals. This review article presents a brief summary of the most significant developments in the field of biochemical markers of cardiac injury and summarizes the most recent significant recommendations regarding the use of cardiac markers in clinical practice. abstract_id: PUBMED:23105645 Biochemical markers of myocardial injury. The serum markers of myocardial injury are used to help establish the diagnosis of myocardial infarction. Older markers such as aspartate aminotransferase, creatine kinase and lactate dehydrogenase lost their utility due to lack of specificity and limited sensitivity. Among the currently available markers, cardiac troponins are the most widely used due to their improved sensitivity, specificity, efficiency and low turnaround time. Studies have shown that cardiac troponins should replace CK-MB as the diagnostic 'gold standard' for the diagnosis of myocardial injury. The combination of myoglobin with cardiac troponins has further improved diagnostic accuracy in acute coronary syndromes, thereby reducing hospital stays and costs to patients. Among the other new markers for early detection of myocardial damage, heart fatty acid-binding protein, glycogen phosphorylase BB and the myoglobin/carbonic anhydrase III ratio seem to be the most promising. However, the search for the ideal marker of myocardial injury continues. abstract_id: PUBMED:34308517 The Cost Implications of Dabigatran in Patients with Myocardial Injury After Non-Cardiac Surgery. Background: The Management of Myocardial Injury after Non-Cardiac Surgery (MANAGE) trial demonstrated that dabigatran 110 mg twice daily was more effective than placebo in preventing the primary composite outcome of vascular mortality, non-fatal myocardial infarction, non-hemorrhagic stroke, peripheral arterial thrombosis, amputation and symptomatic venous thromboembolism in patients with myocardial injury after non-cardiac surgery (MINS). The cost implications of dabigatran for this population are unknown but are important given the significant clinical implications. Methods: Hospitalized events, procedures, and study and non-study medications were documented. We applied Canadian unit costs to healthcare resources consumed for all patients in the trial, and calculated the average cost per patient in Canadian dollars for the duration of the study (median follow-up of 16 months). A sensitivity analysis was performed using only Canadian patients, and subgroup analyses were also conducted. Results: The total study cost for the dabigatran group was $9985 per patient, compared with $10,082 for placebo, a difference of -$97 (95% confidence interval [CI] -$2128 to $3672). Savings arising from fewer clinical events and procedures in the dabigatran 110 mg twice-daily group were enough to offset the cost of the study drug. In Canadian patients, the difference was $250 (95% CI -$2848 to $4840). Both differences were considered cost neutral. Dabigatran 110 mg twice daily was cost saving or cost neutral in many subgroups that were considered.
Conclusion: Dabigatran 110 mg twice daily was cost neutral for patients in the MANAGE trial. Our cost findings support the use of dabigatran 110 mg twice daily in patients with MINS. Trial Registration: ClinicalTrials.gov identifier number NCT01661101. abstract_id: PUBMED:35937953 Cardiovascular markers and COVID-19. COVID-19 is an emerging viral disease with incompletely elucidated pathogenesis, a heterogeneous clinical profile, and significant interindividual variability. The major cardiovascular complications of COVID-19 include acute cardiac injury, acute myocardial infarction (AMI), myocarditis, arrhythmia, heart failure, and venous thromboembolism (VTE)/pulmonary embolism (PE). Elevated BNP/NT-proBNP, troponin and D-dimer levels have been found in a significant proportion of patients since the earliest data analyses, suggesting that myocardial damage is a likely pathogenic mechanism contributing to severe disease and mortality. The levels of these markers are now associated with a risk of adverse outcome, namely mortality. The aim of our study is to highlight the importance of these biomarkers for the prediction of cardiovascular complications and their potential role in the evolution of COVID-19. abstract_id: PUBMED:14508160 Myocardial perfusion imaging versus biochemical markers in acute coronary syndromes. The assessment and appropriate clinical management of patients with acute chest pain and non-diagnostic electrocardiograms remain a continuing clinical problem. Accordingly, there is considerable interest in evaluating new strategies to improve early diagnostic accuracy in patients with possible acute myocardial ischaemia. Cardiac troponins (T and I) and acute rest myocardial perfusion imaging have similar sensitivities for detecting acute myocardial infarction. Whereas cardiac markers require 6-12 h to become positive, acute rest myocardial perfusion imaging immediately reflects the status of regional myocardial blood flow at the time of radiopharmaceutical injection. The measurement of cardiac troponins is particularly useful in the diagnosis and estimation of the degree of myocardial injury in those patients with a high likelihood of coronary artery disease and myocardial necrosis, and for prognostication of adverse cardiac events in those patients with unstable angina. In contrast, the most appropriate use of acute rest myocardial perfusion imaging is in the setting of patients with acute ischaemic symptoms, a non-diagnostic electrocardiogram and a low likelihood of myocardial necrosis, in which early imaging will assist in effective triage decisions. abstract_id: PUBMED:17400202 New biochemical markers: from bench to bedside. Background: Evaluation of patients presenting to hospital with chest pain or other signs or symptoms suggesting acute coronary syndrome (ACS) is problematic, time-consuming and sometimes expensive, even if new biochemical markers, such as troponins, have improved the ability to detect cardiac injury. However, patients with normal troponin values are not necessarily risk-free for major cardiac events. Methods: Recent investigations indicate that the overall patient risk may be assessed earlier than before, thanks to new knowledge acquired concerning the pathobiology of atherosclerosis and the molecular events involved in the progression of disease, thus allowing the development of new biochemical markers.
Selected markers are released during the different phases of the development of cardiovascular disease and may be useful for diagnosing patients with cardiovascular disease. In particular, the identification of emerging markers that provide relevant information on the inflammatory process, and the development of biomarkers whose circulating concentrations suggest the status of plaque instability and rupture, seem to be of particular value in prognosis and risk stratification. The overall expectations for a cardiovascular biochemical marker are not only its biological plausibility but also the availability at a reasonable cost of rapid, high-quality assays, and their correct interpretation by clinicians using optimal cut-offs. Conclusion: The crossing from bench to bedside for each new marker discovered must be associated with concurrent advances in the characterization of analytical features and the development of routine assays, in the assessment of analytical performance and in interpretative reporting of test results, as well as in the training of physicians to use the array of available biomarkers appropriately and to interpret them correctly. This approach calls for the coordinated support of clinicians, technology experts, statisticians and the industry so that new biochemical developments can be of optimal value. abstract_id: PUBMED:16278120 Exercise testing in chest pain units: rationale, implementation, and results. Chest pain units are now established centers for assessment of low-risk patients presenting to the emergency department with symptoms suggestive of acute coronary syndrome. Accelerated diagnostic protocols, of which treadmill testing is a key component, have been developed within these units for efficient evaluation of these patients. Studies of the last decade have established the utility of early exercise testing, which has been safe, accurate, and cost-effective in this setting. Specific diagnostic protocols vary, but most require 6 to 12 hours of observation by serial electrocardiography and cardiac injury markers to exclude infarction and high-risk unstable angina before proceeding to exercise testing. However, in the chest pain unit at UC Davis Medical Center, the approach includes "immediate" treadmill testing without a traditional process to rule out myocardial infarction. Extensive experience has validated this approach in a large, heterogeneous population. The optimal strategy for evaluating low-risk patients presenting to the emergency department with chest pain will continue to evolve based on current research and the development of new methods. abstract_id: PUBMED:10150427 Echocardiography in the emergency room: is it feasible, beneficial, and cost-effective? Echocardiography in the emergency room presents exciting practice possibilities that can facilitate prompt and reliable diagnostic evaluations in patients with suspected cardiovascular emergencies. Echocardiography has the diagnostic potential to evaluate the entire spectrum of cardiovascular abnormalities short of delineating coronary anatomy and evaluating the conduction system.
By reliably assessing global and regional function, visualizing the cardiovascular structures from multiple tomographic planes, and quantitating hemodynamic abnormalities, echocardiography should be able to assist emergency room physicians' evaluation and triage of patients with chest pain syndrome, unexplained dyspnea, hypotension, shock, chest trauma, and cardiac arrest, thereby potentially minimizing unnecessary hospital admissions and facilitating in-hospital evaluation of admitted patients with echocardiographic information. However, optimal echocardiography practice in the emergency room requires well-trained sonographers and echocardiographers who can respond to clinical needs at any time. Whether an emergency room physician can perform and interpret echocardiographic examinations satisfactorily will depend on his/her level of training and continuing education in this area. Currently, there is no established guideline for performing echocardiography in the emergency room. Further clinical investigations are necessary to define the optimal and most economical utilization of this versatile imaging and hemodynamic diagnostic modality in the emergency room. abstract_id: PUBMED:25646038 β-Adrenoreceptor Agonist Isoproterenol Alters Oxidative Status, Inflammatory Signaling, Injury Markers and Apoptotic Cell Death in Myocardium of Rats. Sustained high levels of circulating catecholamines are reported to induce cardiotoxicity. Isoproterenol (ISP), a synthetic catecholamine, has been widely employed to induce myocardial injury, though the role of inflammation and apoptosis is not well established. This study was designed to investigate the underlying mechanisms of oxidative damage, inflammatory signaling and cell death in ISP-induced myocardial infarction in rats. Wistar albino rats were divided into two groups: group I (sham control) and group II (ischemic control). ISP (85 mg/kg, s.c.) was administered at an interval of 24 h to group II for two consecutive days. On the third day, 48 h after the first injection of ISP, blood was collected from the retro-orbital plexus to estimate the biochemical parameters. Glutathione (GSH) and superoxide dismutase (SOD) were measured for antioxidant status. Similarly, malondialdehyde (MDA) was measured as an index of lipid peroxidation. Cardiac markers (SGOT, CK-MB, TropI and LDH) and pro-inflammatory cytokines (IL-6, CRP and TNF-α) were also estimated in ISP-induced rats. At the end of the experiments, animals were sacrificed for histopathological studies. GSH and SOD showed a significant decrease after ISP challenge as compared to the sham (control) group (p < 0.01), while MDA levels increased significantly (p < 0.01). ISP also significantly increased the levels of cardiac markers and markers of inflammation (p < 0.01), which was further verified by histopathological studies of the heart tissues. The study confirmed that ISP causes detrimental changes in the myocardium by altering cardiac and inflammatory markers, which leads to severe necrosis. The deleterious effects produced by ISP substantiate its suitability as a novel animal model for the evaluation of cardioprotective agents/drugs. Answer: The study described in abstract PUBMED:12188242 investigated the cost-effectiveness of using cardiac injury markers for the diagnosis of acute myocardial infarction (AMI).
The study was conducted at the Madigan Army Medical Center (MAMC) and analyzed the patterns of 34,412 cardiac marker assays performed on 4,861 patients during 1999 and 2000, and a total of 5,850 assays from 1,223 patients during the first 6 months of 2001. Despite implementing an algorithm to control ordering in 2000, the chemistry section spent over $100,000 on the measurement of cardiac injury markers in both 1999 and 2000. The study concluded that cardiac troponin I (cTnI) is the only marker needed for accurate and more cost-effective assessment of AMI, as it is the most specific test for diagnosing AMI (PUBMED:12188242). This conclusion is supported by other studies that have demonstrated the superiority of cardiac troponins over other markers such as aspartate aminotransferase, creatine kinase, lactate dehydrogenase, and their isoenzymes due to their improved sensitivity and specificity for diagnosing myocardial ischemia and infarction (PUBMED:27683523; PUBMED:23105645). The use of cardiac troponins has led to a redefinition of ischemic myocardial injury and the introduction of biochemicals for estimating the probability of future ischemic myocardial events (PUBMED:27683523). Furthermore, the study in abstract PUBMED:34308517 found that the use of dabigatran, a medication for patients with myocardial injury after non-cardiac surgery, was cost-neutral, suggesting that cost-effective strategies can be implemented in the management of cardiac injury. In summary, accurate assessment of AMI using cardiac injury markers can be cost-effective when the most specific and sensitive markers, such as cardiac troponins, are utilized. This approach can lead to rapid and accurate diagnosis, which is essential for effective patient management and can also be cost-neutral or cost-saving in certain contexts (PUBMED:12188242; PUBMED:34308517).
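To make the cost figures above concrete, here is a minimal back-of-the-envelope sketch in Python. The assay counts and the annual spend come from the answer above (PUBMED:12188242); the uniform per-assay cost and the three-draw troponin-only protocol are hypothetical assumptions introduced purely for illustration, not details from the study.

# Figures quoted in the answer above (PUBMED:12188242).
assays_1999_2000 = 34412      # cardiac marker assays performed over 1999-2000
patients_1999_2000 = 4861     # patients tested over the same two years
annual_spend_usd = 100_000    # reported spend per year ("over $100,000")

assays_per_patient = assays_1999_2000 / patients_1999_2000
cost_per_assay = (2 * annual_spend_usd) / assays_1999_2000  # crude average

# Hypothetical scenario: a cTnI-only protocol with 3 serial draws per patient.
troponin_only_assays = 3 * patients_1999_2000
projected_two_year_spend = troponin_only_assays * cost_per_assay

print(f"assays per patient: {assays_per_patient:.1f}")
print(f"approx. cost per assay: ${cost_per_assay:.2f}")
print(f"projected 2-year spend under a cTnI-only protocol: ${projected_two_year_spend:,.0f}")

Under these assumed numbers, roughly seven assays were run per patient, so even a generous three-draw troponin-only protocol would have cut marker volume, and therefore spend, by more than half, which is consistent with the study's conclusion.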
Instruction: Does "mainstreaming" guarantee access to care for medicaid recipients with asthma? Abstracts: abstract_id: PUBMED:11520386 Does "mainstreaming" guarantee access to care for medicaid recipients with asthma? Objective: Recent reforms in the federal Medicaid program have attempted to integrate beneficiaries into the mainstream by providing them with managed care options. However, the effects of mainstreaming have not been systematically evaluated. Design: Cross-sectional survey. Setting/participants: A sample of 478 adult, nonelderly asthmatics followed by a large Northern California medical group. Measurements And Main Results: We examined differences in self-reported access by insurance status. Compared to patients with other forms of insurance, patients covered by the state's Medicaid program (Medi-Cal) were more likely to report access problems for asthma-related care, including difficulties in reaching a health care provider by telephone, obtaining a clinic appointment, and obtaining asthma medication. Adjusting for relevant clinical and sociodemographic variables, Medi-Cal patients were more likely to report at least one access problem compared to non-Medi-Cal patients (adjusted odds ratio [AOR], 3.34; 95% confidence interval [CI], 1.43 to 7.80). Patients reporting at least one access problem were also more likely to have made at least one asthma-related emergency department visit within the past year (AOR, 4.84; 95% CI, 2.41 to 9.72). Reported barriers to care did not translate into reduced patient satisfaction. Conclusions: Within this population of Medicaid patients, the provision of health insurance and care within the mainstream of an integrated health system was no guarantee of equal access as perceived by the patients themselves. abstract_id: PUBMED:15078746 A survey of Medicaid recipients with asthma: perceptions of self-management, access, and care. Study Objectives: To understand how Medicaid recipients with asthma view their experience with care. Design: Survey sent to Medicaid managed care enrollees. Setting: A survey designed to assess general health status, access to care, medication-taking behaviors, and overall satisfaction was sent to 25,171 patients with moderate-to-severe asthma. Results: A total of 92% of patients rated their asthma care as good or excellent, 64% of adults reported their health as fair or poor, while only 27% of children reported their health as being fair or poor. Respondents were well-educated regarding their asthma, with 87% reporting knowing what to do for severe asthma attacks, 78% knowing the early warning signs of an asthma attack, and 77% recognizing aggravating factors. Eighty-nine percent of respondents rated the quality of the information given to them by their provider as very good or good. While 75% of patients reported using inhaled steroids, only 38% of those reported using them on a daily basis. Forty percent of patients reported using inhaled steroids only when they have symptoms. Forty-six percent of adults either smoke cigarettes or are exposed to smoking in the home, while 35% of children are exposed to smoke in the home. Conclusion: Asthmatic patients rated the quality of the information that their physicians provide very highly and reported that that they understand how to treat exacerbations. However, they do not take prescribed inhaled steroids on a daily basis. In addition, many asthmatic patients reside in homes where cigarette smoking is present. 
abstract_id: PUBMED:11759197 A comparison of ambulatory care-sensitive hospital discharge rates for Medicaid HMO enrollees and nonenrollees. With an increasing volume of Medicaid recipient enrollees in managed care, many states are developing tools for monitoring service quality and access of Medicaid recipients. This article explores the use of ambulatory care-sensitive (ACS) hospital discharge rates as a simple, practical indicator tool for monitoring the access of Medicaid health maintenance organization (HMO) enrollees through an empirical application in Massachusetts in 1995. Although unadjusted hospital discharge rates were lower, Medicaid HMO enrollees had higher age-gender-race adjusted total and ACS hospital discharge rates than Medicaid recipients enrolled in a primary care case management program under fee-for-service reimbursement. Higher HMO discharge rates for the specific ACS conditions of asthma and dehydration were suggestive of potential HMO access problems. abstract_id: PUBMED:19489363 Racial/ethnic differences in quality of care for North Carolina Medicaid recipients. Background: National health care quality measures suggest that racial and ethnic minority populations receive inferior quality of care compared to whites across many health services. As the largest insurer of low-income and minority populations in the United States, Medicaid has an important opportunity to identify and address health care disparities. Methods: Using 2006 Healthcare Effectiveness Data and Information Set (HEDIS) measures developed by the National Committee for Quality Assurance (NCQA), we examined quality of care for cancer screening, diabetes, and asthma among all eligible non-dual North Carolina Medicaid recipients by race and ethnicity. Results: In comparison to non-Latino whites, non-Latino African Americans had higher rates of screening for breast cancer (40.7% vs. 36.7%), cervical cancer (60.5% vs. 54.6%), and colorectal cancer (25.5% vs. 20.6%) and lower rates of LDL testing among people with diabetes (61.8% vs. 65.7%) and appropriate asthma medication use (88.7% vs. 97.0%). A1C testing and retinal eye exam rates among people with diabetes were similar. Smaller racial/ethnic minority groups had favorable quality indicators across most measures. Limitations: Comparability of findings to national population-based quality measures and other health plan HEDIS measures is limited by lack of case-mix adjustment. Conclusions: For the health services examined, we did not find evidence of large racial and ethnic disparities in quality of care within the North Carolina Medicaid program. There is substantial room for improvement, however, in cancer screening and preventive care for Medicaid recipients as a whole. abstract_id: PUBMED:16679438 Quality measurement in medicaid managed care and fee-for-service: the New York State experience. New York State has transitioned 1.7 million Medicaid recipients from a fee-for-service delivery system to a managed care model. To evaluate whether managed care has had a positive effect on access and quality, the New York State Department of Health compared rates of performance across standardized measures of quality (ie, childhood immunization, well-child visits, prenatal care in the first trimester, cervical cancer screening, use of appropriate medications for people with asthma, and comprehensive diabetes care) in both systems. For almost all measures, Medicaid managed care rates were statistically higher than Medicaid fee-for-service. 
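The "age-gender-race adjusted" discharge rates in PUBMED:11759197 rest on standardization, i.e., reweighting stratum-specific rates to a common standard population before comparing plans. A minimal Python sketch of direct standardization, with hypothetical stratum counts and standard-population weights, might look like this:

# Hypothetical events and person-years per stratum for one plan.
strata = [
    (30, 10_000),
    (80, 20_000),
    (150, 15_000),
]
standard_weights = [0.4, 0.4, 0.2]  # standard population shares; sum to 1

stratum_rates = [events / py for events, py in strata]
adjusted_rate = sum(w * r for w, r in zip(standard_weights, stratum_rates))
print(f"adjusted rate per 1,000 person-years: {1000 * adjusted_rate:.2f}")

Two plans standardized to the same weights can then be compared directly, which is what allows the unadjusted and adjusted comparisons in that abstract to point in opposite directions.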
abstract_id: PUBMED:18823447 Transportation brokerage services and Medicaid beneficiaries' access to care. Objective: To examine the effect of capitated transportation brokerage services on Medicaid beneficiaries' access to care and expenditures. Data Sources/study Setting: The study period from 1996 to 1999 corresponds to a natural experiment during which Georgia and Kentucky implemented transportation brokerage services. Effects were estimated for asthmatic children and diabetic adults. Study Design: We used difference-in-differences models to assess the effects of transportation brokerage services on access to care, measured by Medicaid expenditures and health services use. The study design is strengthened by the staggered implementation dates between states and within each state. Principal Findings: For asthmatic children, transportation brokerage services increased nonemergency transportation expenditures and the likelihood of using any services; reductions in monthly expenditures more than offset the increased transportation costs. For diabetic adults, nonemergency transportation costs decreased despite increased monthly use of health services; average monthly medical expenditures and the likelihood of hospital admission for an ambulatory care-sensitive condition (ACSC) also decreased. Conclusions: The shift to transportation brokerage services improved access to care among Medicaid beneficiaries and decreased expenditures. The increase in access, combined with reduced hospitalizations for asthmatic children and ACSC admissions for diabetic adults, is suggestive of improvements in health outcomes. abstract_id: PUBMED:33341118 Effects of variations in access to care for children with atopic dermatitis. Background: An estimated 50% of children in the US are Medicaid-insured. Some of these patients have poor health literacy and limited access to medications and specialty care. These factors affect treatment utilization for pediatric patients with atopic dermatitis (AD), the most common inflammatory skin disease in children. This study assesses and compares treatment patterns and healthcare resource utilization (HCRU) between large cohorts of Medicaid and commercially insured children with AD. Methods: Pediatric patients with AD were identified from 2 large US healthcare claims databases (2011-2016). Included patients had continuous health plan eligibility for ≥6 months before and ≥12 months after the first AD diagnosis (index date). Patients with an autoimmune disease diagnosis within 6 months of the index date were excluded. Treatment patterns and all-cause and AD-related HCRU during the observation period were compared between commercially and Medicaid-insured children. Results: A minority of children were evaluated by a dermatology or allergy/immunology specialist. Several significant differences were observed between commercially and Medicaid-insured children with AD. Disparities detected for Medicaid-insured children included: comparatively fewer received specialist care, emergency department and urgent care center utilization was higher, a greater proportion had asthma and non-atopic morbidities, high-potency topical corticosteroids and calcineurin inhibitors were less often prescribed, and prescriptions for antihistamines were more than three times higher, despite similar rates of comorbid asthma and allergies among antihistamine users. Treatment patterns also varied substantially across physician specialties.
Conclusions: Results suggest barriers in accessing specialty care for all children with AD and significant differences in management between commercially and Medicaid-insured children. These disparities in treatment and access to specialty care may contribute to poor AD control, especially in Medicaid-insured patients. abstract_id: PUBMED:27514245 Diet quality, risk factors and access to care among low-income uninsured American adults in states expanding Medicaid vs. states not expanding under the Affordable Care Act. Background: The Affordable Care Act (ACA) Medicaid expansion varies in availability across states. Purpose: We compared characteristics of low-income uninsured residents in both Medicaid nonexpanding and expanding states with respect to their dietary quality, health risk factors, and access to care. Methods: Data from the 2007-2012 National Health and Nutrition Examination Survey were matched with the Kaiser Family Foundation Medicaid expansion data. Bivariate and multivariate regressions were estimated to assess differences across expanding and non-expanding states. Results: The non-expansion group had a lower Healthy Eating Index score (41.8 vs. 44.1, p-value=0.006), a higher Body Mass Index (29.9 vs. 28.9, p-value=0.032), higher obesity prevalence (41% vs. 33%, p-value=0.007), and lower asthma prevalence (14.8% vs. 19.7%, p-value=0.037) compared with the expansion group. Conclusions: Differences across states in Medicaid coverage under the ACA may lead to widening disparities in health outcomes between expanding and non-expanding states. abstract_id: PUBMED:17506597 Quality of drug treatment of childhood persistent asthma in Maryland Medicaid recipients in transition from managed fee for service to managed capitation. Background: From December 1991 to June 1997, approximately 80% of Maryland's Medicaid recipients were served through a fee-for-service (FFS) managed care delivery system in which assigned primary care providers served as gatekeepers for hospital and specialty services. The remaining 20% of recipients were voluntarily enrolled in 1 of 5 available health maintenance organizations (HMOs). Beginning in June 1997, Maryland required most Medicaid recipients to enroll in capitated managed care organizations (MCOs), also referred to as managed Medicaid plans. Although research has been conducted on the quality of asthma care among MCOs and in MCOs for Medicaid versus non-Medicaid members, the quality of asthma care has been less well studied for MCO patients than for FFS patients. Objective: To determine whether quality of drug use among Medicaid children with persistent asthma was different after the transition from the managed care FFS system to a capitated managed Medicaid system. Methods: This 4-year retrospective cohort study (from June 1, 1996, to December 31, 2000) followed children aged 5 to 18 years with persistent asthma (defined by the existence of at least 1 medical claim with an International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis code of 493.x and receipt of 2 or more pharmacy claims for beta2-agonists in a 6-month period) enrolled in Maryland Medicaid as they transitioned from the managed FFS system to 1 of 4 large capitated MCOs. Children were selected from a review of Medicaid enrollment records and medical and pharmacy FFS claims filed between June 1, 1996, and December 31, 1997. Children with a diagnosis of cystic fibrosis were excluded.
The asthma quality indicator was defined as the proportion of children with persistent asthma who had 2 or more claims for any short-acting beta2-agonist (SABA; metered-dose inhaler, nebulizer, or oral form, defined here as rescue medication) within a 6-month period and who also had at least 1 claim for a controller medication (inhaled corticosteroid, mast-cell stabilizer, or leukotriene-receptor modifier) in the same 6-month period. Subjects were followed from June 1, 1996 (or, if later, the first Medicaid eligibility date), through December 31, 2000 (or, if earlier, the last Medicaid eligibility date). Mean quality indicator rates were calculated for the 2 managed FFS periods (FFS1 and FFS2) and the 6 managed Medicaid 6-month periods. We used generalized estimating equations to test for significant trends over time and to compare changes in the quality indicator in the managed Medicaid plans. Results: There were 3,721 children who met the inclusion and exclusion criteria for the study. The quality indicator (proportion of patients who received a controller medication among those receiving SABAs for asthma) was 62% in managed FFS1 and 57% in managed FFS2. In the first 6 months of the managed Medicaid plans the quality indicator was 56%; it then moved to 57%, 59%, 61%, 66%, and 59% over the ensuing five 6-month observation periods. The results from the generalized estimating equations suggested slight improvement in the quality indicator in the managed Medicaid plans, but the difference was not significant (relative risk 1.01; 95% confidence interval, 0.95-1.08). There was no significant trend in the asthma quality indicator over time in the managed Medicaid plans. Conclusion: There was no distinct improvement or worsening in asthma care as measured by the quality indicator (proportion of patients who received a controller medication among those receiving SABAs for asthma) as children moved from managed FFS to managed Medicaid. Larger sample sizes with no data loss may have produced a different result. abstract_id: PUBMED:12090834 If we prescribe it, will it come? Access to asthma equipment for Medicaid-insured children and adults in the Bronx, NY. Context: Asthma is a major cause of morbidity in the United States. Self-management of asthma requires access to appropriate equipment. Clinical experience in an inner-city practice suggests that families encounter difficulties in filling prescriptions for spacers/holding chambers, peak flow (PF) meters, and nebulizer machines. Objectives: To determine whether Bronx, NY, pharmacies (1) carry spacers/holding chambers, PF meters, and nebulizer machines; (2) accept Medicaid insurance for them; and (3) perceive barriers to reimbursement by Medicaid for this equipment. Design And Setting: Structured telephone survey of 100 Bronx pharmacies randomly selected from the 1999 telephone directory. Participants: Ninety-eight pharmacists and 2 pharmacy technicians in 100 different pharmacies. Main Outcome Measures: Pharmacists' reports of equipment availability, Medicaid acceptance, and reasons for not carrying equipment or accepting Medicaid. Results: Overall equipment availability was as follows: spacers (68%), spacers with masks (57%), adult PF meters (40%), child-range PF meters (24%), and nebulizer machines (56%). For Medicaid recipients, equipment was less available: spacers (45%), spacers with masks (35%), adult PF meters (27%), child-range PF meters (17%), and nebulizer machines (33%).
Surveyed pharmacists reported misconceptions about requirements for Medicaid reimbursement, which included the following: that Durable Medical Equipment permits are required (64% spacers and 33% PF meters), that special forms are needed (17% PF meters), or that this equipment is not covered by Medicaid (14% spacers and 8% PF meters). Of the 100 surveyed pharmacists, 32 reported difficulties with Medicaid reimbursement and 41 had never tried to receive reimbursement. Conclusions: These results suggest that (1) access to spacers/holding chambers, PF meters, and nebulizers for Medicaid-insured families is severely limited in Bronx pharmacies; (2) misunderstandings regarding Medicaid reimbursement policies are common; and (3) interventions to increase the proportion of pharmacies that dispense equipment are needed. Answer: "Mainstreaming," which refers to the integration of Medicaid beneficiaries into managed care options, does not guarantee access to care for Medicaid recipients with asthma. A cross-sectional survey of adult asthmatics followed by a large Northern California medical group found that patients covered by the state's Medicaid program were more likely to report access problems for asthma-related care, such as difficulties in reaching a health care provider by telephone, obtaining a clinic appointment, and obtaining asthma medication. Even after adjusting for clinical and sociodemographic variables, Medicaid patients were more likely to report at least one access problem compared to non-Medicaid patients (PUBMED:11520386). Additionally, Medicaid HMO enrollees had higher age-gender-race adjusted total and ambulatory care-sensitive (ACS) hospital discharge rates than Medicaid recipients enrolled in a primary care case management program under fee-for-service reimbursement, suggesting potential HMO access problems (PUBMED:11759197). Moreover, a survey of Medicaid managed care enrollees with asthma indicated that while patients were well-educated regarding their asthma and rated the quality of information from their providers highly, they did not consistently take prescribed inhaled steroids on a daily basis, and many resided in homes where cigarette smoking was present (PUBMED:15078746). This suggests that knowledge and satisfaction with care do not necessarily translate into optimal self-management or access to a smoke-free environment, which are critical for asthma care. In summary, while mainstreaming aims to provide Medicaid recipients with managed care options, it does not ensure equal access to care for those with asthma, and there are still significant barriers that need to be addressed to improve care for this population.
Instruction: Does EMG (dry needling) reduce myofascial pain symptoms due to cervical nerve root irritation? Abstracts: abstract_id: PUBMED:9298338 Does EMG (dry needling) reduce myofascial pain symptoms due to cervical nerve root irritation? Objective: To determine whether EMG examination at tender points affects myofascial pain symptoms related to cervical nerve root irritation. Methods: Consecutive patients with neck and arm pain had physical examinations immediately before and after having EMGs of bilateral C3-C8 myotomes. Patients were randomly chosen for EMG either at the most tender point along the palpated myofascial band or at a nonselected site. The myotomal presence of ≥30% incidence of normal duration and amplitude, polyphasic motor unit potentials confirmed the diagnosis of cervical nerve root irritation. Results: 52% of patients returned questionnaires 2 weeks after the EMG examination. Group 1 (82/122 patients [67.2%]) averaged pain relief of 51.8 ± 21.9%, lasting a mean of 10.2 ± 8 days; 14% had ≥75% relief. The number of days of pain relief correlated positively with the percentage of pain relief (p < 0.005), but negatively with the number of nerve roots involved on EMG (p < 0.05). Group 2 (23/42 patients [54.8%]) averaged relief of 39.0 ± 18.7%, lasting 8.8 ± 11.2 days. None had ≥75% pain relief. In both groups, the duration of pain symptoms affected the onset of relief. There was evidence of bilateral multiple-level cervical nerve root irritation, noted especially at the bilateral C6 and C7 levels. Conclusion: EMG at tender points on myofascial bands tends to improve symptoms. Needling these points elicits motor endplate activity and twitches, and induces more relief than needling random points. abstract_id: PUBMED:33066556 Effectiveness of Dry Needling for Myofascial Trigger Points Associated with Neck Pain Symptoms: An Updated Systematic Review and Meta-Analysis. Our aim was to evaluate the effect of dry needling alone as compared to sham needling, no intervention, or other physical interventions applied over trigger points (TrPs) related to neck pain symptoms. Randomized controlled trials including one group receiving dry needling for TrPs associated with neck pain were identified in electronic databases. Outcomes included pain intensity, pain-related disability, pressure pain thresholds, and cervical range of motion. The Cochrane risk of bias tool and the Physiotherapy Evidence Database (PEDro) score were used to assess risk of bias (RoB) and methodological quality of the trials. The quality of evidence was assessed by using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach. Between-groups mean differences (MD) and standardized mean differences (SMD) were calculated. Twenty-eight trials were finally included. Dry needling reduced pain immediately after (MD -1.53, 95% CI -2.29 to -0.76) and at short-term (MD -2.31, 95% CI -3.64 to -0.99) when compared with sham/placebo/waiting list/other form of dry needling and, also, at short-term (MD -0.51, 95% CI -0.95 to -0.06) compared with manual therapy. No differences in comparison with other physical therapy interventions were observed. An effect on pain-related disability at the short-term was found when comparing dry needling with sham/placebo/waiting list/other form of dry needling (SMD -0.87, 95% CI -1.60 to -0.14) but not with manual therapy or other interventions.
Dry needling was effective for improving pressure pain thresholds immediately after the intervention (MD 55.48 kPa, 95% CI 27.03 to 83.93). No effect of dry needling on cervical range of motion was found against either comparator group. No between-treatment effect was observed in any outcome at mid-term. Low to moderate evidence suggests that dry needling can be effective for improving pain intensity and pain-related disability in individuals with neck pain symptoms associated with TrPs in the short term. No significant effects on pressure pain sensitivity or cervical range of motion were observed. Registration number: OSF Registry-https://doi.org/10.17605/OSF.IO/P2UWD. abstract_id: PUBMED:37090321 Efficacy of Dry Needling Versus Transcutaneous Electrical Nerve Stimulation in Patients With Neck Pain Due to Myofascial Trigger Points: A Randomized Controlled Trial. Introduction Myofascial pain is defined as pain arising primarily in muscles and associated with multiple trigger points. Among the non-pharmacological methods, trigger point injection and electrotherapy are effective methods to treat myofascial pain syndrome. This study compares the effectiveness of dry needling (DN) and transcutaneous electrical nerve stimulation (TENS) in reducing cervical pain intensity and improving cervical range of motion in patients with neck pain due to myofascial trigger points. Methods Fifty patients were enrolled and randomized into two groups. Patients in group A received dry needling, and those in group B received TENS. Patients were evaluated using the Visual Analog Scale (VAS), Neck Disability Index (NDI), and Cervical Range of Motion (CROM) before the treatment and on days 14 and 28 after the treatment. The unpaired t-test was used to evaluate quantitative data, except for VAS, where the Mann-Whitney U test was used. All quantitative variables were normally distributed except for pain intensity (VAS), which deviated from the normal distribution. The significance level was set at p = 0.05. Results Both DN and TENS groups showed significant improvement in VAS, NDI, and CROM between days 0 and 28 (p < 0.001). The DN group showed greater improvements in pain intensity from day 0 to day 28 (p < 0.001). Between days 0 and 28, there was no discernible difference in NDI changes between the groups (p = 0.157 and p = 0.799, respectively). Mixed results were obtained for CROM, with significant improvement of cervical flexion in the dry needling group (p < 0.008) and significant improvement of cervical rotation to the painful side in the TENS group (p < 0.001). Conclusion Both dry needling and TENS are effective in reducing pain and improving NDI and CROM in patients with neck pain due to myofascial trigger points. However, as dry needling is more effective in pain reduction, a single session of dry needling is more beneficial and cost-effective as compared to multiple sessions of TENS. abstract_id: PUBMED:34114639 Dry Needling Versus Trigger Point Injection for Neck Pain Symptoms Associated with Myofascial Trigger Points: A Systematic Review and Meta-Analysis. Objective: To examine the effects of dry needling against trigger point (TrP) injections (wet needling) applied to TrPs associated with neck pain. Methods: Electronic databases were searched for randomized clinical trials in which dry needling was compared with TrP injections (wet needling) applied to neck muscles and in which outcomes on pain or pain-related disability were collected.
Secondary outcomes consisted of pressure pain thresholds, cervical mobility, and psychological factors. The Cochrane Risk of Bias tool, the Physiotherapy Evidence Database score, and the Grading of Recommendations Assessment, Development, and Evaluation approach were used. Results: Six trials were included. TrP injection reduced pain intensity (mean difference [MD] -2.13, 95% confidence interval [CI] -3.22 to -1.03) with a large effect size (standardized mean difference [SMD] -1.46, 95% CI -2.27 to -0.65) as compared with dry needling. No differences between TrP injection and dry needling were found for pain-related disability (MD 0.9, 95% CI -3.09 to 4.89), pressure pain thresholds (MD 25.78 kPa, 95% CI -6.43 to 57.99 kPa), cervical lateral-flexion (MD 2.02°, 95% CI -0.19° to 4.24°), or depression (SMD -0.22, 95% CI -0.85 to 0.41). The risk of bias was low, but the heterogeneity and imprecision of results downgraded the evidence level. Conclusion: Low evidence suggests a superior effect of TrP injection (wet needling) for decreasing pain of cervical muscle TrPs in the short term as compared with dry needling. No significant effects on other outcomes (very low-quality evidence) were observed. Level Of Evidence: Therapy, level 1a. abstract_id: PUBMED:32935643 The effectiveness of the masseteric nerve block compared with trigger point injections and dry needling in myofascial pain. Objective: To compare the efficacy of three different treatment methods in the management of myofascial pain: masseteric nerve block (MNB), trigger point injection with local anesthetic (LA), and dry needling (DN). Methods: Forty-five patients diagnosed with myofascial pain and trigger points in the masseter muscles were treated with MNB (n = 15), DN (n = 15), and trigger point injection with LA (n = 15). Pain on palpation (PoP), pain on function (PoF), and maximum mouth opening (MMO) scores were measured and compared before the injections and at all follow-ups after the injections. Results: MMO values were significantly increased in each group. The decrease in PoF values was statistically significant between the groups at 12 weeks (baseline time period). Discussion: Results of the present study indicate that MNB was not as effective as trigger point injection with local anesthetic or dry needling in the management of masticatory myofascial pain. abstract_id: PUBMED:31361323 Electromyographic Activity Evolution of Local Twitch Responses During Dry Needling of Latent Trigger Points in the Gastrocnemius Muscle: A Cross-Sectional Study. Objective: Trigger points (TrPs) are hypersensitive spots within taut bands of skeletal muscles that elicit referred pain and motor changes. Among the variety of techniques used for treating TrPs, dry needling is one of the most commonly applied interventions. The question of eliciting local twitch responses (LTRs) during TrP dry needling remains unclear. Our main aim was to investigate the evolution of the electromyographic (EMG) peak activity of each LTR elicited during dry needling into latent TrPs of the gastrocnemius medialis muscle. Methods: Twenty asymptomatic subjects with latent TrPs in the gastrocnemius medialis muscle participated in this cross-sectional study. Changes in EMG signal amplitude (root mean square [RMS]) with superficial EMG were assessed five minutes before, during, and five minutes after dry needling. The peak RMS score of each LTR was calculated (every 0.5 sec).
Results: Analysis of variance revealed a significant effect (F = 29.069, P < 0.001), showing a significant decrease of RMS peak amplitude after each subsequent LTR. Differences were significant (P < 0.001) during the first three LTRs and stable thereafter until the end of the procedure. No changes (P = 0.958) were found for mean RMS data at rest before (mean = 65.2 mV, 95% confidence interval [CI] = 47.3-83.1) and after (61.0 mV, 95% CI = 42.3-79.7) dry needling. Conclusions: We found that, in a series of LTRs elicited during the application of dry needling over latent TrPs in the medial gastrocnemius muscle, the RMS peak amplitude of each subsequent LTR decreased as compared with the initial RMS peak amplitude of previous LTRs. No changes in superficial EMG activity at rest were observed after dry needling of latent TrPs of the gastrocnemius medialis muscle. abstract_id: PUBMED:34632909 Treatment of thoracic spine pain and pseudovisceral symptoms with dry needling and manual therapy in a 78-year-old female: A case report. Design: Case Report. Background And Purpose: Thoracic spine pain and movement dysfunction is a relatively common problem in the general population but has received little attention in research. Dry needling is frequently utilized by physical therapists and has been shown to reduce pain and improve function in areas such as the cervical and lumbar spine, shoulder, hip, and knee. However, little research has been performed on the use of dry needling in the thoracic area, with only two prior case studies being published. This case report documents the use of dry needling and manual therapy to treat a patient with symptoms of thoracic spine pain with concurrent pseudovisceral symptoms of chest pain and difficulty breathing. Case Description: The patient was a 78-year-old female who was referred to physical therapy with complaints of pain focused in her mid-thoracic spine radiating anteriorly into her chest. The patient underwent medical diagnostic tests prior to her referral to physical therapy to rule out cardiac pathology, pulmonary pathology, and fracture. She was treated with dry needling and manual therapy for a total of four sessions over a two-week period. Outcomes: Fifteen days after her initial evaluation, the patient reported she was pain-free with a pain score of 0/10 on the VAS. She reported she was no longer taking pain medication or NSAIDs. She was able to return to normal daily activities without restriction and to a normal sleep pattern. Her score on the Oswestry disability index was 42% impairment at intake and 2% impairment after 4 treatments. At follow-up 6 weeks and 12 weeks after her discharge from physical therapy, the patient reported she continued to be pain-free. abstract_id: PUBMED:31986897 Benefits of dry needling of myofascial trigger points on autonomic function and photoelectric plethysmography in patients with fibromyalgia syndrome. Background: Fibromyalgia syndrome (FMS) is a condition characterised by the presence of chronic, widespread musculoskeletal pain, low pain threshold and hyperalgesia. Myofascial trigger points (MTrPs) may worsen symptoms in patients with FMS. Objective: The purpose of this randomised controlled trial was to compare the effects of dry needling and transcutaneous electrical nerve stimulation (TENS) on pain intensity, heart rate variability, galvanic response and oxygen saturation (SpO2). Methods: 74 subjects with FMS were recruited and randomly assigned to either the dry needling group or the TENS group.
Outcome measures (pain intensity, heart rate variability, galvanic skin response, SpO2 and photoplethysmography) were evaluated at baseline and after 6 weeks of treatment. 2×2 mixed-model analyses of variance (ANOVAs) were performed. Results: The mixed-model ANOVAs showed significant differences between groups for the sensory dimension of pain, affective dimension of pain, total dimension of pain, visual analogue scale (VAS) and present pain intensity (PPI) (P=0.001). ANOVAs also showed that significant differences between groups were achieved for very low frequency power of heart rate variability (P=0.008) and low frequency power (P=0.033). There were no significant differences between the dry needling and TENS groups on the spectral analysis of the photoplethysmography and SpO2. Conclusions: This trial showed that application of dry needling therapy and TENS reduced pain attributable to MTrPs in patients with FMS, with greater improvements reported in the dry needling group across all dimensions of pain. Additionally, there were between-intervention differences for several parameters of heart rate variability and galvanic skin responses. Trial Registration Number: NCT02393352. abstract_id: PUBMED:33740346 Effectiveness of Dry Needling with Percutaneous Electrical Nerve Stimulation of High Frequency Versus Low Frequency in Patients with Myofascial Neck Pain. Background: Percutaneous electrical nerve stimulation is a novel treatment modality for the management of acute and chronic myofascial pain syndrome. Objectives: To compare the effectiveness of dry needling combined with percutaneous electrical nerve stimulation of low frequency versus high frequency in patients with chronic myofascial neck pain. Study Design: Randomized, single-blind trial. Setting: Laboratory in an academic institution. Methods: A total of 40 volunteer patients with chronic neck pain were randomly divided into 2 groups. All patients initially received deep dry needling in a myofascial trigger point of the upper trapezius. Then, one group received high frequency percutaneous electrical nerve stimulation while the other group received low frequency percutaneous electrical nerve stimulation. The primary outcomes were the visual analog scale and the pressure pain threshold, while the Neck Disability Index and kinesiophobia were secondary outcomes. Results: We detected significant improvements in the visual analog scale score in both groups, without differences between them. We did not observe significant differences in pressure pain threshold in either group. Limitations: Limitations of the study include (1) heterogeneity of the sample in relation to gender, with more women, (2) the small sample size (40 patients), (3) the absence of a placebo group, and (4) the fact that the treatment focused exclusively on the upper trapezius myofascial trigger point. Conclusions: Low and high frequency percutaneous electrical nerve stimulation combined with deep dry needling showed similar effects, since no differences between groups were observed on any of the outcome measures. High- and low-frequency percutaneous electrical nerve stimulation generated changes in pain intensity and disability, but not in pressure pain threshold or fear of movement.
Objective: The aim of this study was to relate forearm anthropometric measures to ultrasound pronator teres depth to determine the necessary needle length to prevent median nerve (MN) injury during pronator teres dry needling. Methods: We conducted a study employing a diagnostic accuracy prediction model (NCT03308279) at a Spanish university center. The study recruited 65 participants to predict the depth of the MN (measured with ultrasound) in the pronator teres using a decision tree algorithm to reduce the risk of MN puncture using 2 needle lengths (13 mm or 25 mm). The decision tree was developed by automatically selecting a cutoff for body mass index, forearm length and circumference, and pronator teres thickness. Results: For forearm circumferences ≤27.5 cm, the predictive value for the 13-mm needle was 92%. For forearm circumferences &gt;27.5 cm and forearm lengths ≤26.75 cm, the predictive value for the 25-mm needle was 100%. Conclusion: Based upon the findings of this study, we suggest that needle length should be selected according to forearm anthropometric measures to prevent MN injury during pronator teres dry needling. Answer: Yes, EMG dry needling appears to reduce myofascial pain symptoms related to cervical nerve root irritation. According to a study (PUBMED:9298338), EMG examination at tender points along myofascial bands showed an average pain relief of 51.8% in 67.2% of patients, with the relief lasting an average of 10.2 days. The study concluded that needling at tender points on myofascial bands tends to improve symptoms and induces more relief than when needling random points. Additionally, other studies have supported the effectiveness of dry needling for myofascial pain. A systematic review and meta-analysis (PUBMED:33066556) found that dry needling reduced pain immediately after and at short-term when compared with sham/placebo/waiting list/other forms of dry needling, and also when compared with manual therapy. It was also effective for improving pressure pain thresholds immediately after the intervention. Another randomized controlled trial (PUBMED:37090321) compared the efficacy of dry needling versus transcutaneous electrical nerve stimulation (TENS) in patients with neck pain due to myofascial trigger points. The study found that both treatments were effective in reducing pain and improving neck disability index and cervical range of motion, with dry needling showing greater improvements in pain intensity. Furthermore, a systematic review and meta-analysis (PUBMED:34114639) indicated that trigger point injection (wet needling) may have a superior effect for decreasing pain of cervical muscle trigger points in the short term as compared with dry needling, although no significant effects on other outcomes were observed. In summary, the evidence suggests that EMG dry needling can be an effective intervention for reducing myofascial pain symptoms due to cervical nerve root irritation.
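The decision tree reported in PUBMED:31262581 reduces, per the abstract, to two published cutoffs, which can be transcribed directly as a rule. The Python sketch below encodes only the branches stated there; how the model classifies a forearm with circumference above 27.5 cm and length above 26.75 cm is not reported in the abstract, so that branch is deliberately left open rather than guessed.

def needle_length_mm(forearm_circumference_cm, forearm_length_cm):
    # Cutoffs as reported in the abstract of PUBMED:31262581.
    if forearm_circumference_cm <= 27.5:
        return 13   # predictive value 92% per the abstract
    if forearm_length_cm <= 26.75:
        return 25   # predictive value 100% per the abstract
    return None     # combination not covered by the published cutoffs

print(needle_length_mm(26.0, 25.0))  # -> 13
print(needle_length_mm(29.0, 26.0))  # -> 25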
Instruction: Is one-stage ureterocele repair possible in children? Abstracts: abstract_id: PUBMED:12189752 Is one-stage ureterocele repair possible in children? Aim: To determine the long-term results of ureterocele repair, bearing in mind the relative rarity of the malformation, its very polymorphic appearance and the diversity of treatments. Material And Methods: Long-term results were assessed by postoperative follow-up of 126 children with 131 ureteroceles between 1970-2000. Results: With a mean follow-up of 72 months, only 64.2% of children were cured after a one-stage procedure. According to the anatomical type, favourable results were obtained in 81.6% of cases with a single lumen and 57.9% of cases with a double lumen. Treatment success rates for single or double lumens were 73% in the case of intravesical implantation and only 53.9% (34/63 children) in ectopic forms. According to the technique, cure rates were 67.6% after distal incision in 34 children, 61.9% after total nephrectomy or superior pole nephrectomy in 42 children, 50% after ureterocele repair and ureterovesical reimplantation in 20 patients, 75% after total resection of the pathological lumen, parietal reconstruction and ipsilateral and/or contralateral reimplantation in another 20 patients. Conclusions: A one-stage procedure is only able to cure 2/3 of patients. In view of the tendency to progressive regression of often monstrous distensions during the neonatal period, first-line treatment should consist of a distal incision, followed, in the case of recurrent infections, by partial or total nephrectomy, while reserving the intravesical approach to cases with recurrent pyelonephritis. When this surgery is performed on older children or adolescents, the ureteroceles will be smaller with a lesser risk of sphincter damage. abstract_id: PUBMED:12477658 Is one-stage ureterocele repair possible in children? Aim: To determine the long-term results of ureterocele repair, bearing in mind the relative rarity of the malformation, its very polymorphic appearance and the diversity of treatments. Materials And Methods: Long-term results were assessed by postoperative follow-up of 126 children with 131 ureteroceles between 1970-2000. Results: With a mean follow-up of 72 months, only 64.2% of children were cured after a one-stage procedure. According to the anatomical type, favourable results were obtained in 81.6% of cases with a single ureter and 57.9% of cases with a duplicated ureter. Treatment success rates for single or duplicated ureters were 73% in the case of intravesical implantation and only 53.9% (34/63 children) in ectopic forms. According to the technique, cure rates were 67.6% after distal incision in 34 children, 61.9% after total nephrectomy or upper pole nephrectomy in 42 children, 50% after ureterocele repair and ureterovesical reimplantation in 20 patients, 75% after total resection of the pathological ureter, parietal reconstruction and ipsilateral and/or contralateral reimplantation in another 20 patients. Conclusions: A one-stage procedure is only able to cure 2/3 of patients. In view of the tendency to progressive regression of often monstrous distensions during the neonatal period, first-line treatment should consist of a distal incision, followed, in the case of recurrent infections, by partial or total nephrectomy, while reserving the intravesical approach to cases with recurrent pyelonephritis. 
When this surgery is performed on older children or adolescents, the ureteroceles will be smaller, with a lower risk of sphincter damage. abstract_id: PUBMED:29135146 Laparoscopic heminephrureterectomy for duplex kidney in children. Introduction: Duplication of the upper urinary tract is one of the most common congenital urological anomalies. In patients with critically decreased or lost function of one of the renal segments, heminephrureterectomy is usually the treatment of choice. Until recently, this was an open surgery; in cases of complete removal of the ureter, an additional incision in the iliac region was required. Currently, heminephrureterectomy is increasingly performed laparoscopically. We report the experience with laparoscopic heminephrureterectomy (LHNUE) in 10 clinics in Russia and Belarus. Some of them have already used this technique for 10 years. Aim: The study aimed to improve the treatment results in children with urodynamic dysfunction due to a duplicated upper urinary tract. Materials And Methods: We retrospectively analyzed medical records of 111 children treated from 2007 to 2016. There were 26 (23.4%) boys and 85 (76.6%) girls with a mean age of 44.6 months (range 2 to 170) at the time of surgery. All children included in the study had complete duplex kidneys, including 51 (45.9%) right-sided and 60 (54.1%) left-sided. All the children underwent LHNUE for a critical decrease or absence of function of the upper or lower segment of the duplex kidney caused by the following pathology: obstruction of the ureterovesical junction with the development of a megaureter of the upper ureteral segment in 57 (51.4%) patients; ureterocele in 28 (25.2%); extra-vesical ectopic ureter with urinary incontinence in 10 (9.0%) girls; and high-grade VUR in 16 (14.4%) patients. Results: There were no conversions in this series of patients. The mean operative time was 135 minutes (range 60-240 min). All children included in the study were followed for 1 to 9 years after surgery. Complications occurred in 17 (15.3%) patients, of whom 12 (10.8%) required repeat surgery. In one patient with loss of lower pole function, the treatment result was considered unsatisfactory. Conclusion: LHNUE for duplex kidney is performed by a few clinics and is still at the stage of development and accumulation of experience. Nevertheless, LHNUE, though an effective treatment modality, carries the risk of reducing or losing the function of the retained segment. abstract_id: PUBMED:470010 A one-stage surgical approach to ectopic ureterocele. Whenever a ureterectomy for treatment of ectopic ureterocele is appropriate, total extravesical excision avoiding a ureteral stump is the preferred approach, to avoid a pyoureter or alternating diverticulum and possible surgical complications from an intravesical procedure in small infants and children with large and distorting ectopic ureteroceles. abstract_id: PUBMED:1895430 Preperitoneal approach for hernia repair: clinical application in pediatric urology. The preperitoneal approach for inguinal hernia repair is rarely indicated in children. However, we report on its clinical usefulness for children in whom the perivesical space must be exposed, such as during surgical repair of bladder exstrophy, ureterocele and ureteral reimplantation. This approach allows for a true high ligation of the hernia sac, and the repair is safe, fast and effective. abstract_id: PUBMED:26530838 Simultaneous bilateral robotic-assisted laparoscopic procedures in children.
Our main objective is to report the feasibility of performing simultaneous robotic-assisted laparoscopic (RAL) heminephrectomy with contralateral ureteroureterostomy in children with bilateral duplicated systems. Three female children with bilateral congenital renal/ureteral anomalies underwent concurrent RAL unilateral partial nephrectomy with ureterectomy and contralateral ureteroureterostomy with redundant ureterectomy using a four/five-port approach. Mean age at repair was 32.9 months (range 7-46 months) and mean weight was 13.7 kg (range 10.4-13.6 kg). The RAL heminephroureterectomy and contralateral ureteroureterostomy were performed via a four-port approach (five ports in one patient), and the patients were repositioned and draped when moving to the other side. Mean operative time was 446 min (range 356-503 min). Mean estimated blood loss was 23.3 cc (range 10-50 cc). Postoperative length of stay was 2 days for two patients and 1 day for one patient (mean = 1.7 days). Mean length of follow-up was 18.3 months (range 7-36 months). No significant intraoperative or postoperative complications occurred for any of the three patients. Two children had no hydronephrosis on postoperative imaging in follow-up, and one child had a small, stable residual pararenal fluid collection on the side of the heminephrectomy. Two patients underwent postoperative ureteral stent removal under general anesthesia. In children with bilateral duplicated urinary tracts with ureterocele, ectopic ureter, and/or vesicoureteral reflux, laparoscopic repair with robotic assistance can be accomplished safely in a single operative procedure with a short hospital stay. abstract_id: PUBMED:24072201 Single-stage surgical approach in complicated paediatric ureteral duplication: surgical and functional outcome. Purpose: The surgical approach to children with complicated ureteral duplication remains controversial. Our aim was to determine the outcome of children with complicated renal duplication undergoing a single-stage surgical approach with laparoscopic partial nephrectomy and open bladder reconstruction. Methods: Data of patients from 2004 to 2008 were investigated retrospectively. Outcome was analyzed in terms of postoperative course, renal function, urinary tract infection and functional voiding. Results: Thirteen patients were treated with laparoscopic partial nephrectomy and reconstruction of the lower urinary tract in a single-stage approach. Median age at operation was 15 months (2-63 months). One girl had a renal triplication. 7/13 patients presented with an ectopic ureterocele and two with an ectopic ureter; severe vesicoureteral reflux occurred in 6 patients. All patients had non-functioning renal moieties. Mean operative time was 239 min (129-309; SD 50). One re-operation was necessary 4 years after primary surgery due to a pole remnant. All patients had uneventful recoveries without evidence of recurrent UTI. Postoperative 99mTc-MAG3 scans showed no significant reduction of partial renal function (p = 0.4) and no signs of obstruction (p = 0.188). During a median follow-up of 60 months (49-86), dysfunctional voiding occurred in one patient. Conclusions: In children with complicated ureteral duplication, a definitive single-stage procedure is feasible and shows excellent functional results. abstract_id: PUBMED:34492206 Laparoscopic Partial Nephrectomy for Duplex Kidneys in Infants and Children: How We Do It. Duplication anomalies of the kidney represent common congenital malformations of the urinary tract.
A duplex kidney often has one pole that is poorly functioning or nonfunctioning. In that case, surgery may be indicated to remove the nonfunctioning pole. The most common indications for partial nephrectomy in pediatrics include symptomatic vesicoureteral reflux to the nonfunctioning pole and/or an ectopic ureter or ureterocele causing urinary incontinence. In this article, we describe the technique of laparoscopic partial nephrectomy in infants and children with duplex kidney. A surgical procedure properly executed following critical technical steps is the key factor for the success of surgery. abstract_id: PUBMED:4096511 Treatment of extra-vesical ureteroceles associated with pyeloureteral duplication in children. The authors report their experience with the treatment of extra-vesical ureteroceles associated with pyelo-ureteral duplication. Twelve nephrectomies were performed because of destruction of the kidney. A one-stage complete repair (upper partial nephroureterectomy and excision of the ureterocele) was performed in 24 cases. It is a difficult technique but achieves good results (only three reoperations). The upper pole partial nephroureterectomy (simplified method, 31 cases) is easier to perform; twelve re-operations were necessary due to bladder outlet obstruction or vesicoureteral reflux. This complication is impossible to anticipate after the simplified method. Long-term follow-up is therefore essential after upper pole partial nephroureterectomy. abstract_id: PUBMED:16097714 Treatment of ureterocele in children. Twenty-six patients (15 children and 11 adults) with ureterocele were treated. Heterotopic ureterocele prevailed over common forms in children, whereas the reverse was true in adults (66.7% and 33.3% versus 27.3% and 72.7%, respectively). Recurrences of ureterocele are rather frequent. A modified surgical correction of ureterocele is proposed, which consists of intravesical mobilization of the ureterocele together with the terminal part of the ureter, followed by resection and transverse re-implantation with antireflux protection. This therapeutic policy facilitates performance of the operation and enhances the reliability of the closing antireflux mechanism. Endoscopic correction of ureterocele with one-stage lithoextraction was performed in 7 adult patients who underwent transverse dissection of the ureterocele; in large ureteroceles, partial resection of the lower part was performed. The residual upper part acts as a closing valve as the urinary bladder fills. Long-term results (3-year follow-up maximum) were studied in 23 patients. Dissection of ureterocele with ureterocystoneostomy was associated with recurrences in 37.6% of patients. The best results were achieved with the modified plastic operations and endoscopic correction of ureterocele. A persistent clinical effect was seen in all the patients. Answer: Yes, one-stage ureterocele repair is possible in children, but the success rate varies depending on several factors. According to the long-term results assessed by postoperative follow-up of 126 children with 131 ureteroceles between 1970 and 2000, only 64.2% of children were cured after a one-stage procedure (PUBMED:12189752, PUBMED:12477658). The success rates differed based on the anatomical type, with 81.6% favorable results in cases with a single ureter and 57.9% in cases with a duplicated ureter (PUBMED:12477658). The cure rates also varied according to the surgical technique used, ranging from 50% to 75% (PUBMED:12189752).
The studies suggest that the first-line treatment should consist of a distal incision, followed by partial or total nephrectomy in the case of recurrent infections, while reserving the intravesical approach for cases with recurrent pyelonephritis (PUBMED:12189752, PUBMED:12477658). When surgery is performed on older children or adolescents, the ureteroceles tend to be smaller, which reduces the risk of sphincter damage (PUBMED:12477658). Other approaches, such as laparoscopic heminephrureterectomy (LHNUE) for duplex kidney, have been performed with no conversions in a series of patients, but complications occurred in 15.3% of patients, with 10.8% requiring repeat surgery (PUBMED:29135146). A single-stage surgical approach with laparoscopic partial nephrectomy and open bladder reconstruction has also been reported to show excellent functional results in children with complicated ureteral duplication (PUBMED:24072201). In summary, while one-stage ureterocele repair is possible in children, the cure rates and outcomes can vary, and careful consideration of the anatomical type, age of the child, and surgical technique is necessary to optimize results. Instruction: Does selection bias explain the obesity paradox among individuals with cardiovascular disease? Abstracts: abstract_id: PUBMED:25867852 Does selection bias explain the obesity paradox among individuals with cardiovascular disease? Objectives: The objectives of this article are to demonstrate that the obesity paradox may be explained by collider stratification bias and to estimate the biasing effects of unmeasured common causes of cardiovascular disease (CVD) and mortality on the observed obesity-mortality relationship. Methods: We use directed acyclic graphs, regression modeling, and sensitivity analyses to explore whether the observed protective effect of obesity among individuals with CVD can be plausibly attributed to selection bias. Data from the Third National Health and Nutrition Examination Survey were used for the analyses. Results: The adjusted total effect of obesity on mortality was a risk difference (RD) of 0.03 (95% confidence interval [CI]: 0.02, 0.05). However, the controlled direct effect of obesity on mortality among individuals without CVD was RD = 0.03 (95% CI: 0.01, 0.05) and RD = -0.12 (95% CI: -0.20, -0.04) among individuals with CVD. The adjusted total effect estimate demonstrates an increased number of deaths among obese individuals relative to nonobese counterparts, whereas the controlled direct effect shows a paradoxical decrease in mortality among obese individuals with CVD. Conclusions: Sensitivity analysis demonstrates that unmeasured confounding of the mediator-outcome relationship provides a sufficient explanation for the observed protective effect of obesity on mortality among individuals with CVD. abstract_id: PUBMED:24525165 The obesity paradox: understanding the effect of obesity on mortality among individuals with cardiovascular disease. Objective: To discuss possible explanations for the obesity paradox and explore whether the paradox can be attributed to a form of selection bias known as collider stratification bias. Method: The paper is divided into three parts. First, possible explanations for the obesity paradox are reviewed. Second, a simulated example is provided to describe collider stratification bias and how it could generate the obesity paradox. Finally, an example is provided using data from 17,636 participants in the US National Health and Nutrition Examination Survey (NHANES III). Generalized linear models were fit to assess the effect of obesity on mortality both in the general population and among individuals with diagnosed cardiovascular disease (CVD). Additionally, results from a bias analysis are presented. Results: In the general population, the adjusted risk ratio relating obesity and all-cause mortality was 1.24 (95% CI 1.11, 1.39). Adjusted risk ratios comparing obese and non-obese among individuals with and without CVD were 0.79 (95% CI 0.68, 0.91) and 1.30 (95% CI 1.12, 1.50), indicating that obesity has a protective association among individuals with CVD. Conclusion: Results demonstrate that collider stratification bias is one plausible explanation for the obesity paradox. After conditioning on CVD status in the design or analysis, obesity can appear protective among individuals with CVD. abstract_id: PUBMED:36106800 Collider bias and the obesity paradox. Obesity paradoxes have been reported in many diseases to date. As the wording "paradox" indicates, our intuition rejects the hypothesis that obese people have a better life expectancy or fewer cardiovascular events.
Instruction: Does selection bias explain the obesity paradox among individuals with cardiovascular disease? Abstracts: abstract_id: PUBMED:25867852 Does selection bias explain the obesity paradox among individuals with cardiovascular disease? Objectives: The objectives of this article are to demonstrate that the obesity paradox may be explained by collider stratification bias and to estimate the biasing effects of unmeasured common causes of cardiovascular disease (CVD) and mortality on the observed obesity-mortality relationship. Methods: We use directed acyclic graphs, regression modeling, and sensitivity analyses to explore whether the observed protective effect of obesity among individuals with CVD can be plausibly attributed to selection bias. Data from the Third National Health and Nutrition Examination Survey was used for the analyses. Results: The adjusted total effect of obesity on mortality was a risk difference (RD) of 0.03 (95% confidence interval [CI]: 0.02, 0.05). However, the controlled direct effect of obesity on mortality among individuals without CVD was RD = 0.03 (95% CI: 0.01, 0.05) and RD = -0.12 (95% CI: -0.20, -0.04) among individuals with CVD. The adjusted total effect estimate demonstrates an increased number of deaths among obese individuals relative to nonobese counterparts, whereas the controlled direct effect shows a paradoxical decrease in mortality among obese individuals with CVD. Conclusions: Sensitivity analysis demonstrates that unmeasured confounding of the mediator-outcome relationship provides a sufficient explanation for the observed protective effect of obesity on mortality among individuals with CVD. abstract_id: PUBMED:24525165 The obesity paradox: understanding the effect of obesity on mortality among individuals with cardiovascular disease. Objective: To discuss possible explanations for the obesity paradox and explore whether the paradox can be attributed to a form of selection bias known as collider stratification bias. Method: The paper is divided into three parts. First, possible explanations for the obesity paradox are reviewed. Second, a simulated example is provided to describe collider stratification bias and how it could generate the obesity paradox. Finally, an example is provided using data from 17,636 participants in the US National Health and Nutrition Examination Survey (NHANES III). Generalized linear models were fit to assess the effect of obesity on mortality both in the general population and among individuals with diagnosed cardiovascular disease (CVD). Additionally, results from a bias analysis are presented. Results: In the general population, the adjusted risk ratio relating obesity and all-cause mortality was 1.24 (95% CI 1.11, 1.39). Adjusted risk ratios comparing obese and non-obese among individuals with and without CVD were 0.79 (95% CI 0.68, 0.91) and 1.30 (95% CI 1.12, 1.50), indicating that obesity has a protective association among individuals with CVD. Conclusion: Results demonstrate that collider stratification bias is one plausible explanation for the obesity paradox. After conditioning on CVD status in the design or analysis, obesity can appear protective among individuals with CVD. abstract_id: PUBMED:36106800 Collider bias and the obesity paradox. Obesity paradoxes have been reported in many diseases to date. As the wording "paradox" indicates, our intuition rejects the hypothesis that obese people have a better life expectancy or fewer cardiovascular events.
One of the most plausible explanations for the obesity paradox is collider bias, but controversy about this is ongoing. If the findings of the original research are affected by collider bias, meta-analyses of that research will also be affected by the same bias. It is to be hoped that the use of appropriate analytical techniques will enable the true nature of the obesity paradox to become clear. abstract_id: PUBMED:32606858 Is the Obesity Paradox in Type 2 Diabetes Due to Artefacts of Biases? An Analysis of Pooled Cohort Data from the Heinz Nixdorf Recall Study and the Study of Health in Pomerania. Aims/hypothesis: There is controversy over whether an obesity paradox exists in type 2 diabetes, i.e., that mortality is lowest in overweight or obese individuals. We examined the role of potential biases in the obesity paradox. Methods: From two regional population-based German cohort studies - the Heinz Nixdorf Recall Study and the Study of Health in Pomerania (baseline examinations 2000-2003/1997-2001) - 1187 persons with diabetes at baseline were included (mean age 62.6 years, 60.9% males). Diabetes was ascertained by self-report of physician's diagnosis, antidiabetic medication, fasting/random glucose or haemoglobin A1c. Mortality data were assessed for up to 17.7 years. We used restricted cubic splines and Cox regression models to assess associations between body mass index (BMI) and mortality. Sensitivity analyses addressed, inter alia, exclusion of early death cases, of persons with cancer, kidney disease, or a history of cardiovascular disease, and of ever-smokers. Furthermore, we examined the role of treatment bias and collider bias for the obesity paradox. Results: In spline models, mortality risk was lowest for BMI at about 31 kg/m2. Sensitivity analyses carried out one after another had hardly any impact on this result. In our cohort, persons with diabetes and BMI ≥30 kg/m2 did not have better treatment than non-obese patients, and we found that collider bias played only a minor role in the obesity paradox. Conclusion: In a cohort of 1187 persons with diabetes, mortality risk was lowest in persons with moderate obesity. This result could not be explained by a variety of sensitivity analyses. abstract_id: PUBMED:34453272 Obesity paradox in joint replacement for osteoarthritis - truth or paradox? Obesity is associated with an increased risk of cardiovascular disease (CVD) and other adverse health outcomes. In patients with pre-existing heart failure or coronary heart disease, obese individuals have a more favourable prognosis compared to individuals who are of normal weight. This paradoxical relationship between obesity and CVD has been termed the 'obesity paradox'. This phenomenon has also been observed in patients with other cardiovascular conditions and diseases of the respiratory and renal systems. Taking into consideration the well-established relationship between osteoarthritis (OA) and CVD, emerging evidence shows that overweight and obese individuals undergoing total hip or knee replacement for OA have lower mortality risk compared with normal weight individuals, suggesting an obesity paradox. Factors proposed to explain the obesity paradox include the role of cardiorespiratory fitness ("fat but fit"), the increased amount of lean mass in obese people, additional adipose tissue serving as a metabolic reserve, biases such as reverse causation and confounding by smoking, and the co-existence of older age and specific comorbidities such as CVD.
A wealth of evidence suggests that higher levels of fitness are accompanied by prolonged life expectancy across all levels of adiposity and that the increased mortality risk attributed to obesity can be attenuated with increased fitness. For patients about to have joint replacement, improving fitness levels through physical activities or exercises that are attractive and feasible should be a priority if intentional weight loss is unlikely to be achieved. abstract_id: PUBMED:16736275 Predictors of follow-up and assessment of selection bias from dropouts using inverse probability weighting in a cohort of university graduates. Dropouts in cohort studies can introduce selection bias. In this paper, we aimed (i) to assess predictors of retention in a cohort study (the SUN Project) where participants are followed up through biennial mailed questionnaires, and (ii) to evaluate whether differential follow-up introduced selection bias in rate ratio (RR) estimates. The SUN Study recruited 9907 participants from December 1999 to January 2002. Among them, 8647 (87%) participants answered the 2-year follow-up questionnaire. The presence of missing information in key variables at baseline, being younger, being a smoker, having a marital status other than married, being obese/overweight, and having a history of motor vehicle injury were associated with being lost to follow-up, while a self-reported history of cardiovascular disease predicted a higher retention proportion. To assess whether differential follow-up affected RR estimates, we studied the association between body mass index and the risk of hypertension, using inverse probability weighting (IPW) to adjust for confounding and selection bias. Obese individuals had a higher crude rate of hypertension compared with normal-weight participants (RR=6.4, 95% confidence interval (CI): 3.9-10.5). Adjustment for confounding using IPW attenuated the risk of hypertension associated with obesity (RR=2.4, 95% CI: 1.1-5.3). Additional adjustment for selection bias did not modify the estimates. In conclusion, we show that follow-up through mailed questionnaires of a geographically dispersed cohort in Spain is possible. Furthermore, we show that despite existing differences between participants retained and those lost to follow-up, this may not necessarily have an important impact on the RR estimates of hypertension associated with obesity. abstract_id: PUBMED:29801736 Accounting for Selection Bias in Studies of Acute Cardiac Events. Background: In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weights are a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. Methods: The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability of censoring weighted model.
Results: In unweighted multivariable analyses, the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the analyses weighted by inverse probability of censoring weights, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Conclusion: Estimates from the inverse probability of censoring weighted model were substantially greater than unweighted, adjusted estimates across all risk factor categories. This shows the magnitude of selection bias due to pre-hospital mortality and its effect on estimates of the effect of CVD risk factors on mortality. Moreover, the results highlight the utility of using this method to address a common form of bias in cardiovascular research. abstract_id: PUBMED:20127393 Selection bias in a population survey with registry linkage: potential effect on socioeconomic gradient in cardiovascular risk. Non-participation in population studies is likely to be a source of bias in many types of epidemiologic studies, including those describing social disparities in health. The objective of this paper is to present a non-attendance analysis evaluating the possible impact of selection bias when investigating the association between education level and cardiovascular risk factors. Data from the INTERGENE research programme, including 3,610 randomly selected individuals aged 25-74 (1,908 women and 1,702 men) in West Sweden, were used. Only 42% of the invited population participated. Non-attendance analyses were done by comparing data from official registries (Statistics Sweden) covering the entire invited study population. This analysis revealed that participants were more likely than non-participants to be women, to have a university education and a high income, to be married, and to be of Nordic origin. Among participants, all health behaviours studied were significantly related to education. Physical activity, alcohol use and breakfast consumption were higher in the more educated group, while there were more smokers in the less educated group. Central obesity, obesity and hypertension were also significantly associated with lower education level. Weaker associations were observed for blood lipids, diabetes, high plasma glucose level and perceived stress. The socio-demographic differences between participants and non-participants indicated by the register analysis imply potential biases in epidemiological research. For instance, the positive association between education level and frequent alcohol consumption may, in part, be explained by participation bias. For other risk factors studied, an underestimation of the importance of low socioeconomic status may be more likely. abstract_id: PUBMED:35812616 The Obesity Paradox in Chronic Heart Disease and Chronic Obstructive Pulmonary Disease. Obesity in recent years has become an epidemic. A high body mass index (BMI) is one of today's most crucial population health indicators. BMI does not directly quantify body fat but correlates well with simpler body fat measurements. Like smoking, obesity impacts multiple organ systems and is a major modifiable risk factor for countless diseases. Despite this, reports have emerged that obesity positively impacts the prognosis of patients with chronic illnesses such as chronic heart failure (CHF) and chronic obstructive pulmonary disease (COPD), a phenomenon known as the obesity paradox. This article attempts to explain and summarize this phenomenon.
As it stands, two theories explain this paradox. The muscle mass hypothesis states that obese patients are better adapted to weather acute exacerbations because their greater muscle mass provides increased reserve. The other theory focuses on brown adipose tissue and its anti-inflammatory effects on the body. We performed a literature review of research articles published in English from 1983 to the present in the following databases - PubMed, Elsevier, and Google Scholar. The following search strings and Medical Subject Headings (MeSH) terms were used: "Obesity," "Heart Failure," "COPD," and "Cardio-Respiratory Fitness." In this review, we looked at the obesity paradox in heart failure and COPD. We summarized the current literature on the obesity paradox and reviewed its relationship with cardiorespiratory fitness. abstract_id: PUBMED:36808566 Obesity Paradox: Fact or Fiction? Purpose Of Review: Obesity is related to several comorbidities such as type 2 diabetes mellitus, cardiovascular disease, heart failure, and various types of cancers. While the detrimental effect of obesity on both mortality and morbidity has been well established, the concept of the obesity paradox in specific chronic diseases remains a topic of continuous interest. In the present review, we examine the controversial issues around the obesity paradox in certain conditions such as cardiovascular disease, several types of cancer and chronic obstructive pulmonary disease, and the factors that may confound the relation between obesity and mortality. Recent Findings: We refer to the obesity paradox when particular chronic diseases exhibit an interesting "paradoxical" protective association between the body mass index (BMI) and clinical outcomes. This association, however, may be driven by multiple factors, among which are the limitations of the BMI itself; the unintended weight loss precipitated by chronic illness; the various phenotypes of obesity, i.e., sarcopenic obesity or the athlete's obesity phenotype; and the cardiorespiratory fitness levels of the included patients. Recent evidence highlights that previous cardioprotective medications, obesity duration, and smoking status seem to play a role in the obesity paradox. The obesity paradox has been described in a plethora of chronic diseases. It cannot be emphasized enough that the incomplete information received from a single BMI measurement may interfere with the outcomes of studies arguing in favor of the obesity paradox. Thus, the development of carefully designed studies, unhampered by confounding factors, is of great importance. Answer: Yes, selection bias is one of the explanations for the obesity paradox among individuals with cardiovascular disease. The obesity paradox refers to the observation that, contrary to expectations, obese individuals with cardiovascular disease (CVD) sometimes have a better prognosis or lower mortality compared to their non-obese counterparts. Several studies have suggested that this counterintuitive finding may be due to various forms of bias, including selection bias. One form of selection bias that has been implicated in the obesity paradox is collider stratification bias. This occurs when conditioning on a common effect of two variables (in this case, CVD) induces a spurious association between those variables (obesity and mortality) (PUBMED:25867852; PUBMED:24525165).
The controlled direct effect of obesity on mortality among individuals with CVD showed a paradoxical decrease in mortality among obese individuals, which could be attributed to selection bias (PUBMED:25867852). Sensitivity analysis demonstrated that unmeasured confounding of the mediator-outcome relationship provides a sufficient explanation for the observed protective effect of obesity on mortality among individuals with CVD (PUBMED:25867852). Another study using data from the US National Health and Nutrition Examination Survey (NHANES III) also supported the idea that collider stratification bias could generate the obesity paradox, showing that after conditioning on CVD status, obesity appeared protective among individuals with CVD (PUBMED:24525165). Furthermore, the concept of collider bias as an explanation for the obesity paradox has been discussed in the literature, with the understanding that if the original research is affected by collider bias, meta-analyses of that research will also be affected by the same bias (PUBMED:36106800). In conclusion, selection bias, particularly collider stratification bias, is a plausible explanation for the obesity paradox observed among individuals with cardiovascular disease. This suggests that the protective association between obesity and mortality in CVD patients may not be causal but rather a result of methodological issues in the studies that report this phenomenon.
Instruction: Emergency Management of Gallbladder Disease: Are Acute Surgical Units the New Gold Standard? Abstracts: abstract_id: PUBMED:26296834 Emergency Management of Gallbladder Disease: Are Acute Surgical Units the New Gold Standard? Introduction: Since 2011, all acute general surgical admissions have been managed by the consultant-led emergency general surgery service (EGS) at our institution. We aim to compare EGS management of acute biliary disease to its preceding model. Materials And Methods: Retrospective review of prospectively collated databases was performed to capture consecutive emergency admissions with biliary disease from 1st February 2009 to 31st January 2013. Patient demographics, surgical intervention, use of diagnostic radiology, histological diagnosis, complications and hospital length of stay (LOS) were retrieved. Results: A total of 566 patients were included (pre-EGS 254 vs. EGS 312). In the EGS period, the number of patients having surgery on index admission increased from 43.7 to 58.7% (p < 0.001), as did use of intra-operative cholangiography, from 75.7 to 89.6% (p = 0.003). The conversion to open cholecystectomy rate was also reduced, from 14.4 to 3.3% (p < 0.001). Overall, a 14% reduction in use of multiple (>1) imaging modalities for diagnosis was noted (p = 0.003). There was a positive trend in reduction of bile leaks but no significant difference in the overall morbidity and mortality. Time to theatre was reduced by 1 day [pre-EGS 2.7 (IQR 1.5-5.0) vs. EGS 1.7 (IQR 1.2-2.6), p < 0.001]. The overall hospital LOS was reduced by 1.5 days [pre-EGS 5.0 (IQR 3-7) vs. EGS 3.5 (IQR 2-5), p < 0.001]. Conclusion: Since the advent of EGS, more judicious use of diagnostic radiology, reduced complications, reduced LOS, reduced time to theatre and an increased rate of definitive management during the index admission were demonstrated. abstract_id: PUBMED:20589582 Single-port transumbilical endoscopic cholecystectomy: a new standard? Background And Objective: Single-port transumbilical laparoscopic cholecystectomy (SPTLC) may become a standard procedure in the surgical treatment of acute and chronic gallbladder diseases. The initial experience with this new technique is reported. Methods: 186 patients underwent single-port laparoscopic cholecystectomy between September 2008 and February 2010 at the Vivantes Klinikum Am Urban, Berlin, Germany. All these operations were performed with conventional straight laparoscopic instruments using a single-port system. Results: Conversion to a three-port technique or open procedure became necessary in four patients after failure to perform the single-port method. The average age of the 120 women (64%) and 66 men (36%) was 45 years (range 15-88 years). The ASA grade (American Society of Anesthesiologists) averaged 2 (range, 1-3) and the BMI 28.5 (range 17-49). Mean operative time was 63 min (range, 28-17 min). 48 patients (26%) had histopathological evidence of acute cholecystitis. During a mean follow-up period of 39 weeks (range 1-78 weeks), 11 patients (6%) developed complications related to the surgery, five of these patients (3%) requiring a subsequent re-operation. Conclusions: Single-port transumbilical laparoscopic cholecystectomy for acute and chronic gallbladder disease is a feasible approach for routine cholecystectomy. After a short learning curve, the operation time and rate of complications are comparable to standard multi-port laparoscopic cholecystectomy.
Limitations of the procedure are very obese patients and multiple previous abdominal operations. abstract_id: PUBMED:25909409 Impact of specific postoperative complications on the outcomes of emergency general surgery patients. Background: The relative contribution of specific postoperative complications to mortality after emergency operations has not been previously described. Identifying specific contributors to postoperative mortality following acute care surgery will allow for significant improvement in the care of these patients. Methods: Patients from the 2005 to 2011 American College of Surgeons' National Surgical Quality Improvement Program database who underwent emergency operation by a general surgeon for one of seven diagnoses (gallbladder disease, gastroduodenal ulcer disease, intestinal ischemia, intestinal obstruction, intestinal perforation, diverticulitis, and abdominal wall hernia) were analyzed. Postoperative complications (pneumonia, myocardial infarction, incisional surgical site infection, organ/space surgical site infection, thromboembolic process, urinary tract infection, stroke, or major bleeding) were chosen based on surgical outcome measures monitored by national quality improvement initiatives and regulatory bodies. Regression techniques were used to determine the independent association between these complications and 30-day mortality, after adjustment for an array of patient- and procedure-related variables. Results: Emergency operations accounted for 14.6% of the approximately 1.2 million general surgery procedures that are included in the American College of Surgeons' National Surgical Quality Improvement Program but for 53.5% of the 19,094 postoperative deaths. A total of 43,429 emergency general surgery patients were analyzed. Incisional surgical site infection had the highest incidence (6.7%). The second most common complication was pneumonia (5.7%). Stroke, major bleeding, myocardial infarction, and pneumonia exhibited the strongest associations with postoperative death. Conclusion: Given its disproportionate contribution to surgical mortality, emergency surgery represents an ideal focus for quality improvement. Of the potential postoperative targets for quality improvement, pneumonia, myocardial infarction, stroke, and major bleeding have the strongest associations with subsequent mortality. Since pneumonia is both relatively common after emergency surgery and strongly associated with postoperative death, it should receive priority as a target for surgical quality improvement initiatives. Level Of Evidence: Prognostic and epidemiologic study, level III. abstract_id: PUBMED:31399949 Does the adoption of an emergency general surgery service model influence volume of cholecystectomies at a tertiary care center? Introduction: The purpose of this study was to evaluate the rate of cholecystectomy before and after adoption of an emergency general surgery (EGS) model at our institution. Methods: A longitudinal, observational study was conducted prior to and following introduction of an EGS model at our institution. Using the New York SPARCS Administrative Database, all adult patients presenting to the emergency department with gallbladder-related emergencies were identified. The rates of laparoscopic and open cholecystectomies performed 3 years prior and 3 years following the adoption of the EGS model were examined.
A multivariable logistic regression model was used to compare the incidence of cholecystectomy at initial ED visit at our institution pre- and post-EGS introduction, as well as with the rest of the state as an external control group, while adjusting for potentially confounding factors. Results: There were 176,159 total ED visits of patients with gallbladder emergencies (154,743 excluding repeat presenters) in the studied period in NY State. Of these, 63,912 patients (41.3%) had a concurrent cholecystectomy in NY State. The rate of cholecystectomy at these institutions remained relatively steady, at 38.8% from 2010 to 2013 and 38.6% from 2013 to 2016. At our institution, there were 2039 gallbladder emergencies, and of those 755 underwent cholecystectomy. At our institution, there was an increase from 28.21% in the 3 years prior to the adoption of the EGS model to 40.2% in the following 3 years (RR = 1.06, 95% CI 1.0164-1.1078, p = 0.0069). Conclusion: The initiation of the EGS model at a tertiary center was associated with a significant increase in the number of concurrent cholecystectomies, from 28.21% to 40.2% over a 6-year period. This change was accompanied by an increase in the number of patient comorbidities and a lower insurance status. abstract_id: PUBMED:2663456 Acute diseases of the pancreas and biliary tract. Management in the emergency department. Pancreatitis, commonly encountered in the emergency department, possesses a very broad clinical spectrum and may be associated with shock and multiple organ failure. Its diagnosis is based on clinical, laboratory, and radiographic data, but there is no gold standard. Biliary tract disease ranges in severity from cholelithiasis with colic to acute suppurative cholangitis, which may lead to shock. Clinical examination and imaging studies are most useful in these disorders, and definitive treatment is primarily surgical. abstract_id: PUBMED:31388806 Use of minimally invasive surgery in emergency general surgery procedures. Background: Minimally invasive surgery (MIS) has demonstrated superior outcomes in many elective procedures. However, its use in emergency general surgery (EGS) procedures is not well characterized. The purpose of this study was to examine the trends in utilization and outcomes of MIS techniques in EGS over the past decade. Methods: The 2007-2016 ACS-NSQIP database was utilized to identify patients undergoing emergency surgery for four common EGS diagnoses: appendicitis, cholecystitis/cholangitis, peptic ulcer disease, and small bowel obstruction. Trends over time were described. Preoperative risk factors, operative characteristics, outcomes, morbidity, and trends were compared between MIS and open approaches using univariate and multivariate analysis. Results: During the 10-year study period, 190,264 patients were identified. The appendicitis group was the largest (166,559 patients), followed by gallbladder disease (9994), bowel obstruction (6256), and peptic ulcer disease (366). Utilization of MIS increased over time in all groups (p < 0.001). There was a concurrent decrease in mean days of hospitalization in each group: appendectomy (2.4 to 2.0), cholecystectomy (5.7 to 3.2), peptic ulcer disease (20.3 to 11.7), and bowel obstruction (12.9 to 10.5); p < 0.001 for all. On multivariate analysis, use of MIS techniques was associated with decreased odds of 30-day mortality, surgical site infection, and length of hospital stay in all groups (p < 0.001).
Conclusions: Use of MIS techniques in these four EGS diagnoses has increased in frequency over the past 10 years. When adjusted for preoperative risk factors, use of MIS was associated with decreased odds of wound infection, death, and length of stay. Further studies are needed to determine if increased access to MIS techniques among EGS patients may improve outcomes. abstract_id: PUBMED:33872846 Surgical Diseases are Common and Complicated for Criminal Justice Involved Populations. Background: At any given time, almost 2 million individuals are in prisons or jails in the United States. Incarceration status has been associated with disproportionate rates of cancer and infectious diseases. However, little is known about the burden of emergency general surgery (EGS) in criminal justice involved (CJI) populations. Materials And Methods: The California Office of Statewide Health Planning and Development (OSHPD) database was used to evaluate all hospital admissions with common EGS diagnoses in CJI persons from 2012-2014. The population of CJI individuals in California was determined using United States Bureau of Justice Statistics data. Primary outcomes were rates of admission and procedures for five common EGS diagnoses, while the secondary outcome was probability of complex presentation. Results: A total of 4,345 admissions for CJI patients with EGS diagnoses were identified. The largest percentage of EGS admissions was for peptic ulcer disease (41.0%), followed by gallbladder disease (27.5%), small bowel obstruction (14.0%), appendicitis (13.8%), and diverticulitis (10.5%). CJI patients had variable probabilities of receipt of surgery depending on condition, ranging from 6.2% to 90.7%. Between 5.6% and 21.0% of admissions presented with complicated disease, the highest rates being with peptic ulcer disease and appendicitis. Conclusion: Admissions with EGS diagnoses were common and comparable to previously published rates of disease in the general population. CJI individuals had high rates of complicated presentation, but low rates of surgical intervention. More granular evaluation of the burden and management of these common, morbid, and costly surgical diagnoses is essential for ensuring timely and quality care delivery for this vulnerable population. abstract_id: PUBMED:11957550 Emergency laparoscopic surgery--the Changi General Hospital experience. Introduction: This paper analyses the emergency laparoscopic procedures undertaken by our unit over a 1-year period in an effort to evaluate the diagnostic-therapeutic use of laparoscopy in an emergency situation. Materials And Methods: This is a retrospective study that analysed 137 emergency laparoscopic procedures performed for patients who presented with acute abdominal pain over a 1-year period from 31 December 1999 to 31 December 2000. Results: A definitive diagnosis was made in 91.2% (125). Of the 78 cases that required surgical intervention, 71.8% (56) were performed laparoscopically. The conversion rate (to open surgery) was 16.8% (23) and the morbidity rate was 8% (11), with no mortalities. Conclusion: We conclude that laparoscopic surgery is effective and safe in the non-elective setting and that it offers a high potential for diagnosis and therapy in selected patients in whom the diagnoses are equivocal. abstract_id: PUBMED:34468597 The impact of COVID-19 and social avoidance in urgent and emergency surgeries - will a delay in diagnosis result in perioperative complications? Objective: The sudden COVID-19 outbreak has changed our health system.
Physicians had to face the challenge of treating a large number of critically ill patients with a new disease while also keeping essential healthcare services functioning properly. To prevent disease dissemination, authorities instructed people to stay at home and seek medical care only if they experienced respiratory distress. However, there are concerns that patients did not seek necessary health care because of these instructions. This study aims to see how the pandemic has influenced the severity of disease, complications, and mortality of patients undergoing emergency cholecystectomy and appendectomy. Methods: Retrospective review of medical records of patients admitted to the emergency department and undergoing cholecystectomy and appendectomy in the periods from March to May 2019 and 2020. Results: We observed that COVID-19 did not change the severity of presentation or the outcome of patients with gallbladder disease, but caused a 24.2% increase in the prevalence of complicated appendicitis (p<0.05). However, contrary to expectations, we did not identify a greater number of perioperative complications in patients undergoing an appendectomy. Conclusion: Therefore, it seems that in a university tertiary referral center, COVID-19 did not influence the management and outcome of inflammatory diseases treated in the surgical emergency department. abstract_id: PUBMED:11554861 Accuracy of emergency department bedside ultrasonography. Objectives: To determine which focused ultrasound examinations can be interpreted accurately by emergency physicians who have limited training and experience. To determine whether image quality and/or the operator's level of confidence in the findings correlates with accurate scan interpretation. Methods: A prospective sample of consenting adult emergency department patients with the conditions of interest was selected for study. Scans were performed by emergency physicians who had attended a 3-day instruction course on focused ultrasound examinations. All scans were videotaped and subsequently reviewed by a radiologist. Accuracy was determined by comparing the emergency physicians' scan interpretations with preselected gold standards. Chi-squared tests were employed to determine if the individual performing the scan, the type of scan, the patient's body habitus, image quality and/or operator confidence were reliable predictors of accuracy. Results: Between September 1997 and January 1999, 221 scans were studied. Accuracy varied widely depending on the type of scan performed: aortic scans were 100% accurate, whereas renal scans had 68% accuracy. On bivariate analyses, there was little variation in the various operators' levels of proficiency, and accuracy of interpretation was not associated with patient body habitus, image quality or operator confidence. Conclusions: Neophytes can accurately perform and interpret aortic scans; additional training and/or experience appear to be necessary to achieve proficiency in conducting most of the other scans studied. Inexperienced operators are unable to discern whether their scan interpretations will prove accurate.
A retrospective review of emergency admissions with biliary disease before and after the implementation of an ASU at one institution showed that the number of patients having surgery on index admission increased significantly from 43.7% to 58.7%, and the use of intra-operative cholangiography also increased from 75.7% to 89.6%. Additionally, the conversion to open cholecystectomy rate was reduced from 14.4% to 3.3%. The study found a reduction in the use of multiple imaging modalities for diagnosis, a positive trend in the reduction of bile leaks, and no significant difference in overall morbidity and mortality. Time to theatre was reduced by 1 day, and the overall hospital length of stay was reduced by 1.5 days. These results suggest that the advent of ASUs has led to more judicious use of diagnostic radiology, reduced complications, reduced length of stay, reduced time to theatre, and an increased rate of definitive management during the index admission (PUBMED:26296834). Another study examining the impact of adopting an EGS model at a tertiary care center found a significant increase in the number of concurrent cholecystectomies performed, from 28.21% to 40.2% over a 6-year period. This change was accompanied by an increase in the number of patient comorbidities and a lower insurance status (PUBMED:31399949). In conclusion, the evidence suggests that ASUs, as part of the EGS model, may indeed represent a new gold standard in the emergency management of gallbladder disease, as they are associated with more efficient and effective care, leading to better patient outcomes and reduced hospital stays.
Instruction: The origin of fetal sterols in second-trimester amniotic fluid: endogenous synthesis or maternal-fetal transport? Abstracts: abstract_id: PUBMED:22728028 The origin of fetal sterols in second-trimester amniotic fluid: endogenous synthesis or maternal-fetal transport? Objective: Cholesterol is crucial for fetal development. To gain more insight into the origin of the fetal cholesterol pool in early human pregnancy, we determined cholesterol and its precursors in the amniotic fluid of uncomplicated, singleton human pregnancies. Study Design: Total sterols were characterized by gas chromatography-mass spectrometry in the second-trimester amniotic fluid of 126 healthy fetuses from week 15 until week 22. Results: The markers of cholesterol biosynthesis, lanosterol, dihydrolanosterol, and lathosterol, were present at low levels until the 19th week of gestation, after which their levels increased strongly. β-sitosterol, a marker for maternal-fetal cholesterol transport, was detectable in the amniotic fluid. The total cholesterol levels increased slightly between weeks 15 and 22. Conclusion: Our results support the hypothesis that during early life the fetus depends on maternal cholesterol supply because endogenous synthesis is relatively low. Therefore, maternal cholesterol can play a crucial role in fetal development. abstract_id: PUBMED:37176607 Is There a Correlation between Apelin and Insulin Concentrations in Early Second Trimester Amniotic Fluid with Fetal Growth Disorders? Introduction: Fetal growth disturbances place fetuses at increased risk for perinatal morbidity and mortality. As yet, little is known about the basic pathogenetic mechanisms underlying deranged fetal growth. Apelin is an adipokine with several biological activities. Over the past decade, it has been investigated for its possible role in fetal growth restriction. Most studies have examined apelin concentrations in maternal serum and amniotic fluid in the third trimester or during neonatal life. In this study, apelin concentrations were examined for the first time in early second-trimester fetuses. Another major regulator of tissue growth and metabolism is insulin. Materials And Methods: This was a prospective observational cohort study. We measured apelin and insulin concentrations in the amniotic fluid of 80 pregnant women who underwent amniocentesis in the early second trimester. Amniotic fluid samples were stored in appropriate conditions until delivery. The study groups were then defined, i.e., gestations with different fetal growth patterns (SGA, AGA, and LGA). Measurements were made using ELISA kits. Results: Apelin and insulin levels were measured in all 80 samples. The analysis revealed statistically significant differences in apelin concentrations among groups (p = 0.007). Apelin concentrations in large for gestational age (LGA) fetuses were significantly lower compared to those in AGA and SGA fetuses. Insulin concentrations did not differ significantly among groups. Conclusions: A clear trend towards decreasing apelin concentrations as birthweight progressively increased was identified. Amniotic fluid apelin concentrations in the early second trimester may be useful as a predictive factor for determining the risk of a fetus being born LGA. Future studies are needed to corroborate the present findings and should ideally focus on the potential interplay of apelin with other known intrauterine metabolic factors.
abstract_id: PUBMED:2207440 Biochemical and immunochemical identification of the fetal polypeptides of human amniotic fluid during the second trimester of pregnancy. 1. Human amniotic fluid contains a complex mixture of proteins, of which only the minority are of fetal origin. We have identified the fetal polypeptides of second trimester amniotic fluid samples by two different methods. 2. The first method was the side-by-side comparison of silver-stained two-dimensional polyacrylamide gels of amniotic fluid polypeptides with pregnant female plasma polypeptides, after passage of both through a Blue Sepharose affinity column to remove albumin. The second method was the identification of the fetal polypeptides in two-dimensional Western blots with an antiserum made specific for fetal proteins. 3. Using these techniques we have identified 13 major fetal polypeptide fractions with apparent molecular weights of 220, 200, 82, 70, 59, 52, 50, 36, 30, 25, 20, 18 and 11 kDa. Five of these polypeptides, with molecular weights of 82, 59, 50, 20 and 18 kDa, have not previously been identified. The identification of these fetal components provides a reference base for molecular studies of normal and pathological fetal development. abstract_id: PUBMED:28347187 Second-trimester amniotic fluid corticotropin-releasing hormone and urocortin in relation to maternal stress and fetal growth in human pregnancy. This study explored the association between the acute psychobiological stress response, chronic social overload and amniotic fluid corticotropin-releasing hormone (CRH) and urocortin (UCN) in 34 healthy, second-trimester pregnant women undergoing amniocentesis. The study further examined the predictive value of second-trimester amniotic fluid CRH and UCN for fetal growth and neonatal birth outcome. The amniocentesis served as a naturalistic stressor, during which maternal state anxiety and salivary cortisol were measured repeatedly and an aliquot of amniotic fluid was collected. The pregnant women additionally completed a questionnaire on chronic social overload. Fetal growth parameters were obtained at amniocentesis using fetal ultrasound biometry and at birth from medical records. The statistical analyses revealed that the acute maternal psychobiological stress response was unassociated with the amniotic fluid peptides, but that maternal chronic overload and amniotic CRH were positively correlated. Moreover, amniotic CRH was negatively associated with fetal size at amniocentesis and positively with growth in size from amniocentesis to birth. Hardly any studies have previously explored whether acute maternal psychological stress influences fetoplacental CRH or UCN levels significantly. Our findings suggest that (i) chronic, but not acute, maternal stress may affect fetoplacental CRH secretion and that (ii) CRH is complexly involved in fetal growth processes, as previously shown in animals. abstract_id: PUBMED:15708113 The influence of leptin on placental and fetal volume measured by three-dimensional ultrasound in the second trimester. For several years, mechanisms influencing placental and fetal growth and the function of leptin, the protein product of the ob/ob gene, have been subjects of intensive research. This study's aim was to investigate whether maternal serum leptin and amniotic fluid leptin have an influence on placental and fetal size measured by three-dimensional ultrasound in the second trimester.
To determine this, 40 women with a singleton intrauterine pregnancy at the time of the amniocentesis were included in the study. Placental and fetal volume measurements were obtained and correlated with maternal serum leptin, amniotic fluid leptin, body mass index and gestational age. Multiple regression analysis identified amniotic fluid leptin as an independent negative predictor of placental and fetal volume (r = -2.29, p = 0.032 and r = -0.95, p = 0.011, respectively). In contrast, there was no correlation between maternal serum leptin and placental or fetal volume. The median leptin level in amniotic fluid (9.5 ng/ml) was significantly lower than in maternal blood (18.6 ng/ml). However, there was no significant correlation between maternal serum leptin and amniotic fluid leptin (r = 0.208, n.s.). Body mass index did not reveal any significant influences on placental or fetal volume. The relatively high level of amniotic fluid leptin and its inverse correlation with placental and fetal volume in the second trimester suggest that it possibly plays a role as an anti-placental growth hormone or a feedback modulator of substrate supply to the fetus and placenta. abstract_id: PUBMED:744102 Maternal, fetal, and amniotic fluid transport of thyroxine, triiodothyronine, and iodide in sheep: a kinetic model. A mathematical model of iodine kinetics in maternal and fetal sheep has been developed by combining separate iodide, T3, and T4 subsystems. The individual subsystem models were developed from literature studies of maternal-fetal exchange under thyroid-blocked and unblocked conditions. Rates of exchange, concentrations, and spaces of distribution were calculated by the SAAM computer program. The models for each of the subsystems required exchange compartments within the mother and fetus, exchanges between maternal and fetal circulations, and between the fetus and amniotic fluid. The fetal-amniotic fluid exchange was observed directly for iodide and indirectly for T3 and T4. No exchange between mother and amniotic fluid was required. It is possible that the amniotic fluid acts as a reservoir for these and other substances. Maternal-fetal kinetics suggest that low fetal T3 levels are maintained by an active transport of T3 from fetus to mother, a decreased transport from mother to fetus, and a low fetal T3 production. The model also requires that all fetal T3 loss occur via transport to the maternal system rather than via fetal utilization. In contrast, the fetal T4 system is largely autonomous, the small maternal exchange not contributing significantly to the fetal T4 economy. Fetal iodide seems to be supplied by a facilitated bidirectional exchange with the mother.
abstract_id: PUBMED:1722580 Alpha-fetoprotein in fetal serum, amniotic fluid, and maternal serum. In order to gain more insight into the association between alpha-fetoprotein (AFP) and fetal chromosomal disorders, especially Down's syndrome, we measured AFP in fetal serum, amniotic fluid, and maternal serum at cordocentesis. We compared the concentration and gradient of AFP in these three compartments. Our data confirm earlier findings on second-trimester fetal serum AFP concentration. The results indicate that low maternal serum AFP in pregnancies with fetal chromosomal disorders could result from impaired fetal kidney function as well as from impaired membrane or placental passage of AFP, rather than from reduced fetal AFP production. abstract_id: PUBMED:29685073 The association of second trimester biomarkers in amniotic fluid and fetal outcome. Objective: To identify the levels of amniotic fluid lactate (AFL), placental growth factor (PLGF), and vascular endothelial growth factor (VEGF) at second trimester amniocentesis, and to compare levels in normal pregnancies with pregnancies ending in a miscarriage, an intrauterine growth restricted (IUGR) fetus or decreased fetal movements. Study design: A prospective cohort study. Amniotic fluid was consecutively collected at amniocentesis in 106 pregnancies. Fetal wellbeing at delivery was evaluated from medical files and compared with the levels of AFL, VEGF, and PLGF at the time of amniocentesis. Results: The median level of AFL was 6.9 mmol/l, VEGF 0.088 pg/ml, and PLGF 0.208 pg/ml. The median level of AFL in pregnancies that ended in miscarriage was significantly higher (10.7 mmol/l) compared to those with a live newborn (6.9 mmol/l, p = .02). The levels of VEGF (p = .2) and PLGF (p = .7) were not affected. In pregnancies with IUGR, the median level of AFL was higher compared to those with normal fetal growth (p = .003). No differences were found in VEGF (p = .5), but significantly lower PLGF was found in IUGR pregnancies (p = .03). Conclusions: Pregnancies ending in a miscarriage or with IUGR had significantly higher median values of AFL but lower values of PLGF in the amniotic fluid at the time of second trimester amniocentesis compared to normal pregnancies. abstract_id: PUBMED:17175477 Fetal ascites and second trimester maternal hepatitis C virus infection. Objective: To present the first reported case of early second trimester maternal hepatitis C virus (HCV) infection associated with fetal ascites, which was treated with fetal paracentesis and resulted in a successful outcome of a term liveborn infant with anti-HCV seropositivity. Case Report: A 26-year-old primigravida was diagnosed with acute HCV infection at 17 weeks of gestation. Ultrasound (US) at 23 weeks showed significant fetal ascites and echogenic bowel, and fetal viral infection was suspected. Maternal serum was positive for high HCV-RNA titers and cytomegalovirus (CMV) IgG. Amniocentesis, cordocentesis and therapeutic fetal paracentesis were performed at 23 weeks. Fetal karyotype was 46, XX. Cord blood showed anti-HCV positivity and an HCV-RNA titer < 10. Amniotic fluid was anti-HCV and CMV IgG positive. US at 27 weeks showed complete resolution of fetal ascites. A healthy 2,976 g female baby was delivered at 37 weeks, with anti-HCV seropositivity, high HCV-RNA titers, CMV IgG positive, IgM negative and normal liver function tests at the 1-month follow-up. Conclusion: Second trimester perinatal HCV infection with possible CMV coinfection associated with fetal ascites is a rare event.
Fetal therapy resulting in a successful outcome has not previously been reported. Prompt fetal therapy with paracentesis in this case led to the delivery of a healthy term liveborn baby with anti-HCV seropositivity. Answer: The origin of fetal sterols in second-trimester amniotic fluid appears to be primarily maternal-fetal transport rather than endogenous synthesis. This is supported by the findings that markers of cholesterol biosynthesis, such as lanosterol, dihydrolanosterol, and lathosterol, were present at low levels until the 19th week of gestation, after which their levels increased strongly. Additionally, β-sitosterol, a marker for maternal-fetal cholesterol transport, was detectable in the amniotic fluid. The total cholesterol levels increased slightly between weeks 15 and 22. These results suggest that during early life, the fetus depends on maternal cholesterol supply because endogenous synthesis is relatively low, indicating that maternal cholesterol plays a crucial role in fetal development (PUBMED:22728028).
Instruction: Does aortic valve sclerosis predict the severity and complexity of coronary artery disease? Abstracts: abstract_id: PUBMED:26138181 Does aortic valve sclerosis predict the severity and complexity of coronary artery disease? Aim: We assessed the association of aortic valve sclerosis (AVS) with atherosclerotic risk factors and the severity and complexity of coronary artery disease (CAD). Methods: In this retrospective study, a total of 482 eligible patients were divided into 2 groups: AVS present and AVS absent. All major cardiovascular risk factors and coronary lesion characteristics were included. Results: Age was the only independent predictor of AVS. AVS was not independently associated with the number of obstructive vessels, degree of lesion obstruction and SYNTAX score. Conclusion: AVS is probably a benign marker of age-related degenerative changes in the heart, independent of the severity and complexity of CAD. abstract_id: PUBMED:35780362 Association of Aortic Valve Sclerosis with Angiographic Severity of Coronary Artery Disease in Patients with Acute Coronary Syndrome, Aged ≤65 Years. Aortic valve sclerosis (AVS) represents a degenerative process that progresses with advancing age. The study was intended to determine the association between aortic valve sclerosis and the severity of CAD in patients aged ≤65 years with acute coronary syndrome. This cross-sectional analytical study was carried out in the department of cardiology, National Institute of Cardiovascular Diseases (NICVD), Dhaka, Bangladesh, during the period of October 2017 to September 2018. A total of 140 acute coronary syndrome (ACS) patients undergoing coronary angiogram during index hospitalization were included in the study. Study patients were divided into two groups on the basis of the echocardiographic presence or absence of aortic valve sclerosis (AVS), with 70 patients in each group. Group I was patients with aortic valve sclerosis and Group II was patients without aortic valve sclerosis. All patients underwent transthoracic echocardiography before they underwent coronary angiography on different days. Severity of CAD was determined by Gensini score and vessel score. Association of traditional risk factors (smoking habit, hypertension, diabetes mellitus, dyslipidaemia and family history of CAD) with severity of CAD was investigated. Coronary angiography showed that the AVS group had a higher positive rate of CAD (82.9% vs. 54.3%, p<0.001) and a higher incidence rate of triple vessel CAD (40% vs. 14.3%, p<0.001) than the non-AVS group. Gensini score was also higher in the AVS group than in the non-AVS group (37.9±27.8 vs. 12.5±14.2; p<0.001). Multivariate analysis showed that AVS (p=0.01) and age (p=0.04) were independent predictors of the presence of significant coronary artery disease. The study concluded that echocardiographically detected AVS is an independent predictor of coronary artery disease severity. There is a positive correlation between the severity of AVS and the severity of CAD in patients aged ≤65 years with ACS. abstract_id: PUBMED:28425255 Aortic valve sclerosis is associated with the extent of coronary artery disease in stable coronary artery disease. Background/aim: Aortic valve sclerosis (AVS) is characterized by lipid deposition and calcific infiltration on the edge of the aortic leaflets without significant restriction of motion. The SYNTAX Score (SS) is an important method for evaluating coronary artery disease (CAD). Many studies have shown that there is an important relation between the SS and undesired cardiac outcomes.
In our study, we investigated the correlation between the SS and AVS by including both ACS and stable CAD cases. Materials And Methods: We enrolled 543 patients with CAD who underwent coronary angiography into this cross-sectional study between September 2013 and September 2014. Results: The study population was divided into two groups according to SS values above and below 22. Diabetes mellitus (DM) incidence was greater in the group with high SS values (26.3% vs. 19.2%, P = 0.052). Left ventricular ejection fraction (LVEF) and glomerular filtration rate were lower in this group. Low-density lipoprotein cholesterol and triglyceride levels were lower, while platelet counts were higher. In multivariate analysis, for the stable coronary artery disease group, the presence of AVS, platelet count, LVEF, and chronic obstructive pulmonary disease were found to be independent predictors. Conclusion: Our study results demonstrated that AVS is significantly associated with the complexity of CAD, especially in patients with stable CAD. This study provides new information regarding the role of AVS in CAD complexity. abstract_id: PUBMED:15819504 Progression of aortic valve sclerosis and aortic valve stenosis: what is the role of statin treatment? Background: It has recently been suggested that statins could slow the progression of aortic stenosis, but this hypothesis has not been validated in large series. Moreover, there is little information about the role of statin treatment in patients with aortic valve sclerosis. Methods: From our 1988-2002 database, we retrospectively identified 1136 consecutive patients with aortic valve sclerosis (peak aortic velocity [Vmax] >1.5 and <2 m/s) or mild to moderate aortic stenosis (Vmax 2.0-3.9 m/s) and with ≥2 echocardiographic studies ≥6 months apart; 121 (11%) were treated with statins. As a control group, we randomly selected 121 age- and gender-matched patients not treated with statins, with similar initial Vmax. Results: The mean follow-up duration was 54±34 months in the statin group, and 50±33 months in controls (p = 0.35). There were no differences between statin-treated patients and controls with respect to age, gender, and prevalence of hypertension. More patients in the statin group had documented hypercholesterolemia, diabetes, or proven coronary artery disease. Overall, the rate of change of Vmax was not different between statin-treated patients and controls (0.13±0.24 vs. 0.14±0.19 m/s/year, p = 0.72). However, in the subgroup of patients with aortic valve sclerosis (n = 52, 26 statin-treated, 26 controls), the rate of change of Vmax was significantly lower in statin-treated patients (0.04±0.04 vs. 0.08±0.06 m/s/year, p = 0.007). Conclusions: The results of our retrospective study show that statins could be beneficial in retarding the progression of valvular aortic sclerosis to aortic stenosis. This suggests that statins retard the progression of aortic valve lesions in their early stage, a finding that may have important implications in the management of this very common disease.
Aortography and coronary arteriography revealed severe aortic regurgitation and proximal occlusion of the LAD and RCA. Surgical correction consisted of aortic valve replacement with a Björk-Shiley valve and coronary revascularization of the LAD. During the operation, a quadricuspid aortic valve with one smaller and three larger cusps was noted; the cusps showed mild myxomatous degeneration without dystrophic calcification, and the coronary arterial orifices were normal. Accordingly, severe aortic regurgitation may have resulted from dysfunction of the congenitally malformed cusps, while acquired sclerotic coronary disease was the main cause of the chest pain. abstract_id: PUBMED:24068023 Aortic valve sclerosis in acute coronary syndrome patients: potential value in predicting coronary artery lesion complexity. Objective: The purpose of the present study was to investigate the relation between aortic valve sclerosis (AVS) and coronary artery lesion complexity as assessed using the SYNTAX score (SxScore) in acute coronary syndrome (ACS) patients. Patients And Methods: A total of 164 patients with a first-time diagnosis of acute coronary syndrome were consecutively enrolled. AVS was defined by echocardiography as thickening and calcification of the normal trileaflet aortic valve without obstruction to the left ventricular outflow. The SxScore was calculated using dedicated computer software. Results: There were significantly higher SxScores in subjects with AVS than those without AVS (18 ± 6 vs 12 ± 5, p = 0.02). In the univariate analysis, age (p = 0.03) and presence of AVS (p = 0.007) were significantly associated with higher SxScores. Logistic regression analysis demonstrated AVS [95% confidence interval (CI) 0.17-0.86, p = 0.017] and age (95% CI 1.01-1.21, p = 0.028) as independent determinants of higher SxScores. Conclusion: Aortic valve sclerosis was significantly and independently associated with a high SxScore in acute coronary syndrome patients. abstract_id: PUBMED:18163008 Aortic valve stenosis and coronary artery disease: pathophysiological and clinical links. Aortic valve stenosis (AVS), including a range of disorder severities, from mild leaflet thickening without valve obstruction, 'aortic sclerosis', to severe calcific aortic stenosis, is a progressive, active process of valve modification, mediated by chronic inflammation (similar to atherosclerosis for cardiovascular risk factors) and biological features. AVS is the expression of early tissue damage due to endothelial damage and oxidative, inflammatory processes, and appears as a surrogate marker for cardiovascular events associated with coronary artery disease (CAD). AVS progression correlates with coronary artery risk factors, such as hypertension, age and cholesterol, and a quantitative evaluation of valve and coronary calcium score comprises a useful marker for cardiovascular prognosis. The low concordance of AVS with CAD appears to be due to other genetic or metabolic factors more specific for calcification processes. Moreover, both pathologies appear to be included within atherosclerotic disease and may be the object of the same clinical therapy and prevention. abstract_id: PUBMED:3993515 The role of aortic valve calcium in the detection of aortic stenosis: an echocardiographic study. One hundred fifty-three men (mean age 67.0 ± 10.0 years) with basal systolic murmurs and aortic valve calcium on the echocardiogram (group II) were studied to assess the relationship between the grade of calcium and severity of aortic valve obstruction.
Patients were subdivided into group IIA (hypertension, no coronary artery disease), group IIB (coronary artery disease, no hypertension), group IIC (hypertension and coronary artery disease) and group IID (neither hypertension nor coronary artery disease). Group I consisted of 21 normal age-matched men (mean age 60.5 ± 10.9 years). Aortic valve calcium was graded as 1+ (63 patients), 2+ (54 patients), or 3+ (36 patients) according to the degree of involvement. Left ventricular wall thickness was greater in group II than in group I, and a close correlation between wall thickness parameters and grade of aortic valve calcium was observed for group IID. Of 31 catheterized patients, none of seven with 1+ aortic calcium and 11 of 14 with 3+ calcium had gradients greater than or equal to 50 mm Hg. With 3+ calcium the valve area was 0.8 ± 0.4 cm2, and with 1+ calcium it was 2.8 ± 0.7 cm2 (p = 0.0006). The presence of 3+ calcium, or grade 2+ calcium combined with a left ventricular ejection time index greater than 433 msec and a left ventricular mass greater than 300 gm, was highly suggestive of severe aortic stenosis and could be used to separate patients to be considered for invasive studies from those with benign aortic valve sclerosis. abstract_id: PUBMED:29174290 Associations of Mitral and Aortic Valve Calcifications with Complex Aortic Atheroma in Patients with Embolic Stroke of Undetermined Source. Background: This study investigated the associations of mitral and aortic valve calcification with complex aortic atheroma among patients with embolic stroke of undetermined source. Methods: We included 52 consecutive patients (mean age 58.1 years; 75.0% male) with embolic stroke of undetermined source. Mitral annular calcification, aortic annular calcification, and aortic valve sclerosis were assessed by transthoracic echocardiography. Complex aortic atheroma was assessed by transesophageal echocardiography and was defined as plaque protruding greater than or equal to 4 mm into the lumen or with ulcerated or mobile components. Results: Ten patients (19.2%) had complex aortic atheroma. Patients with and without complex aortic atheroma showed significant differences in terms of hypertension (80.0% versus 38.1%, P = .017), dyslipidemia (90.0% versus 31.0%, P <.01), chronic kidney disease (60.0% versus 14.3%, P <.01), previous coronary artery disease (30.0% versus 4.8%, P = .013), prior stroke (40.0% versus 7.1%, P <.01), left atrial dimension (4.0 cm versus 3.6 cm, P = .023), aortic valve sclerosis (80.0% versus 26.2%, P <.01), aortic valve calcification (aortic annular calcification or aortic valve sclerosis) (80.0% versus 26.0%, P <.01), and left-sided valve calcification (mitral annular calcification or aortic annular calcification or aortic valve sclerosis) (80.0% versus 28.6%, P <.01). In multivariate analysis, left-sided valve calcification was independently associated with complex aortic atheroma (odds ratio 4.1, 95% confidence interval 1.3-26.1, P = .049). Conclusions: Mitral or aortic valve calcification detected by transthoracic echocardiography can be a useful marker for predicting complex aortic atheroma in patients with embolic stroke of undetermined source. abstract_id: PUBMED:11827633 Aortic valve sclerosis as a marker of active atherosclerosis. Aortic sclerosis is a calcific disease of the aortic valvular leaflets defined as focal leaflet thickening without significant obstruction to left ventricular outflow.
Several clinical factors are associated with calcific aortic valve disease, including male sex, smoking, hypertension, age, hypercholesterolemia, and diabetes. Histologic and biochemical studies suggest similarities between the mechanisms involved in the development of aortic sclerosis and atherosclerosis, indicating these two diseases may share common pathophysiologic mechanisms. In a recent prospective trial, the presence of aortic sclerosis was associated with an approximately 50% increase in cardiovascular mortality and myocardial infarction, even after correction for age, gender, known coronary artery disease, and clinical factors associated with aortic sclerosis. Answer: The relationship between aortic valve sclerosis (AVS) and the severity and complexity of coronary artery disease (CAD) has been explored in various studies, with mixed results. Some studies have found an association between AVS and CAD severity. For instance, a study on patients aged ≤65 years with acute coronary syndrome (ACS) found that AVS was an independent predictor of coronary artery disease severity, with a positive correlation between the severity of AVS and the severity of CAD (PUBMED:35780362). Another study in patients with stable CAD demonstrated that AVS is significantly associated with the complexity of CAD, as indicated by the SYNTAX Score (SS), especially in stable CAD patients (PUBMED:28425255). Similarly, AVS was found to be significantly and independently associated with a high SYNTAX score in ACS patients, suggesting a potential value in predicting coronary artery lesion complexity (PUBMED:24068023). On the other hand, another study concluded that AVS is probably a benign marker of age-related degenerative changes in the heart, independent of the severity and complexity of CAD (PUBMED:26138181). This suggests that while AVS may be associated with age, it does not necessarily predict the severity or complexity of CAD. Furthermore, AVS has been linked to the progression of aortic stenosis, and statin treatment may be beneficial in retarding the progression of valvular aortic sclerosis to aortic stenosis, particularly in its early stages (PUBMED:15819504). This indicates a potential therapeutic approach for managing AVS and possibly influencing its association with CAD. In summary, while some studies suggest that AVS is an independent predictor of CAD severity and complexity, particularly in younger patients with ACS or in those with stable CAD, other research indicates that AVS may not be directly associated with the severity and complexity of CAD. The relationship between AVS and CAD may be influenced by various factors, including age and the presence of other cardiovascular risk factors.
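Several of the abstracts above report odds ratios with 95% confidence intervals from multivariate logistic regression (e.g., PUBMED:35780362, PUBMED:29174290). As a minimal sketch of how such figures are obtained, the following Python snippet fits a logistic model to synthetic data; the variable names and every number are hypothetical and do not come from the cited studies. The odds ratio is exp(beta) and its 95% CI is exp(beta ± 1.96·SE).

    import numpy as np
    import statsmodels.api as sm

    # Synthetic stand-in data: AVS status and age as predictors of significant CAD.
    rng = np.random.default_rng(0)
    n = 300
    avs = rng.integers(0, 2, n)              # 1 = AVS present (hypothetical)
    age = rng.normal(60, 8, n)               # age in years (hypothetical)
    logit_p = -9.0 + 1.0 * avs + 0.12 * age  # assumed underlying model
    cad = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

    X = sm.add_constant(np.column_stack([avs, age]))
    fit = sm.Logit(cad, X).fit(disp=0)
    print(np.exp(fit.params))                # odds ratios: exp(beta)
    print(np.exp(fit.conf_int()))            # 95% CIs: exp(beta +/- 1.96*SE)

Real studies of this kind would adjust for many more covariates, but the mechanics of reading an OR and its CI off a fitted logistic model are the same.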
Instruction: The effects of standardized trauma training on prehospital pain control: have pain medication administration rates increased on the battlefield? Abstracts: abstract_id: PUBMED:22847093 The effects of standardized trauma training on prehospital pain control: have pain medication administration rates increased on the battlefield? Background: The US Military has served in some of the most austere locations in the world. In this ever-changing environment, units are organized into smaller elements operating in very remote areas. This often results in longer evacuation times, which can lead to a delay in pain management if treatment is not initiated in the prehospital setting. Early pain control has become an increasingly crucial military prehospital task, and pain must be managed from the pain-initiating event onward. The individual services developed their standardized trauma training based on the recommendations of Frank Butler and the Defense Health Board Committee on Tactical Combat Casualty Care. This training stresses evidence-based treatment modalities, including pain control, derived from casualty injury analysis. Inadequate early pain control may lead to multiple acute and potentially chronic effects. These effects encompass a wide range from changes in blood pressure to delayed wound healing and posttraumatic stress disorder. Therefore, it is essential that pain be addressed in the prehospital environment. Methods: Institutional Review Board approval was obtained to conduct a retrospective Joint Theater Trauma Registry comparative study evaluating whether standardized trauma training increased prehospital pain medication administration between 2007 and 2009. These years were selected on the basis of mandatory training initiation dates and available Joint Theater Trauma Registry records. Records were analyzed for all US prehospital trauma cases with documented pain medication administration from Operations Enduring Freedom and Iraqi Freedom for the specified years. Results: Data analysis revealed 232 patients available for review (102 for 2007 and 130 for 2009). A statistically significant prehospital pain treatment increase was noted, from 3.1% in 2007 to 6.7% in 2009 (p < 0.0005; 95% confidence interval, 2.39-4.93). Conclusion: Standardized trauma training has increased the administration of prehospital pain medication and the awareness of the importance of early pain control. abstract_id: PUBMED:26727337 Analysis of Prehospital Documentation of Injury-Related Pain Assessment and Analgesic Administration on the Contemporary Battlefield. In addition to life-saving interventions, the assessment of pain and subsequent administration of analgesia are primary benchmarks for quality emergency medical services care, which should be documented and analyzed. The objective was to analyze US combat casualty data from the Department of Defense Trauma Registry (DoDTR) with a primary focus on prehospital pain assessment, analgesic administration and documentation. In this retrospective cohort study, battlefield prehospital and hospital casualty data were abstracted by the DoDTR from available records from 1 September 2007 through 30 June 2011. Data included demographics; injury mechanism; prehospital and initial combat hospital pain assessment documented by standard 0-to-10 numeric rating scale; analgesics administered; and survival outcome. Records were available for 8,913 casualties (median ISS of 5 [IQR 2 to 10]; 98.7% survived). Prehospital analgesic administration was documented for 1,313 cases (15%).
Prehospital pain assessment was recorded for 581 cases (7%; median pain score 6 [IQR 3 to 8]), hospital pain assessment was recorded for 5,007 cases (56%; median pain score 5 [CI95% 3 to 8]), and 409 cases (5%) had both prehospital and hospital pain assessments that could be paired. In this paired group, 49.1% (201/409) had alleviation of pain evidenced by a decrease in pain score (median 4, IQR 2 to 5); 23.5% (96/409) had worsening of pain evidenced by an increase in pain score (median 3, CI95 2.8 to 3.7, IQR 1 to 5); 27.4% (112/409) had no change; and the overall difference was an average decrease in pain score of 1.1 (median 0, IQR 0 to 3, p < 0.01). Time-series analysis showed modest increases in prehospital and hospital pain assessment documentation and prehospital analgesic documentation. Our study demonstrates that prehospital pain assessment, management, and documentation remain primary targets for performance improvement on the battlefield. Results of paired prehospital to hospital pain scores and time-series analysis demonstrate both feasibility and benefit of prehospital analgesics. Future efforts must also include an expansion of the prehospital battlefield analgesic formulary. abstract_id: PUBMED:30118637 Trends in Prehospital Analgesia Administration by US Forces From 2007 Through 2016. Background: Tactical Combat Casualty Care (TCCC) guidelines regarding prehospital analgesia agents have evolved. The guidelines stopped recommending intramuscular (IM) morphine in 1996, recommending only intravenous (IV) routes. In 2006, the guidelines recommended oral transmucosal fentanyl citrate (OTFC), and in 2012 ketamine was added via all routes. It remains unclear to what extent prehospital analgesia administered on the battlefield adheres to these guidelines. We seek to describe trends in analgesia administration patterns on the battlefield during 2007-2016. Methods: This is a secondary analysis of a Department of Defense Trauma Registry data set from January 2007 to August 2016. Within that group, we searched for subjects who received IM morphine, IV morphine, OTFC, parenteral fentanyl, or ketamine (all routes). Results: Our predefined ED search codes captured 28,222 subjects during the study period. Of these, 594 (2.1%) received IM morphine; 3,765 (13.3%) received IV morphine; 589 (2.1%) received OTFC; and 1,510 (5.4%) subjects received ketamine. Annual rates of administration of IM morphine were relatively stable during the study period, while those for OTFC and ketamine generally trended upward starting in 2012. In particular, the proportion of subjects receiving ketamine rose from 3.9% (n = 995/25,618) during the study period preceding its addition to the TCCC guidelines (2007 to 2012) to 19.8% thereafter (2013-2016, n = 515/2,604, p < 0.001). Conclusions: During the study period, rates of prehospital administration of IM morphine remained relatively stable while those for OTFC and ketamine both rose. These findings suggest that TCCC guidelines recommending the use of these agents had a material impact on prehospital analgesia patterns.
Intramuscular morphine has a delayed onset of pain relief that is suboptimal and difficult to titrate. Although intravenously administered morphine can readily provide rapid and effective prehospital analgesia, oral transmucosal fentanyl citrate (OTFC) is a safe alternative that does not require intravenous access. This study evaluates the safety and efficacy of OTFC in the prehospital battlefield environment. Methods: Data collected during combat deployments (Afghanistan and Iraq) from March 15, 2003, to March 31, 2010, were analyzed. Patients were US Army Special Operations Command casualties. Patients receiving OTFC for acute pain were evaluated. Pretreatment and posttreatment pain intensities were quantified by the verbal numeric rating scale (NRS) from 0 to 10. OTFC adverse effects and injuries treated were also evaluated. Results: A total of 286 patients were administered OTFC, of whom 197 had NRS pain evaluations conducted before and approximately 15 minutes to 30 minutes following treatment. The difference between NRS pain scores at 0 minutes (NRS, 8.0 [1.4]) and 15 minutes to 30 minutes (NRS, 3.2 [2.1]) was significant (p &lt; 0.001). Only 18.3% (36 of 197) of patients were also administered other types of analgesics. Nausea was the most common adverse effect as reported by 12.7% (25 of 197) of patients. The only major adverse effect occurred in the patient who received the largest opioid dose, 3,200-µg OTFC and 20-mg morphine. This patient exhibited hypoventilation and saturation of less than 90% requiring low-dose naloxone. Conclusion: OTFC is a rapid and noninvasive pain management strategy that provides safe and effective analgesia in the prehospital battlefield setting. OTFC has considerable implications for use in civilian prehospital and austere environments. Level Of Evidence: Therapeutic study, level IV. abstract_id: PUBMED:36975606 Prehospital Blood Administration in Pediatric Patients: A Case Report. Prehospital blood administration programs have demonstrated success both on the battlefield and throughout civilian emergency medical services programs. While previous research often discusses the use of prehospital blood administration for adult trauma and medical patients, few studies have reported the benefits of prehospital blood administration for pediatric patients. This case report describes treatment received by a 7-year-old female gunshot victim who was successfully treated by a prehospital blood administration program in the southern United States. abstract_id: PUBMED:26727339 Prehospital Opioid Administration in the Emergency Care of Injured Children. Objective: Prior studies have identified provider and system characteristics that impede pain management in children, but no studies have investigated the effect of changing these characteristics on prehospital opioid analgesia. Our objectives were to determine: 1) the frequency of opioid analgesia and pain score documentation among prehospital pediatric patients after system wide changes to improve pain treatment, and 2) if older age, longer transport times, the presence of vascular access and pain score documentation were associated with increased prehospital administration of opioid analgesia in children. Methods: This was a retrospective cross-sectional study of pediatric patients aged 3-18 years assessed by a single EMS system between October 1, 2011 and September 30, 2013. 
Prior to October 2011, the EMS system had implemented 3 changes to improve pain treatment: (1) training on age appropriate pain scales, (2) protocol changes to allow opioid analgesia without contacting medical control, and (3) the introduction of intranasal fentanyl. All patients with working assessments of blunt, penetrating, lacerating, and/or burn trauma were included. We used descriptive statistics to determine the frequency of pain score documentation and opioid analgesia administration and logistic regression to determine the association of age, transport time, and the presence of intravenous access with opioid analgesia administration. Results: Of the 1,368 eligible children, 336 (25%) had a documented pain score. Eleven percent (130/1204) of children without documented contraindications to opioid administration received opioids. Of the children with no documented pain score and no protocol exclusions, 9% (81/929) received opioid analgesia, whereas 18% (49/275) with a documented pain score ≥4 and no protocol exclusions received opioids. Multivariate analysis revealed that vascular access (OR = 11.89; 95% CI: 7.33-19.29), longer patient transport time (OR = 1.07; 95% CI: 1.04-1.11), age (OR 0.93; 95% CI: 0.88-0.98) and pain score documentation (OR 2.23; 95% CI: 1.40-3.55) were associated with opioid analgesia. Conclusions: Despite implementation of several best practice recommendations to improve prehospital pain treatment, few children have a documented pain score and even fewer receive opioid analgesia. Children with longer transport times, successful IV placement, and/or documentation of pain score(s) were more likely to receive prehospital analgesia. abstract_id: PUBMED:36609399 Oral transmucosal fentanyl citrate analgesia in prehospital trauma care: an observational cohort study. Background: Pain is one of the major prehospital symptoms in trauma patients and requires prompt management. Recent studies have reported insufficient analgesia after prehospital treatment in up to 43% of trauma patients, leaving significant room for improvement. Good evidence exists for prehospital use of oral transmucosal fentanyl citrate (OTFC) in the military setting. We hypothesized that the use of OTFC for trauma patients in remote and challenging environment is feasible, efficient, safe, and might be an alternative to nasal and intravenous applications. Methods: This observational cohort study examined 177 patients who were treated with oral transmucosal fentanyl citrate by EMS providers in three ski and bike resorts in Switzerland. All EMS providers had previously been trained in administration of the drug and handling of potential adverse events. Results: OTFC caused a statistically significant and clinically relevant decrease in the level of pain by a median of 3 (IQR 2 to 4) in NRS units (P &lt; 0.0001). Multiple linear regression analysis showed a significant absolute reduction in pain, with no differences in all age groups and between genders. No major adverse events were observed. Conclusions: Prehospital administration of OTFC is safe, easy, and efficient for extrication and transport across all age groups, gender, and types of injuries in alpine environments. Side effects were few and mild. This could provide a valuable alternative in trauma patients with severe pain, without the delay of inserting an intravenous line, especially in remote areas, where fast action and easy administration are important. 
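One detail in PUBMED:26727339 worth unpacking is the transport-time odds ratio of 1.07, which is per minute: under the standard logistic-regression reading, per-unit odds ratios multiply, so longer transports compound quickly. A small illustrative calculation (the interpretation is standard; only the chosen minute values are ours):

    # How a per-minute odds ratio compounds over longer transports.
    or_per_min = 1.07                 # reported OR per minute of transport time
    for minutes in (5, 10, 20):
        print(minutes, round(or_per_min ** minutes, 2))
    # 1.07**10 is about 1.97, i.e. ten extra minutes roughly doubles the odds
    # of opioid administration, all else held equal in the fitted model.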
abstract_id: PUBMED:32782893 Evaluation of Safety and Efficacy of Prehospital Paramedic Administration of Sub-Dissociative Dose of Ketamine in the Treatment of Trauma-Related Pain in Adult Civilian Population. Opiates are addicting and have a high potential for dependency. In the past decades, opiates remained the first-line pharmaceutical option of prehospital treatment for acute traumatic pain in the civilian population. Ketamine is an N-methyl-d-aspartate (NMDA) receptor antagonist that has analgesic properties and may serve as an alternative agent for the treatment of acute traumatic pain in prehospital settings. This study aims to assess the safety and efficacy of ketamine administration by paramedics in civilian prehospital settings for the treatment of acute traumatic pain. This was a prospective observational study in San Bernardino, Riverside and Stanislaus counties. Patients were included if they were &gt; 15 years of age with complaints of traumatic or burn-related pain. Patients were excluded if they received opiates up to six hours prior to or concurrently with ketamine administration. The dose administered was 0.3 mg/kg intravenously over five minutes with a maximum dose of 30 mg. The option to administer a second dose was available to paramedics if the patient continued to have pain after 15 minutes following the first administration. Paired-T tests were conducted to assess the change in the primary outcome (pain score) and secondary outcomes (e.g. systolic blood pressure, pulse, and respiratory rate). P-value&lt;0.05 was considered to be statistically significant. A total of 368 patients were included in the final analysis. The average age was 52.9 ± 23.1 years, and the average weight was 80.4 ± 22.2 kg. There was a statistically significant reduction in the pain score (9.13 ± 1.28 vs 3.7 ± 3.4, delta=5.43 ± 3.38, p&lt;0.0001). Additionally, there was a statistically significant change in systolic blood pressure (143.42 ± 27.01 vs 145.65 ± 26.26, delta=2.22 ± 21.1, p=0.044), pulse (88.06 ± 18 vs 84.64 ± 15.92, delta= -3.42 ± 12.12, p&lt;0.0001), and respiratory rate (19.04 ± 3.59 vs 17.74 ± 3.06, delta=-1.3 ± 2.96, p&lt;0.0001). The current study suggested that paramedics are capable of safely identifying the appropriate patients for the administration of sub-dissociative doses of ketamine in the prehospital setting. Furthermore, the current study suggested that ketamine may be an effective analgesic in a select group of adult trauma patients. abstract_id: PUBMED:27411064 Multicenter Evaluation of Prehospital Opioid Pain Management in Injured Children. Background: The National Association of Emergency Medical Services Physicians' (NAEMSP) Position Statement on Prehospital Pain Management and the joint National Highway Traffic Safety Administration (NHTSA) and Emergency Medical Services for Children (EMSC) Evidence-based Guideline for Prehospital Analgesia in Trauma aim to improve the recognition, assessment, and treatment of prehospital pain. The impact of implementation of these guidelines on pain management in children by emergency medical services (EMS) agencies has not been assessed. Objective: Determine the change in frequency of documented pain severity assessment and opiate administration among injured pediatric patients in three EMS agencies after adoption of best practice recommendations. Methods: This is a retrospective study of children &lt;18 years of age with a prehospital injury-related primary impression from three EMS agencies. 
Each agency independently implemented pain protocol changes which included adding the use of age-appropriate pain scales, decreasing the minimum age for opiate administration, and updating fentanyl dosing. We abstracted data from prehospital electronic patient records before and after changes to the pain management protocols. The primary outcomes were the frequency of administration of opioid analgesia and documentation of pain severity assessment as recorded in the prehospital patient care record. Results: A total of 3,597 injured children were transported prior to pain protocol changes and 3,743 children after changes. Opiate administration to eligible patients across study sites regardless of documentation of pain severity was 156/3,089 (5%) before protocol changes and 175/3,509 (5%) after (p = 0.97). Prior to protocol changes, 580 (18%) children had documented pain assessments and 430 (74%) had moderate-to-severe pain. After protocol changes, 644 (18%) patients had pain severity documented with 464 (72%) in moderate-to-severe pain. For all study agencies, pain severity was documented in 13%, 19%, and 22% of patient records both before and after protocol changes. There was a difference in intranasal fentanyl administration rates before (27%) and after (17%) protocol changes (p = 0.02). Conclusion: The proportion of injured children who receive prehospital opioid analgesia remains suboptimal despite implementation of best practice recommendations. Frequency of pain severity assessment of injured children is low. Intranasal fentanyl administration may be an underutilized modality of prehospital opiate administration. abstract_id: PUBMED:30704401 Differences in pain treatment between surgeons and anaesthesiologists in a physician staffed prehospital emergency medical service: a retrospective cohort analysis. Background: Although pain treatment is an important objective in prehospital emergency medicine the incidence of oligoanalgesia is still high in prehospital patients. Given that prehospital emergency medicine in Germany is open for physicians of any speciality, the prehospital pain treatment may differ depending on the primary medical education. Aim of this study was to explore the difference in pain treatment between surgeons and anaesthesiologists in a physician staffed emergency medical service. Methods: Retrospective single centre cohort analysis in a physician staffed ground based emergency medical service from January 2014 until December 2016. A total of 8882 consecutive emergency missions were screened. Primary outcome measure was the difference in application frequency of prehospital analgesics by anaesthesiologist or surgeon. Univariate and multivariate logistic regression analysis was used for statistical analysis including subgroup analysis for trauma and acute coronary syndrome. Results: A total of 8238 patients were included in the analysis. There was a significant difference in the application frequency of analgesics between surgeons and anaesthesiologists especially for opioids (p &lt; 0.001, OR 0.68 [0.56-0.82]). Fentanyl was the most common administered analgesic in the trauma subgroup, but significantly less common used by surgeons (p = 0.005, OR 0.63 [0.46-0.87]). In acute coronary syndrome cases there was no significant difference in morphine administration between anaesthesiologists and surgeons (p = 0.49, OR 0.88 [0.61-1.27]). 
Conclusions: Increased training for prehospital pain treatment should be implemented, since opioids were administered notably less frequently by surgeons than by anaesthesiologists. Answer: Yes, the effects of standardized trauma training on prehospital pain control have been positive, with pain medication administration rates increasing on the battlefield. A retrospective study evaluating whether standardized trauma training increased prehospital pain medication administration between 2007 and 2009 found a statistically significant increase in prehospital pain treatment, from 3.1% in 2007 to 6.7% in 2009 (PUBMED:22847093). This suggests that standardized trauma training has indeed increased the administration of prehospital pain medication and the awareness of the importance of early pain control.
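For readers wanting to reproduce the flavor of the PUBMED:22847093 comparison (3.1% in 2007 vs 6.7% in 2009), a two-proportion z-test is the natural tool. The abstract does not report the cohort denominators, so the totals below are invented solely to make the rates come out right; this is a sketch of the method, not a reconstruction of the study.

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical denominators chosen only so that 102/3290 is ~3.1% and
    # 130/1940 is ~6.7%; the abstract itself does not give cohort sizes.
    treated = [102, 130]   # casualties with documented prehospital analgesia
    totals = [3290, 1940]  # assumed cohort sizes (illustrative)
    stat, p = proportions_ztest(treated, totals)
    print(stat, p)         # a small p-value would mirror the reported p < 0.0005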
Instruction: School mental health services: signpost for out-of-school service utilization in adolescents with mental disorders? Abstracts: abstract_id: PUBMED:37467422 The relationship between school mental health service use in high school and educational outcomes of adolescents with psychiatric disorders. This study aimed to examine the relationship between school mental health service use in high school and educational outcomes of adolescents with psychiatric disorders. The sample included 2617 adolescents who were enrolled in eighth grade in a large urban school district in the United States, were enrolled in Medicaid during eighth grade, and had a mental health diagnosis. Psychiatric hospitalization, school enrollment, school absences, out-of-school suspensions, school dropouts, and school exits for negative reasons were examined as mental health and educational outcomes. Compared with adolescents who used school mental health services for 2 years following eighth grade, adolescents who did not use school mental health services during the high school years had a significantly lower annual number of days enrolled in school and higher rates of exiting school for negative reasons such as school dropout and long-term hospitalization. Our findings support the positive role of school mental health care delivery in high schools in preventing negative educational outcomes for adolescents with psychiatric disorders. abstract_id: PUBMED:24911241 School mental health services: signpost for out-of-school service utilization in adolescents with mental disorders? A nationally representative United States cohort. Background: School mental health services are important contact points for children and adolescents with mental disorders, but their ability to provide comprehensive treatment is limited. The main objective was to estimate, in mentally disordered adolescents of a nationally representative United States cohort, the role of school mental health services as a guide to mental health care in different out-of-school service sectors. Methods: Analyses are based on weighted data (N = 6483) from the United States National Comorbidity Survey Replication Adolescent Supplement (participants' age: 13-18 years). Lifetime DSM-IV mental disorders were assessed using the fully structured WHO CIDI interview, complemented by parent report. Adolescents and parents provided information on mental health service use across multiple sectors, based on the Service Assessment for Children and Adolescents. Results: School mental health service use predicted subsequent out-of-school service utilization for mental disorders i) in the medical specialty sector, in adolescents with affective (hazard ratio (HR) = 3.01, confidence interval (CI) = 1.77-5.12), anxiety (HR = 3.87, CI = 1.97-7.64), behavior (HR = 2.49, CI = 1.62-3.82), substance use (HR = 4.12, CI = 1.87-9.04), and eating (HR = 10.72, CI = 2.31-49.70) disorders, and any mental disorder (HR = 2.97, CI = 1.94-4.54), and ii) in other service sectors, in adolescents with anxiety (HR = 3.15, CI = 2.17-4.56), behavior (HR = 1.99, CI = 1.29-3.06), and substance use (HR = 2.48, CI = 1.57-3.94) disorders, and any mental disorder (HR = 2.33, CI = 1.54-3.53), but iii) not in the mental health specialty sector.
Conclusions: Our findings indicate that in the United States, school mental health services may serve as a guide to out-of-school service utilization for mental disorders, especially in the medical specialty sector, across various mental disorders, thereby highlighting the relevance of school mental health services in the trajectory of mental care. In light of the missing link between school mental health services and mental health specialty services, the promotion of a stronger collaboration between these sectors should be considered regarding the potential to improve and guarantee adequate mental care at early life stages. abstract_id: PUBMED:23622851 School mental health resources and adolescent mental health service use. Objective: Although schools are identified as critical for detecting youth mental disorders, little is known about whether the number of mental health providers and types of resources that they offer influence student mental health service use. Such information could inform the development and allocation of appropriate school-based resources to increase service use. This article examines associations of school resources with past-year mental health service use among students with 12-month DSM-IV mental disorders. Method: Data come from the U.S. National Comorbidity Survey Adolescent Supplement (NCS-A), a national survey of adolescent mental health that included 4,445 adolescent-parent pairs in 227 schools in which principals and mental health coordinators completed surveys about school resources and policies for addressing student emotional problems. Adolescents and parents completed the Composite International Diagnostic Interview and reported mental health service use across multiple sectors. Multilevel multivariate regression was used to examine associations between school mental health resources and individual-level service use. Results: Nearly half (45.3%) of adolescents with a 12-month DSM-IV disorder received past-year mental health services. Substantial variation existed in school resources. Increased school engagement in early identification was significantly associated with mental health service use for adolescents with mild/moderate mental and behavior disorders. The ratio of students to mental health providers was not associated with overall service use, but was associated with sector of service use. Conclusions: School mental health resources, particularly those related to early identification, may facilitate mental health service use and may influence sector of service use for youths with DSM disorders. abstract_id: PUBMED:35488938 The impact of school-based screening on service use in adolescents at risk for mental health problems and risk-behaviour. Early detection and intervention can counteract mental disorders and risk behaviours among adolescents. However, help-seeking rates are low. School-based screenings are a promising tool to detect adolescents at risk for mental problems and to improve help-seeking behaviour. We assessed associations between the intervention "Screening by Professionals" (ProfScreen) and the use of mental health services and at-risk state at 12-month follow-up compared to a control group. School students (aged 15 ± 0.9 years) from 11 European countries participating in the "Saving and Empowering Young Lives in Europe" (SEYLE) study completed a self-report questionnaire on mental health problems and risk behaviours.
ProfScreen students considered "at-risk" for mental illness or risk behaviour based on the screening were invited for a clinical interview with a mental health professional and, if necessary, referred for subsequent treatment. At follow-up, students completed another self-report, additionally reporting on service use. Of the total sample (N = 4,172), 61.9% were considered at-risk. 40.7% of the ProfScreen at-risk participants invited for the clinical interview attended the interview, and 10.1% of subsequently referred ProfScreen participants engaged in professional treatment. There were no differences between the ProfScreen and control group regarding follow-up service use and at-risk state. Attending the ProfScreen interview was positively associated with follow-up service use (OR = 1.783, 95% CI = 1.038-3.064), but had no effect on follow-up at-risk state. Service use rates of professional care as well as of the ProfScreen intervention itself were low. Future school-based interventions targeting help-seeking need to address barriers to intervention adherence. Clinical Trials Registration: The trial is registered at the US National Institutes of Health (NIH) clinical trial registry (NCT00906620, registered on 21 May, 2009), and the German Clinical Trials Register (DRKS00000214, registered on 27 October, 2009). abstract_id: PUBMED:28793369 Mental health work in school health services and school nurses' involvement and attitudes, in a Norwegian context. Aims And Objectives: To explore school nurses' experiences with and attitudes towards working with young people with mental health problems in the school health services. Background: Worldwide, 10%-20% of children and adolescents are affected by mental health problems. When these occur during youth, they constitute a considerable burden and are one of the main causes of disability among adolescents. School nurses are at the forefront of care for children and adolescents, identifying pupils struggling with physical, mental, psychosocial or emotional issues. Design: A qualitative, explorative study was performed based on open-ended questions in a cross-sectional study of 284 school nurses in Norway. Inclusion criteria were as follows: working as a school nurse in the school health services with children and adolescents between the ages of 11 and 18 years. A qualitative inductive content analysis was conducted. Results: Three generic categories emerged: perception of their role and experiences with mental health: the school nurses acknowledge their important role in work with adolescents focusing on their mental health. Perception of their professional competence: the school nurses described a lack of confidence and unmet training needs concerning mental health problems. Experiences with collaboration: the school nurses requested more knowledge about inter- and multidisciplinary cooperation regarding follow-up of pupils with mental health problems. Conclusions: The school nurses lacked knowledge and confidence in respect of working with children and adolescents suffering from mental health problems. This may be a barrier to giving pupils adequate aid. Relevance To Clinical Practice: Nurses need to acquire more knowledge about mental health problems among children and adolescents as this is a growing public health issue. Educational programmes for school nurses need to be revised to achieve this. abstract_id: PUBMED:28467270 Educator Preparedness for Mental Health in Adolescents: Opportunities for School Nurse Leadership.
One in five adolescents will experience a mental health event in their lifetime. If left untreated, depression, anxiety, attention-deficit/hyperactivity, and anorexia/bulimia can elevate the risk of dropping out of high school. As a key principle of 21st-century nursing practice, school nurses must provide leadership in educating school staff in identifying and responding to mental health issues in high school settings. This article describes the results of an online survey assessing secondary educators' knowledge of and experience with mental health issues in one school district. Resources are suggested to assist nurses in educating school staff, providing them with ways to decrease stigma in the classroom, and partnering with the community to improve services. abstract_id: PUBMED:27149432 Brief report: Association between psychological sense of school membership and mental health among early adolescents. Mental health problems among adolescents are prevalent and are associated with important difficulties for normal development during this period and later in life. A better understanding of the risk factors associated with mental health problems may help to design and implement more effective preventive interventions. Several personal and family risk factors have been identified in their relationship to mental health; however, much less is known about the influence of school-related factors. One of these school factors is school belonging, or the psychological sense of school membership. This is a well-known protective factor for developing strong academic commitment, but it has been scarcely studied in its relationship to mental health. We explored this association in a sample of early adolescents and found that students who reported having a high level of school membership had fewer mental health problems, even after controlling for several personal and family factors. abstract_id: PUBMED:21088104 Mental health service utilization of Somali adolescents: religion, community, and school as gateways to healing. This mixed-method study examines the utility of the Gateway Provider Model (GPM) in understanding service utilization and pathways to help for Somali refugee adolescents. Somali adolescents living in the Northeastern United States, and their caregivers, were interviewed. Results revealed low rates of use of mental health services. However, other sources of help, such as religious and school personnel, were accessed more frequently. The GPM provides a helpful model for understanding refugee youth access to services, and an elaborated model is presented showing how existing pathways to help could be built upon to improve refugee youth access to services. abstract_id: PUBMED:9211040 Bridging the gap between service need and service utilization: a school-based mental health program. In an effort to bridge the gap between service need and service utilization, an urban-based, university-affiliated children's psychiatric outpatient clinic has implemented a program which provides mental health services in inner-city schools. When compared with the central clinic populations (N = 304), the school sample (N = 44) was markedly socioeconomically disadvantaged, minority, and as psychiatrically impaired as the central clinic population. School-based mental health services have the potential for bridging the gap between need and utilization by reaching disadvantaged children who would otherwise not have access to these services. Implications for such services are discussed.
abstract_id: PUBMED:16403139 Contribution of 'school-link' to an area mental health service. Objective: To evaluate the contribution of the 'School-Link' initiative to an Area Mental Health Service. Method: Surveys and focus groups of school and health service personnel were conducted to examine the three foci of the School-Link initiative, namely prevention, early intervention and service access. Results: Improvements witnessed since the commencement of School-Link include an increase in the number of evidence-based mental health promotion programmes in schools, improvements in the communication between health and education departments, improvements in referral patterns and better practices, such as improved feedback, in both health and education sectors. School counsellors feel more supported in their role by health services and better able to manage certain patients. Conclusions: The actual and potential benefits of School-Link for young persons appear considerable. School-Link has provided opportunities for communication, upgrading of skills and collaborative work on mental health issues, in both education and health settings. Answer: Yes, school mental health services can act as a signpost for out-of-school service utilization in adolescents with mental disorders. The study represented in abstract PUBMED:24911241 found that the use of school mental health services predicted subsequent out-of-school service utilization for mental disorders in various sectors, particularly in the medical specialty sector across different types of mental disorders. This indicates that school mental health services may guide adolescents to appropriate mental health care outside of school, highlighting the importance of these services in the overall trajectory of mental health care for young people. Additionally, the study in abstract PUBMED:23622851 suggests that school mental health resources, especially those related to early identification, may facilitate mental health service use and influence the sector of service use for youths with DSM disorders. This further supports the role of school-based services in directing students to the necessary care. Moreover, the study in abstract PUBMED:35488938, while showing low service use rates for professional care and the school-based intervention itself, found that attending a school-based screening interview was positively associated with follow-up service use. This suggests that school-based screenings and interventions can encourage adolescents to seek professional treatment. Furthermore, the study in abstract PUBMED:16403139 on the 'School-Link' initiative found improvements in referral patterns and practices between health and education sectors, indicating that school-linked mental health services can enhance access to and coordination with out-of-school mental health services. In summary, school mental health services play a crucial role in guiding adolescents with mental disorders to out-of-school services, improving access to care, and potentially influencing the type of services utilized.
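The hazard ratios quoted from PUBMED:24911241 come from survival-type models of time to first out-of-school service use. As a rough sketch of that model family, here is a Cox proportional-hazards fit using the lifelines library on made-up data; every column name and value is hypothetical, and the real study used weighted, nationally representative data with many more covariates.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Toy data: time (years) until first out-of-school service use, whether
    # that event was observed, prior school-based service use, and age.
    df = pd.DataFrame({
        "years_to_service": [1.0, 2.5, 0.5, 3.0, 4.0, 1.5, 2.0, 3.5],
        "used_service":     [1, 0, 1, 1, 0, 1, 1, 0],   # 1 = event observed
        "school_mh_use":    [1, 0, 1, 0, 1, 1, 0, 0],   # hypothetical exposure
        "age":              [14, 15, 13, 16, 17, 15, 14, 16],
    })
    cph = CoxPHFitter()
    cph.fit(df, duration_col="years_to_service", event_col="used_service")
    cph.print_summary()  # the exp(coef) column is the hazard ratio with its 95% CI

A hazard ratio above 1 for the exposure column would correspond to the pattern the study reports: prior school mental health service use predicting faster subsequent out-of-school service uptake.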
Instruction: Can the efficacy of behavioral and cognitive therapy for obsessive compulsive disorder be augmented by innovative computerized adjuvant? Abstracts: abstract_id: PUBMED:27109326 Can the efficacy of behavioral and cognitive therapy for obsessive compulsive disorder be augmented by innovative computerized adjuvant? Aim: Cognitive behavioral therapy (CBT) is recognized as an effective treatment for obsessive-compulsive disorder (OCD). To maximize its effectiveness, we designed an "experimental" CBT defined by the addition of a computerized psychoeducative tool. Method: In a participative process involving patients through meetings of the French OCD association (AFTOC) and therapists through methodological workshops, we built a therapeutic tool from an experimental checking task. This task, which had been published in an earlier work, was adapted for its psychoeducative dimension. We here report on a randomized double-blind trial which included 35 patients with moderate to severe OCD (Yale-Brown obsessive-compulsive scale, Y-BOCS, between 16 and 25), predominant checking symptoms, no comorbidities, and 2-month stabilized or no treatment. Patients were randomly assigned to either "standard" or "experimental" CBT. Both therapies were conducted by four CBT-experienced therapists specialized in OCD through weekly individualized sessions over 3 months. Therapy sessions of the experimental CBT were conducted as the standard CBT except for a short exercise with the computerized psychoeducative tool performed by the patient and debriefed with the therapist at the end of the sessions. Patients were assessed before, during, and after therapy and again 6 months later using standard clinical tools and a neurobehavioral assessment based on an original symptom-provocation task with anxiety ratings including three types of photographs: neutral, generic inducing obsessions (e.g., doorknobs, electric wires…) and personalized (taken by the patients in their own environment). Results: Clinically, "standard" and "experimental" CBT resulted in a significant but equivalent improvement (48% vs 45% reduction of the Y-BOCS score; P=0.36; d=0.12). Therapists were satisfied with the psychoeducative dimension of the computerized psychoeducative tool but reported variable acceptance across patients. Patients appreciated its usability. The clinical improvement was associated with a reduction of the task-induced anxiety (r=0.42, P<0.05), especially towards personalized items (-28.2% vs -20.41% for generic and -6.24% for neutral photographs, P<0.001). Mid-therapy response level was predictive of the final improvement (r=0.82, P<0.001). Conclusion: The computerized tool may provide a well-accepted therapeutic adjuvant even though it doesn't improve the therapeutic effect. Using a personalized symptom-provocation task reveals the parallel evolution of symptoms and neurobehavioral markers through CBT. Despite the difficulty of improving an evidence-based therapy, mid-therapy results call for investigating the possible adjustments of treatment strategies at an early stage. abstract_id: PUBMED:34335402 The Effect of Computerized Cognitive Behavioral Therapy on People's Anxiety and Depression During the 6 Months of Wuhan's Lockdown of COVID-19 Epidemic: A Pilot Study. Background: The effectiveness of computerized cognitive behavioral therapy (CCBT) has been proven for mild and moderate anxiety and depression.
In 2016, the first official Chinese CCBT system was launched by Chinese Cognitive Behavior Therapy Professional Organizations and included four items: getting out of depression, overcoming anxiety, staying away from insomnia and facing obsessive-compulsive disorder. During the COVID-19 epidemic, the Chinese CCBT system served the public for free. This study explored the effects of CCBT on anxiety and depression by comparing the use of the platform during the epidemic and during the same period in 2019. Methods: Users were divided into a depression group or an anxiety group according to their own discretion. The subjects used the self-rating anxiety scale (SAS) and self-rating depression scale (SDS) before each training. Each training group completed the corresponding CCBT training project, which had 5-6 training sessions, an average of once every 5 days. The training content in 2019 and 2020 was identical. This study compared the demographic characteristics, depression, and anxiety levels of CCBT platform users during the lockdown period in Wuhan (LP2020), where the outbreak was concentrated in China, from January 23 to July 23, 2020 and the same period in 2019 (SP2019). Results: (1) There were significant differences in gender (χ2 = 7.215, P = 0.007), region (χ2 = 4.225, P = 0.040) and duration of illness (χ2 = 7.867, P = 0.049) between the two periods. (2) There was a positive Pearson correlation between the number of users of the CCBT platform during LP2020 and the number of confirmed cases of COVID-19 in each province (r = 0.9429, P < 0.001). (3) In LP2020, the SAS (t = 2.579, P = 0.011) and SDS (t = 2.894, P = 0.004) scores at T0 in Hubei were significantly higher than those in other regions. (4) The CCBT platform had a clear effect on users' anxiety (F = 4.74, P = 0.009) and depression (F = 4.44, P = 0.009). Conclusion: This study showed women, students and people who are more seriously affected by the epidemic were more likely to accept the CCBT training. The CCBT platform made a significant contribution toward alleviating the anxiety and depression symptoms of users during the epidemic. When face-to-face psychotherapy is not available during the epidemic, CCBT can be used as an effective alternative. abstract_id: PUBMED:25613661 Efficacy of cognitive-behavioral therapy for obsessive-compulsive disorder. Cognitive-behavioral therapy (CBT), which encompasses exposure with response prevention (ERP) and cognitive therapy, has demonstrated efficacy in the treatment of obsessive-compulsive disorder (OCD). However, the samples studied (reflecting the heterogeneity of OCD), the interventions examined (reflecting the heterogeneity of CBT), and the definitions of treatment response vary considerably across studies. This review examined the meta-analyses conducted on ERP and cognitive therapy (CT) for OCD. Also examined was the available research on long-term outcome associated with ERP and CT. The available research indicates that ERP is the first-line evidence-based psychotherapeutic treatment for OCD and that concurrent administration of cognitive therapy that targets specific symptom-related difficulties characteristic of OCD may improve tolerance of distress, symptom-related dysfunctional beliefs, adherence to treatment, and reduce dropout. Recommendations are provided for treatment delivery for OCD in general practice and other service delivery settings. The literature suggests that ERP and CT may be delivered in a wide range of clinical settings.
Although the data are not extensive, the available research suggests that treatment gains following ERP are durable. Suggestions for future research to refine therapeutic outcome are also considered. abstract_id: PUBMED:22071667 Efficacy of cognitive behavioral therapy in the treatment of mood and anxiety disorders in adults Cognitive behavioral therapy (CBT) is the form of psychotherapy with the most research data to build on in the treatment of mood and anxiety disorders in adults. In this review we introduce CBT and present the results of pertinent outcome research. Efficacy at the end of treatment is discussed, as well as long-term effectiveness and the efficacy of combined treatment with medication and CBT. In addition, we discuss the pros and cons of group CBT compared to CBT in individual format, and comorbidity of mental disorders. According to this review, CBT is efficacious for major depressive disorder, generalized anxiety disorder, panic disorder, post-traumatic stress disorder, obsessive compulsive disorder, social phobia and specific phobia. Efficacy of CBT is equal to or better than efficacy of drugs in the treatment of the above disorders, but there is less access to CBT. Long-term effectiveness of CBT appears to be good, but research on combined treatment is still in its infancy and conclusions are premature on its place in treatment. Key words: Cognitive behavioral therapy, psychotropic treatment, efficacy, long-term effects, combined treatment, mental disorders, adults. abstract_id: PUBMED:25937054 Efficacy of cognitive-behavioral therapy for obsessive-compulsive disorder. Cognitive-behavioral therapy (CBT), which encompasses exposure with response prevention (ERP) and cognitive therapy (CT), has demonstrated efficacy in the treatment of obsessive-compulsive disorder (OCD). However, the samples studied (reflecting the heterogeneity of OCD), the interventions examined (reflecting the heterogeneity of CBT), and the definitions of treatment response vary considerably across studies. This review examined the meta-analyses conducted on ERP and cognitive therapy (CT) for OCD. Also examined was the available research on long-term outcome associated with ERP and CT. The available research indicates that ERP is the first-line evidence-based psychotherapeutic treatment for OCD and that concurrent administration of cognitive therapy that targets specific symptom-related difficulties characteristic of OCD may improve tolerance of distress, symptom-related dysfunctional beliefs, adherence to treatment, and reduce dropout. Recommendations are provided for treatment delivery for OCD in general practice and other service delivery settings. The literature suggests that ERP and CT may be delivered in a wide range of clinical settings. Although the data are not extensive, the available research suggests that treatment gains following ERP are durable. Suggestions for future research to refine therapeutic outcome are also considered. abstract_id: PUBMED:36740350 Cognitive-Behavioral Therapy for Obsessive-Compulsive Disorder. Obsessive-compulsive disorder (OCD) is characterized by the presence of debilitating obsessions and compulsions. Cognitive and behavioral models of OCD provide a strong theoretic and empirical foundation for informing effective psychotherapeutic treatment.
Cognitive-behavioral therapy (CBT) for OCD, which includes a deliberate emphasis on exposure and response/ritual prevention, has consistently demonstrated robust efficacy for the treatment of pediatric and adult OCD and is the front-line psychotherapeutic treatment for OCD. Two case vignettes describing CBT for OCD in practice as well as recommendations for clinicians are provided. abstract_id: PUBMED:20623924 Cognitive behavioral therapy of obsessive-compulsive disorder. Until the mid-1960s, obsessive-compulsive disorder (OCD) was considered to be treatment-resistant, as both psychodynamic psychotherapy and medication had been unsuccessful in significantly reducing OCD symptoms. The first real breakthrough came in 1966 with the introduction of exposure and ritual prevention. This paper will discuss the cognitive behavioral conceptualizations that influenced the development of cognitive behavioral treatments for OCD. There will be a brief discussion of the use of psychodynamic psychotherapy and early behavioral therapy, neither of which produced successful outcomes with OCD. The main part of the paper will be devoted to current cognitive behavioral therapy (CBT) with an emphasis on variants of exposure and ritual or response prevention (EX/RP) treatments, the therapy that has shown the most empirical evidence of its efficacy. abstract_id: PUBMED:32746424 Introduction to the Special Issue: Challenges in Treating Obsessive-Compulsive Disorder With Cognitive-Behavioral Therapy. Cognitive-behavioral therapy (CBT) is a recommended treatment for obsessive-compulsive disorder (OCD). CBT offers specific interventions with demonstrated efficacy, including both cognitive therapy and exposure and ritual prevention (ERP). Yet, implementing these techniques with individual patients can present clinical challenges. In this special issue, expert clinicians and researchers present cases in which they addressed some common challenges in implementing CBT for OCD, including comorbid depression, poor insight, and family accommodation of symptoms in pediatric OCD. Other papers present how CBT can be tailored to specific OCD manifestations, including post-partum OCD, taboo thoughts, and relationship OCD. Other authors detail how they have augmented standard ERP with additional treatment techniques to maximize outcomes. These articles present a diverse set of case histories and specific treatment techniques that illustrate ways to maximize the effectiveness of CBT for OCD, serving as a useful guide for clinicians in practice and highlighting areas for future research. abstract_id: PUBMED:19039446 Cognitive-behavioral therapy in obsessive-compulsive disorder Objective: To describe cognitive-behavioral therapy in the treatment of the symptoms of obsessive-compulsive disorder. Method: Through a review of textbooks and articles, the origins and fundamentals of cognitive-behavioral therapy in the treatment of the symptoms of obsessive-compulsive disorder are presented. Through a review of randomized clinical trials and meta-analyses in Medline, the evidence for the effectiveness of cognitive-behavioral therapy in obsessive-compulsive disorder is highlighted. Results And Conclusions: Cognitive-behavioral therapy is effective in reducing the symptoms of obsessive-compulsive disorder in approximately 70% of patients who adhere to treatment. The challenges ahead are to clarify the reasons why many patients do not respond to this kind of treatment and to develop new strategies to increase its effectiveness.
abstract_id: PUBMED:2209494 Cognitive therapy in obsessive-compulsive disorders The cognitive approach is presented at both a theoretical and a clinical level. It enlarges and enriches the classical behavioral approach to obsessive-compulsive disorders. To date, controlled studies are too few to fully assert its value. Further investigations are justified to ascertain whether cognitive therapy has a value as such, or enhances the effects of more classical pharmacological and/or behavioral treatments. Answer: The efficacy of behavioral and cognitive therapy for obsessive-compulsive disorder (OCD) may potentially be augmented by innovative computerized adjuvants, although the evidence is not conclusive. A study that designed an "experimental" cognitive-behavioral therapy (CBT) by adding a computerized psychoeducative tool to standard CBT found that both "standard" and "experimental" CBT resulted in significant but equivalent improvement in OCD symptoms. The computerized tool was well accepted by patients, and therapists appreciated its psychoeducative dimension, but it did not improve the therapeutic effect beyond that of standard CBT (PUBMED:27109326). However, the use of computerized cognitive behavioral therapy (CCBT) has been shown to be effective for mild and moderate anxiety and depression, which are often comorbid with OCD. During the COVID-19 epidemic, a Chinese CCBT system was used to serve the public for free, and it made a significant contribution toward alleviating the anxiety and depression symptoms of users (PUBMED:34335402). While these findings suggest that computerized tools can be a useful adjuvant to CBT for OCD, especially in situations where face-to-face therapy is not available, more research is needed to fully understand their impact on treatment outcomes. Cognitive-behavioral therapy, including exposure with response prevention (ERP) and cognitive therapy (CT), remains the first-line evidence-based psychotherapeutic treatment for OCD, with ERP demonstrating robust efficacy and durability of treatment gains (PUBMED:25613661, PUBMED:25937054, PUBMED:36740350, PUBMED:20623924, PUBMED:32746424, PUBMED:19039446, PUBMED:2209494). In conclusion, while computerized adjuvants may offer additional benefits and are well-accepted by patients, they have not yet been shown to significantly augment the efficacy of behavioral and cognitive therapy for OCD beyond the established benefits of these therapies. Further research is warranted to explore the potential of computerized tools in enhancing OCD treatment outcomes.
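A note on the statistics reported in the CCBT platform abstract above (PUBMED:34335402): the headline figures are a Pearson correlation between per-province user counts and confirmed COVID-19 cases, and paired pre/post comparisons on the SAS/SDS scales. The sketch below shows the standard form of those computations in Python; every number in it is a hypothetical placeholder, since the abstract gives only summary statistics, not raw data.

import numpy as np
from scipy.stats import pearsonr, ttest_rel

# Hypothetical per-province CCBT user counts vs confirmed COVID-19 cases
users = np.array([120, 340, 80, 950, 60, 210])
cases = np.array([150, 400, 90, 1100, 70, 260])
r, p = pearsonr(users, cases)
print(f"Pearson r = {r:.4f}, P = {p:.4g}")  # the study reported r = 0.9429, P < 0.001

# Hypothetical SAS scores before the first and after the last training session
sas_before = np.array([62.0, 58.0, 66.0, 55.0, 60.0, 63.0])
sas_after = np.array([51.0, 50.0, 57.0, 49.0, 52.0, 55.0])
t, p = ttest_rel(sas_before, sas_after)
print(f"paired t = {t:.3f}, P = {p:.4g}")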
Instruction: Is the sagittal postural alignment different in normal and dysphonic adult speakers? Abstracts: abstract_id: PUBMED:24836364 Is the sagittal postural alignment different in normal and dysphonic adult speakers? Objective: Clinical research in the field of voice disorders, in particular functional dysphonia, has suggested abnormal laryngeal posture due to muscle adaptive changes, although specific evidence regarding body posture has been lacking. The aim of our study was to verify if there were significant differences in sagittal spine alignment between normal (41 subjects) and dysphonic speakers (33 subjects). Study Design: Cross-sectional study. Methods: Seventy-four adults, 35 males and 39 females, underwent sagittal-plane photography so that spine alignment could be analyzed with the Digimizer program (MedCalc Software Ltd). Perceptual and acoustic evaluation and nasoendoscopy were used to classify speakers as normal or dysphonic. Results: For thoracic length curvature (TL) and for the kyphosis index (KI), a significant effect of dysphonia was observed, with mean TL and KI significantly higher for the dysphonic speakers than for the normal speakers. Concerning the TL variable, a significant effect of sex was found, in which the mean of the TL was higher for males than females. The interaction between dysphonia and sex did not have a significant effect on the TL and KI variables. For the lumbar length curvature variable, a significant main effect of sex was demonstrated; there was no significant main effect of dysphonia or significant sex×dysphonia interaction. Conclusions: Findings indicated significant differences in some sagittal spine posture measures between normal and dysphonic speakers. Postural measures can add useful information to voice assessment protocols and should be taken into account when considering particular treatment strategies. abstract_id: PUBMED:27927463 Generation of a Patient-Specific Model of Normal Sagittal Alignment of the Spine. Study Design: Mathematical modeling of normal sagittal spinal alignment. Objective: To create a patient-specific 3-dimensional (3D) model of normal adolescent spinal shape and alignment. Summary Of Background Data: Recreating normal sagittal balance is a key goal in spinal deformity surgery. Because of the variation in normal sagittal alignment based on inherent pelvic parameters, it is difficult to know what is normal for a given patient who presents with spinal deformity. Methods: Simultaneous biplanar 2-dimensional digital radiographs were taken for pediatric patients with no known spinal disease using the EOS system. Three-dimensional reconstructions were produced using sterEOS and imported into custom MATLAB software. The researchers defined relationships to approximate orientations and positions of the vertebral bodies from patients' pelvic incidence (PI). The predicted spinal contour was then calculated to optimize congruence to patients' sagittal T1-sacrum offset, sagittal curve inflection point location, and predicted vertebral body orientations and positions. Results: A total of 75 patients (26 male and 49 female) were included, mean age 14.5 ± 2.6 years. Baseline measurements were PI 46.7° ± 10.2°, sacral slope 40.2° ± 8.9°, T1-T12 kyphosis 39.8° ± 8.8°, and L1-L5 lordosis -37.1° ± 11.2°.
Average difference in vertebral position in the anteroposterior direction between actual spines and their predicted models was 1.2 ± 1.2 mm and varied from an absolute minimum of 0.2 mm (T3) to an absolute maximum of 3.7 mm (L2). Conclusions: This model uses an adolescent patient's PI to predict the normal sagittal alignment that best matches that patient's native sagittal curve. The model was validated on patients with no spinal deformity; the average difference between the actual sagittal positions of each vertebra and those predicted by the model was less than 5 mm at each vertebral level. This model may be useful in adolescent scoliotic patients with altered sagittal alignment to determine the magnitude of 3D deformity (compared with predicted normal values) and the completeness of 3D correction. abstract_id: PUBMED:26791745 Somatotype and Body Composition of Normal and Dysphonic Adult Speakers. Objective: Voice quality provides information about the anatomical characteristics of the speaker. The patterns of somatotype and body composition can provide essential knowledge to characterize the individuality of voice quality. The aim of this study was to verify if there were significant differences in somatotype and body composition between normal and dysphonic speakers. Study Design: Cross-sectional study. Methods: Anthropometric measurements were taken of a sample of 72 adult participants (40 normal speakers and 32 dysphonic speakers) according to International Society for the Advancement of Kinanthropometry standards, which allowed the calculation of endomorphism, mesomorphism, ectomorphism components, body density, body mass index, fat mass, percentage fat, and fat-free mass. Perception and acoustic evaluations as well as nasoendoscopy were used to assign speakers into normal or dysphonic groups. Results: There were no significant differences between normal and dysphonic speakers in the mean somatotype attitudinal distance and somatotype dispersion distance (in spite of marginally significant differences [P < 0.10] in somatotype attitudinal distance and somatotype dispersion distance between groups) and in the mean vector of the somatotype components. Furthermore, no significant differences were found between groups concerning the mean of percentage fat, fat mass, fat-free mass, body density, and body mass index after controlling for sex. Conclusion: The findings suggested no significant differences in the somatotype and body composition variables between normal and dysphonic speakers. abstract_id: PUBMED:25346812 Characteristics of sagittal spino-pelvic alignment in Japanese young adults. Study Design: Radiological analysis of normal patterns of sagittal alignment of the spine. Purpose: This study aimed to clarify the characteristics of normal sagittal spino-pelvic alignment in Asian people. Overview Of Literature: It is known that there are differences in these parameters based on age, gender, and race. In order to properly plan for surgical correction of the spine for Asian patients, it is necessary to understand the normal spino-pelvic alignment parameters for this population. Methods: This study analyzed 86 Japanese healthy young adult volunteers (48 men and 38 women; age 35.9±11.1 years, mean±standard deviation [SD]).
The following parameters were measured on lateral standing radiographs of the entire spine: sagittal vertical axis (SVA), horizontal distance between the C7 plumb line and the posterior superior corner of the superior margin of S1, thoracic kyphotic angle (TK), lumbar lordotic angle (LLA), sacral slope (SS), pelvic tilt (PT), and pelvic incidence (PI). Results: The values (mean±SD) of SVA, TK, LLA, SS, PT, and PI were 8.45±25.7 mm, 27.5±9.6°, 43.4±14.6°, 34.6±7.8°, 13.2±8.2°, and 46.7±8.9°, respectively. The Japanese young adults evaluated in this study tended to have a smaller PI, LLA, TK, and SVA than most Caucasian people. Regarding gender differences, SVA was significantly longer and TK was significantly smaller in men; however, there was no statistically significant difference in LLA, SS, PT, and PI. Conclusions: Japanese young adults apparently have smaller PI and LLA values than Caucasian people. When making decisions for optimal sagittal spinal alignment, racial differences should be considered. abstract_id: PUBMED:36263343 Correlation between the cervical sagittal alignment and spine-pelvic sagittal alignment in asymptomatic adults. Background: Although there are studies that adequately document the linear correlation between pelvic incidence (PI), sacral slope, lumbar lordosis, and thoracic kyphosis, few have analyzed the pelvic-spine correlation including the cervical spine. Methods: This is a cross-sectional study, wherein the cervical spine was evaluated using radiography and computed tomography (CT) scans, and the lumbosacral spine and the pelvis were evaluated using radiography, in adult patients without spinal pathology. Using the Surgimap tool, cervical and spinopelvic parameters were calculated by several investigators. To evaluate the correlation between cervical and spinopelvic parameters, Spearman's coefficient was calculated. To evaluate the concordance correlation of the measured parameters of cervical sagittal alignment on tomography and conventional radiography, Lin's coefficient was calculated and Bland-Altman plots were generated. Results: A total of 51 healthy adults were included in a follow-up from January 2019 to December 2020. Cervical sagittal alignment and sagittal spinopelvic alignment were assessed using radiography, and a correlation was observed between T1 slope (T1S) and lumbar mismatch (coefficient of 0.28, P = 0.047). Then, cervical sagittal alignment was evaluated using CT and sagittal spinopelvic alignment using radiography, and no correlation was observed between PI and thoracic inlet angle or cervical mismatch with lumbar mismatch. Conclusion: In asymptomatic patients, in whom cervical sagittal alignment and spinal-pelvic alignment were evaluated, only a positive correlation was found between lumbar mismatch and T1S, which lacks clinical significance. No concordance was identified between lumbar mismatch and cervical mismatch. Therefore, it is inferred that the sagittal spine-pelvic alignment is independent of the sagittal cervical alignment. abstract_id: PUBMED:29423886 Paraplegic patients: how to measure balance and what is normal or functional? Purpose: To review the current understanding and data of sagittal balance and alignment considerations in paraplegic patients. Methods: A PubMed literature search was conducted to identify all relevant articles relating to sagittal alignment and sagittal balance considerations in paraplegic and spinal cord injury patients.
Results: While there are numerous studies and publications on sagittal balance in the ambulatory patient with spinal deformity or complex spine disorders, there is a paucity of literature on "normal" sagittal balance in paraplegic patients. Studies have reported significant alterations of the sagittal alignment parameters in non-ambulatory paraplegic patients compared to ambulatory patients. The variability of the alignment changes is related to differences in the level of the spinal cord injury and differences in the activation of truncal muscles that allow functional movements in those patients, particularly in optimizing sitting and transferring. The surgical goal in treating paraplegic patients with complex pathologies should not be solely directed at achieving the "normal" radiographic parameters of sagittal alignment in ambulatory patients. The goal should be to maintain good coronal balance to allow an ideal sitting position and to preserve motion segments to optimize function in paraplegic patients. Conclusion: Currently available literature has not defined normal sagittal parameters for paraplegic patients. There are significant differences in postural sagittal parameters and muscle activations in paraplegic and non-spinal cord injury patients that can lead to differences in sagittal alignment and balance. The treatment goal in spine surgery for paraplegic patients should address their global function, sitting balance, and ability to perform self-care rather than the accepted radiographic parameters for adult spinal deformity in ambulatory patients. abstract_id: PUBMED:35064287 Effects of global postural alignment on posture-stabilizing synergy and intermuscular coherence in bipedal standing. Clinicians frequently assess and intervene on postural alignment; however, notions of what constitutes good postural alignment are variable. Furthermore, the majority of current evidence appeals either to population norms or defines good postural alignment as the negation of what has been observed to correlate with pathology. The purpose of this study was to identify affirmative indicators of good postural alignment in reference to motor control theory. Electromyography (anterior leg, posterior leg, and trunk muscles) and motion capture data were acquired from 13 participants during 4-min bipedal standing trials in 4 conditions: control, −10%, +30%, and +60% of subject-specific anterior limits of stability. Synergistic kinematic coordination was quantified via the uncontrolled manifold framework, and correlated neural drive was quantified in posture-relevant muscle groups (anterior, posterior, and trunk) via intermuscular coherence. Multilevel models assessed the effects of sagittal plane alignment on both outcomes. We observed a within-subjects fixed effect in which kinematic synergistic coordination decreased as subjects became more misaligned. We also observed within-subjects fixed effects for middle- and high-frequency intermuscular coherence in the posterior group (increased coherence with increased misalignment) and for trunk intermuscular coherence across all frequency bands (decreased coherence with increased misalignment). Our findings indicate that it may be possible to describe healthy postural alignment in light of referent control theory. Greater misalignment with respect to vertical is associated with compromises in synergistic control of posture and increased corticospinal drive to specific muscle groups.
These results suggest that postural alignment may not simply be an empirical phenomenon. abstract_id: PUBMED:27421283 Effects of total hip arthroplasty on spinal sagittal alignment and static balance: a prospective study on 28 patients. Purpose: The aim of this study was to investigate postoperative changes in spinal sagittal alignment and postural balance in patients with hip-spine syndrome (HSS) and to verify whether any significant correlation exists between these changes and improvement of low back pain (LBP) symptoms following total hip replacement (THR) surgery. Methods: Twenty-eight consecutive patients with HSS undergoing unilateral THR were prospectively enrolled. Whole spine X-rays were obtained before surgery and 6 months after surgery. The following parameters were measured: cervical lordosis, thoracic kyphosis, lumbar lordosis, pelvic incidence (PI), pelvic tilt (PT), sacral slope (SS), and sagittal vertical axis (C7 SVA). Patients underwent pre- and postoperative postural balance assessment (950-460 BioSway™ system; clinical test of sensory integration [CTSIB], limit of stability test [LOS]) and patient-reported outcome measures assessment (Short Form-36 [SF-36], Oswestry Disability Index [ODI], Visual Analog Scale [VAS], and Western Ontario and McMaster Universities Arthritis Index [WOMAC]). Results: Mean age of the patients was 61.7 ± 6.4 years. Median (interquartile range, IQR) pre-operative PI and PT were 50.0 (35.0, 60.0) and 11.0 (7.0, 23.0), respectively; lumbar lordosis was 49.0 (41.0, 68.0) and SVA 5.0 (-11.0, 41.0). No significant changes in sagittal alignment were observed postoperatively. Median LBP VAS decreased from 6.0 (5.0, 7.0) to 3.0 (2.0, 4.0) and ODI from 54.0 (39.0, 64.0) to 34.0 (26.0, 48.0) (p < 0.001). Median CTSIB improved from 1.22 (1.07, 1.45) to 1.01 (0.80, 1.19) and LOS from 46.0 (42.0, 58.0) to 37.0 (32.0, 39.0) postoperatively. No significant correlation was noted between postoperative changes in spinal sagittal alignment or postural balance and improvement of LBP VAS and ODI scores. Conclusions: Our study demonstrated an improvement in LBP levels (VAS and ODI) and postural balance in patients with HSS following THR surgery. No significant changes were noted in radiographic spinal sagittal alignment postoperatively. The improvement in LBP levels does not correlate with post-operative changes in spinopelvic alignment or postural balance. abstract_id: PUBMED:38270602 Sagittal alignment of diverse mechanical complications following adult spinal deformity surgery. Purpose: To compare the sagittal alignment of patients with diverse mechanical complications (MCs) following adult spinal deformity (ASD) surgery with that of patients without MCs. Methods: A total of 371 patients who underwent ASD surgery were enrolled. The sagittal spinopelvic parameters were measured preoperatively and at the 6-month and last follow-up, and the global alignment and proportion (GAP) score was calculated. The subjects were divided into non-MC and MCs groups, and the MCs group was further divided into rod fracture (RF), screw breakage (SB), screw dislodgement (SD) and proximal junctional kyphosis (PJK) subgroups. Results: Preoperatively, the RF group had greater thoracolumbar kyphosis (TLK) and relative upper lumbar lordosis (RULL); the SB group had the largest pelvic incidence (PI) and lumbar lordosis (LL); the SD group had the least global sagittal imbalance; and the PJK group had the highest thoracic kyphosis (TK), TLK and RULL.
At the last follow-up, the RF and SB groups featured a large PI minus LL (PI-LL), while the PJK group featured a prominent TK; all the MCs subgroups had sagittal malalignment and a higher GAP score, and the SB group had the most severe cases. Logistic regressions showed that the relative spinopelvic alignment (RSA) score was correlated with RF, SB and SD, while the RSA and age scores were associated with PJK. Conclusion: Each patient with MCs had individual characteristics in the sagittal plane following ASD surgery, which may be helpful to understand the pathophysiology of poor sagittal alignment with its subsequent MCs and guide an eventual revision strategy. abstract_id: PUBMED:12509023 Direct magnitude estimation and interval scaling of pleasantness and severity in dysphonic and normal speakers. The purpose of this study was to determine the validity of voice pleasantness and overall voice severity ratings of dysphonic and normal speakers using direct magnitude estimation (DME) and equal-appearing interval (EAI) auditory-perceptual scaling procedures. Twelve naive listeners perceptually evaluated voice pleasantness and severity from connected speech samples produced by 24 adult dysphonic speakers and 6 normal adult speakers. A statistical comparison of the two auditory-perceptual scales yielded a linear relationship representative of a metathetic continuum for voice pleasantness. A statistical relationship that is consistent with a prothetic continuum was revealed for ratings of voice severity. These data provide support for the use of either DME or EAI scales when making auditory-perceptual judgments of pleasantness, but only DME scales when judging overall voice severity for dysphonic speakers. These results suggest further psychophysical study of perceptual dimensions of voice and speech must be undertaken in order to avoid the inappropriate and invalid use of EAI scales used in the auditory-perceptual evaluation of the normal and dysphonic voice. Answer: Yes, the sagittal postural alignment is different in normal and dysphonic adult speakers. A study that analyzed sagittal spine alignment through photographs and Digimizer-MedCalc Software found significant differences in thoracic length curvature (TL) and the kyphosis index (KI) between normal and dysphonic speakers. The mean TL and KI were significantly higher for dysphonic speakers compared to normal speakers. Additionally, a significant effect of sex was observed on the TL variable, with males having a higher mean TL than females. However, there was no significant interaction between dysphonia and sex on the TL and KI variables. For the lumbar length curvature variable, a significant main effect of sex was demonstrated, but no significant main effect of dysphonia or significant sex×dysphonia interaction was found (PUBMED:24836364).
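The dysphonia study above (PUBMED:24836364) separates groups on photogrammetric measures such as the kyphosis index (KI). The abstract does not spell out the exact Digimizer protocol, so the sketch below should be read as one plausible implementation: it applies a common flexicurve-style definition of KI (maximum curve depth over chord length, times 100) to hypothetical digitized thoracic landmarks. Both the formula choice and the coordinates are illustrative assumptions.

import numpy as np

def kyphosis_index(points: np.ndarray) -> float:
    """points: (n, 2) array of sagittal (x, y) thoracic landmarks, cranial to caudal."""
    chord = points[-1] - points[0]                # straight line from first to last landmark
    length = np.linalg.norm(chord)
    unit = chord / length
    rel = points - points[0]
    # perpendicular distance of each landmark from the chord (2D cross product)
    depth = np.abs(rel[:, 0] * unit[1] - rel[:, 1] * unit[0])
    return 100.0 * depth.max() / length

# Hypothetical thoracic landmarks (cm) traced from a sagittal photograph
thoracic = np.array([[0.0, 0.0], [1.2, 5.0], [1.8, 10.0], [1.4, 15.0], [0.3, 20.0]])
print(f"KI = {kyphosis_index(thoracic):.1f}")     # ~8.2 for these illustrative points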
Instruction: Traffic-related air pollution and the onset of myocardial infarction: disclosing benzene as a trigger? Abstracts: abstract_id: PUBMED:24932584 Traffic-related air pollution and the onset of myocardial infarction: disclosing benzene as a trigger? A small-area case-crossover study. Background And Objectives: Exposure to traffic is an established risk factor for the triggering of myocardial infarction (MI). Particulate matter, mainly emitted by diesel vehicles, appears to be the most important stressor. However, the possible influence of benzene from gasoline-fueled cars has not been explored so far. Methods And Results: We conducted a case-crossover study of 2,134 MI cases recorded by the local Coronary Heart Disease Registry (2000-2007) in the Strasbourg Metropolitan Area (France). Available individual data were age, gender, previous history of ischemic heart disease and address of residence at the time of the event. Nitrogen dioxide, particles of median aerodynamic diameter <10 µm (PM10), ozone, carbon monoxide and benzene air concentrations were modeled on an hourly basis at the census block level over the study period using the deterministic ADMS-Urban air dispersion model. Model input data were emissions inventories, background pollution measurements, and meteorological data. We found a positive, statistically significant association between concentrations of benzene and the onset of MI: the per cent increase in risk for a 1 µg/m3 increase in benzene concentration in the previous 0, 0-1 and 1 day was 10.4 (95% confidence interval 3-18.2), 10.7 (2.7-19.2) and 7.2 (0.3-14.5), respectively. The associations between the other pollutants and the outcome were much lower and in accordance with the literature. Conclusion: We have observed that benzene in ambient air is strongly associated with the triggering of MI. This novel finding needs confirmation. If so, this would mean that not only diesel vehicles, the main particulate matter emitters, but also gasoline-fueled cars, the main benzene emitters, should be taken into account for public health action. abstract_id: PUBMED:35240186 Short-term exposure to traffic-related air pollution and STEMI events: Insights into STEMI onset and related cardiac impairment. Aims: Evidence on the impacts of traffic-related air pollution (TRAP) on ST-segment elevation myocardial infarction (STEMI) events is limited. We aimed to assess the acute effects of TRAP exposure on the clinical onset of STEMI and related cardiac impairments. Methods And Results: We recruited patients who were admitted for STEMI and underwent primary percutaneous coronary intervention at Peking University Third Hospital between 2014 and 2020. Indicators relevant to cardiac impairments were measured. Concomitantly, hourly concentrations of traffic pollutants were monitored throughout the study period, including fine particulate matter, black carbon (BC), particles in size ranges of 5-560 nm, oxides of nitrogen (NOX), nitrogen dioxide, and carbon monoxide. The mean (SD) age of participants was 62.4 (12.5) years. Daily average (range) concentrations of ambient BC and NOX were 3.9 (0.1-25.0) μg/m3 and 90.8 (16.6-371.7) μg/m3. Significant increases in STEMI risks of 5.9% (95% CI: 0.1, 12.0) to 21.9% (95% CI: 6.0, 40.2) were associated with interquartile range increases in exposure to TRAP within a few hours.
These changes were accompanied by significant elevations in cardiac troponin T levels of 6.9% (95% CI: 0.2, 14.1) to 41.7% (95% CI: 21.2, 65.6), as well as reductions in left ventricular ejection fraction of 1.5% (95% CI: 0.1, 2.9) to 3.7% (95% CI: 0.8, 6.4). Furthermore, the associations were attenuated in participants living in areas with higher residential greenness levels. Conclusions: Our findings extend current understanding that short-term exposure to higher levels of traffic pollution was associated with increased STEMI risks and exacerbated cardiac impairments, and provide evidence on traffic pollution control priority for protecting vulnerable populations who are at greater risks of cardiovascular events. abstract_id: PUBMED:37429056 Can traffic-related air pollution trigger myocardial infarction within a few hours of exposure? Identifying hourly hazard periods. Introduction: Traffic-related air pollution can trigger myocardial infarction (MI). However, the hourly hazard period of exposure to nitrogen dioxide (NO2), a common traffic tracer, for incident MI has not been fully evaluated. Thus, the current hourly US national air quality standard (100 ppb) is based on limited hourly-level effect estimates, which may not adequately protect cardiovascular health. Objectives: We characterized the hourly hazard period of NO2 exposure for MI in New York state (NYS), USA, from 2000 to 2015. Methods: For nine cities in NYS, we obtained data on MI hospitalizations from the NYS Department of Health Statewide Planning and Research Cooperative System and hourly NO2 concentrations from the US Environmental Protection Agency's Air Quality System database. We used city-wide exposures and a case-crossover study design with distributed lag non-linear terms to assess the relationship between hourly NO2 concentrations over 24 h and MI, adjusting for hourly temperature and relative humidity. Results: The mean NO2 concentration was 23.2 ppb (standard deviation: 12.6 ppb). In the six hours preceding MI, we found linearly increased risk with increasing NO2 concentrations. At lag hour 0, a 10 ppb increase in NO2 was associated with 0.2 % increased risk of MI (Rate Ratio [RR]: 1.002; 95 % Confidence Interval [CI]: 1.000, 1.004). We estimated a cumulative RR of 1.015 (95 % CI: 1.008, 1.021) for all 24 lag hours per 10 ppb increase in NO2. Lag hours 2-3 had consistently elevated risk ratios in sensitivity analyses. Conclusions: We found robust associations between hourly NO2 exposure and MI risk at concentrations far lower than current hourly NO2 national standards. Risk of MI was most elevated in the six hours after exposure, consistent with prior studies and experimental work evaluating physiologic responses after acute traffic exposure. Our findings suggest that current hourly standards may be insufficient to protect cardiovascular health. abstract_id: PUBMED:20617041 Air pollution exposure--a trigger for myocardial infarction? The association between ambient air pollution exposure and hospitalization for cardiovascular events has been reported in several studies with conflicting results. A case-crossover design was used to investigate the effects of air pollution in 660 first-time myocardial infarction cases in Stockholm in 1993-1994, interviewed shortly after diagnosis using a standard protocol. Air pollution data came from central urban background monitors. No associations were observed between the risk for onset of myocardial infarction and two-hour or 24-hour air pollution exposure. 
No evidence of susceptible subgroups was found. This study provides no support that moderately elevated air pollution levels trigger first-time myocardial infarction. abstract_id: PUBMED:26454658 Long-term traffic air and noise pollution in relation to mortality and hospital readmission among myocardial infarction survivors. Background: There is relatively little evidence of health effects of long-term exposure to traffic-related pollution in susceptible populations. We investigated whether long-term exposure to traffic air and noise pollution was associated with all-cause mortality or hospital readmission for myocardial infarction (MI) among survivors of hospital admission for MI. Methods: Patients from the Myocardial Ischaemia National Audit Project database resident in Greater London (n = 18,138) were followed for death or readmission for MI. High spatially-resolved annual average air pollution (11 metrics of primary traffic, regional or urban background) derived from a dispersion model (resolution 20 m × 20 m) and road traffic noise for the years 2003-2010 were used to assign exposure at residence. Hazard ratios (HR, 95% confidence interval (CI)) were estimated using Cox proportional hazards models. Results: Most air pollutants were positively associated with all-cause mortality alone and in combination with hospital readmission. The largest associations with mortality per interquartile range (IQR) increase of pollutant were observed for non-exhaust particulate matter (PM10) (HR = 1.05 (95% CI 1.00, 1.10), IQR = 1.1 μg/m3); oxidant gases (HR = 1.05 (95% CI 1.00, 1.09), IQR = 3.2 μg/m3); and the coarse fraction of PM (HR = 1.05 (95% CI 1.00, 1.10), IQR = 0.9 μg/m3). Adjustment for traffic noise only slightly attenuated these associations. The association for a 5 dB increase in road-traffic noise with mortality was HR = 1.02 (95% CI 0.99, 1.06) independent of air pollution. Conclusions: These data support a relationship of primary traffic and regional/urban background air pollution with poor prognosis among MI survivors. Although imprecise, traffic noise appeared to have a modest association with prognosis independent of air pollution. abstract_id: PUBMED:26867595 Road traffic noise, air pollution and myocardial infarction: a prospective cohort study. Purpose: Both road traffic noise and air pollution have been linked to cardiovascular disease. However, there are few prospective epidemiological studies available where both road traffic noise and air pollution have been analyzed simultaneously. The aim of this study was to investigate the relation between road traffic noise, air pollution and incident myocardial infarction in both a current (1-year average) and a medium-term (3-year average) perspective. Methods: This study was based on a stratified random sample of persons aged 18-80 years who answered a public health survey in Skåne, Sweden, in 2000 (n = 13,512). The same individuals received a repeated survey in 2005 and 2010. Diagnoses of myocardial infarction (MI) were obtained from medical records for both inpatient and outpatient specialized care. The endpoint was first MI during 2000-2010. Participants with prior myocardial infarction were excluded at baseline. Yearly average levels of noise (LDEN) and air pollution (NOx) were estimated using a geographic information system for the residential address every year until censoring. Results: The mean exposure levels for road traffic noise and air pollution in 2005 were LDEN 51 dB(A) and NOx 11 µg/m3, respectively.
After adjustment for individual confounders (age, sex, body mass index, smoking, education, alcohol consumption, civil status, year, country of birth and physical activity), a 10-dB(A) increase in current noise exposure did not increase the incidence rate ratio (IRR) for MI, 0.99 (95% CI 0.86-1.14). Neither did a 10-μg/m3 increase in current NOx increase the risk of MI, 1.02 (95% CI 0.86-1.21). The IRR for MI associated with combined exposure to road traffic noise >55 dB(A) and NOx >20 µg/m3 was 1.21 (95% CI 0.90-1.64) compared to <55 dB(A) and <20 µg/m3. Conclusions: This study did not provide evidence for an increased risk of MI due to exposure to road traffic noise or air pollution at moderate average exposure levels. abstract_id: PUBMED:35088601 Redox Regulatory Changes of Circadian Rhythm by the Environmental Risk Factors Traffic Noise and Air Pollution. Significance: Risk factors in the environment such as air pollution and traffic noise contribute to the development of chronic noncommunicable diseases. Recent Advances: Epidemiological data suggest that air pollution and traffic noise are associated with a higher risk for cardiovascular, metabolic, and mental disease, including hypertension, heart failure, myocardial infarction, diabetes, arrhythmia, stroke, neurodegeneration, depression, and anxiety disorders, mainly by activation of stress hormone signaling, inflammation, and oxidative stress. Critical Issues: We here provide an in-depth review on the impact of the environmental risk factors air pollution and traffic noise exposure (components of the external exposome) on cardiovascular health, with special emphasis on the role of environmentally triggered oxidative stress and dysregulation of the circadian clock. Also, a general introduction on the contribution of circadian rhythms to cardiovascular health and disease as well as a detailed mechanistic discussion of redox regulatory pathways of the circadian clock system is provided. Future Directions: Finally, we discuss the potential of preventive strategies or "chrono" therapy for cardioprotection. Antioxid. Redox Signal. 37, 679-703. abstract_id: PUBMED:27472911 Long-Term Exposure to Traffic-Related Air Pollution and Risk of Incident Atrial Fibrillation: A Cohort Study. Background: Atrial fibrillation is the most common sustained arrhythmia and is associated with cardiovascular morbidity and mortality. The few studies conducted on short-term effects of air pollution on episodes of atrial fibrillation indicate a positive association, though not consistently. Objectives: The aim of this study was to evaluate the long-term impact of traffic-related air pollution on incidence of atrial fibrillation in the general population. Methods: In the Danish Diet, Cancer, and Health cohort of 57,053 people 50-64 years old at enrollment in 1993-1997, we identified 2,700 cases of first-ever hospital admission for atrial fibrillation from enrollment to end of follow-up in 2011. For all cohort members, exposure to traffic-related air pollution assessed as nitrogen dioxide (NO2) and nitrogen oxides (NOx) was estimated at all present and past residential addresses from 1984 to 2011 using a validated dispersion model. We used Cox proportional hazard model to estimate associations between long-term residential exposure to NO2 and NOx and risk of atrial fibrillation, after adjusting for lifestyle and socioeconomic position.
Results: A 10 μg/m3 higher 10-year time-weighted mean exposure to NO2 preceding diagnosis was associated with an 8% higher risk of atrial fibrillation [incidence rate ratio: 1.08; 95% confidence interval (CI): 1.01, 1.14] in adjusted analysis. Though weaker, similar results were obtained for long-term residential exposure to NOx. We found no clear tendencies regarding effect modification of the association between NO2 and atrial fibrillation by sex, smoking, hypertension or myocardial infarction. Conclusion: We found long-term residential traffic-related air pollution to be associated with higher risk of atrial fibrillation. Accordingly, the present findings lend further support to the demand for abatement of air pollution. Citation: Monrad M, Sajadieh A, Christensen JS, Ketzel M, Raaschou-Nielsen O, Tjønneland A, Overvad K, Loft S, Sørensen M. 2017. Long-term exposure to traffic-related air pollution and risk of incident atrial fibrillation: a cohort study. Environ Health Perspect 125:422-427; http://dx.doi.org/10.1289/EHP392. abstract_id: PUBMED:35868579 Long-term exposure to air pollution, coronary artery calcification, and carotid artery plaques in the population-based Swedish SCAPIS Gothenburg cohort. Long-term exposure to air pollution is associated with cardiovascular events. A main suggested mechanism is that air pollution accelerates the progression of atherosclerosis, yet current evidence is inconsistent regarding the association between air pollution and coronary artery and carotid artery atherosclerosis, which are well-established causes of myocardial infarction and stroke. We studied associations between low levels of long-term air pollution, coronary artery calcium (CAC) score, and the prevalence and area of carotid artery plaques, in a middle-aged population-based cohort. The Swedish CArdioPulmonary bioImage Study (SCAPIS) Gothenburg cohort was recruited during 2013-2017 and thoroughly examined for cardiovascular risk factors, including computed tomography of the heart and ultrasonography of the carotid arteries. In 5070 participants (age 50-64 years), yearly residential exposures to air pollution (PM2.5, PM10, PMcoarse, NOx, and exhaust-specific PM2.5 1990-2015) were estimated using high-resolution dispersion models. We used Poisson regression to examine associations between long-term (26 years' mean) exposure to air pollutants and CAC score, and prevalence of carotid artery plaques, adjusted for potential confounders. Among participants with carotid artery plaques, we also examined the association with plaque area using linear regression. Mean exposure to PM2.5 was low by international standards (8.5 μg/m3). There were no consistent associations between long-term total PM2.5 exposure and CAC score or presence of carotid artery plaques, but an association between total PM2.5 and larger plaque area in participants with carotid plaques. Associations with traffic-related air pollutants were consistently positive for both a high CAC score and bilateral carotid artery plaques. These associations were independent of road traffic noise. We found stronger associations among men and participants with cardiovascular risk factors. The results lend some support to atherosclerosis as a main modifiable pathway between low levels of traffic-related ambient air pollution and cardiovascular disease, especially in vulnerable individuals. 
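A practical note on reading the effect sizes in this abstract set: hazard and rate ratios are reported per fixed exposure increment (for example, the atrial fibrillation study directly above reports an IRR of 1.08 per 10 μg/m3 NO2). Under the log-linear exposure-response form assumed by Cox and Poisson models, such a ratio can be rescaled to another increment and converted to a percent change in risk. The helper below is a generic sketch of that arithmetic, not code from any of the studies.

import math

def rescale_ratio(hr: float, ref_increment: float, new_increment: float) -> float:
    """HR per ref_increment -> HR per new_increment, assuming log-linearity."""
    beta = math.log(hr) / ref_increment   # per-unit log hazard/rate ratio
    return math.exp(beta * new_increment)

hr_per_10 = 1.08                          # IRR per 10 ug/m3 NO2 (PUBMED:27472911)
hr_per_5 = rescale_ratio(hr_per_10, 10.0, 5.0)
print(f"HR per 5 ug/m3 = {hr_per_5:.3f} ({(hr_per_5 - 1) * 100:.1f}% higher risk)")  # ~1.039, ~3.9%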
abstract_id: PUBMED:30021805 Effects of Leisure-Time and Transport-Related Physical Activities on the Risk of Incident and Recurrent Myocardial Infarction and Interaction With Traffic-Related Air Pollution: A Cohort Study. Background: Physical activity enhances the uptake of air pollutants, possibly reducing its beneficial effects. We examined the effects of leisure-time and transport-related physical activities on the risk of myocardial infarction (MI), and whether potential benefits on MI are reduced by exposure to traffic-related air pollution. Methods And Results: A group of 57 053 participants (50-65 years of age) from the Danish Diet, Cancer, and Health cohort reported physical activity at baseline (1993-1997) and were linked to registry data on hospital contacts and out-of-hospital deaths caused by MI, until December 2015. Nitrogen dioxide levels were estimated at participants' baseline residences. We used Cox regressions to associate participation in sports, cycling, walking, and gardening with incident and recurrent MI, and tested for interaction by nitrogen dioxide. Of 50 635 participants without MI at baseline, 2936 developed incident MI, and of 1233 participants with MI before baseline, 324 had recurring MI during follow-up. Mean nitrogen dioxide concentration was 18.7 μg/m3 at baseline (1993-1997). We found inverse statistically significant associations between participation in sports (hazard ratio; 95% confidence interval: 0.85; 0.79-0.92), cycling (0.91; 0.84-0.98), gardening (0.87; 0.80-0.95), and incident MI, while the association with walking was statistically nonsignificant (0.95; 0.83-1.08). Recurrent MI was statistically nonsignificantly inversely associated with cycling (0.80; 0.63-1.02), walking (0.82, 0.57-1.16), and gardening (0.91; 0.71-1.18), and positively with sports (1.06; 0.83-1.35). There was no effect modification of the associations between physical activity and MI by nitrogen dioxide. Conclusions: Benefits of physical activity on both the incidence and the recurrence of MI are not reduced by exposure to high levels of air pollution. Answer: The association between traffic-related air pollution (TRAP) and the onset of myocardial infarction (MI) has been investigated in several studies, with some identifying specific pollutants as potential triggers. A case-crossover study conducted in the Strasbourg Metropolitan Area (France) found a statistically significant association between benzene concentrations and the onset of MI, suggesting that benzene in ambient air could be a strong trigger for MI. This study highlighted that not only diesel vehicles, which are the main emitters of particulate matter, but also gasoline-fueled cars, which are the main emitters of benzene, should be considered in public health actions to prevent MI (PUBMED:24932584). Other studies have also found associations between TRAP and increased risks of ST-segment elevation myocardial infarction (STEMI) events, as well as exacerbated cardiac impairments such as elevated cardiac troponin T levels and reduced left ventricular ejection fraction. These findings suggest that short-term exposure to higher levels of traffic pollution is associated with increased STEMI risks and cardiac impairments (PUBMED:35240186). In New York state (USA), a study characterized the hourly hazard period of nitrogen dioxide (NO2) exposure for MI, finding robust associations between hourly NO2 exposure and MI risk at concentrations far lower than current hourly NO2 national standards. 
The risk of MI was most elevated in the six hours after exposure (PUBMED:37429056). However, not all studies have found a clear link between air pollution and MI. For instance, a study in Stockholm reported no associations between the risk for onset of MI and short-term air pollution exposure (PUBMED:20617041). Similarly, a study based on a public health survey in Skåne, Sweden, did not find evidence for an increased risk of MI due to exposure to road traffic noise or air pollution at moderate average exposure levels (PUBMED:26454658). In summary, while several studies suggest that TRAP, including specific pollutants like benzene and NO2, can trigger MI, the evidence is not entirely consistent across all research. The novel finding regarding benzene as a potential trigger for MI (PUBMED:24932584) indicates a need for further confirmation and consideration in public health initiatives aimed at reducing the risk of MI associated with TRAP.
Instruction: Are cold light sources really cold? Abstracts: abstract_id: PUBMED:17057587 Are cold light sources really cold? Background: A fiber optic light source is the central part of endoscopic surgery. However, the light generation process causes heat transmission from a source to tip of a scope. In this study, we measured the amount of heating and pathologic effects of direct contact with the tip of scopes on the small bowel in an experimental set-up. Materials And Methods: Temperature measurements were performed at the tip of 4 different scopes (Aesculap, Olympus, Karl Storz, and Richard Wolf), which were connected to either of 3 different xenon light sources (Olympus, Richard Wolf, Karl Storz). Temperatures at the outlet of light sources and the tip of fiber optic cables were measured as well. Tissue samples from the small bowel of a pig were obtained after exposing them to direct contact with the tip of the scopes or the fiber optic cable. Results: The temperature measurements at the tip of the scopes varied between 60 degrees C and 100 degrees C (Celsius). The temperatures showed a wide variation according to the type of light source and fiber optic cable the scopes were connected to. The average temperature at the outlet of the light sources and the tip of fiber optic cables was 750 degrees C and 250 degrees C, respectively. The microscopic scores of the small bowel injury induced by exposition to the heat at the tip of the scopes were significantly high after 5 seconds of contact. Direct contact of the tip of the fiber optic cable caused total carbonization in the wall of the small bowel. Conclusion: Direct contact of the tip of the scope with small bowel may cause functional and cytologic injury even after short durations of exposure. Therefore, we do not recommend direct contact of scopes with the intra-abdominal organs to avoid heat injuries. In addition, this study also emphasizes the variation in heat generation at the tip of the scopes when used with a mismatching light source and fiber optic cable. abstract_id: PUBMED:29200323 Refinements to light sources used to analyze the chloroplast cold-avoidance response over the past century. Chloroplasts alter their subcellular positions in response to ambient light and temperature conditions. This well-characterized light-induced response, which was first described nearly 100 years ago, is regulated by the blue-light photoreceptor, phototropin. By contrast, the molecular mechanism of low temperature-induced chloroplast relocation (i.e., the cold-avoidance response) was unexplored until its discovery in the fern Adiantum capillus-veneris in 2008. Because this response is also regulated by phototropin, it was thought to occur in a blue light-dependent manner. However, until recently, the blue light dependency of this response could not be examined due to the lack of a stable light source under cold conditions. We recently refined the light source to precisely control light intensity under cold conditions. Using this light source, we observed the blue light dependency of the cold-avoidance response in the liverwort Marchantia polymorpha and the phototropin2-mediated cold-avoidance response in the flowering plant Arabidopsis thaliana. Thus, this mechanism is evolutionarily conserved among land plants. abstract_id: PUBMED:32552683 Cold priming uncouples light- and cold-regulation of gene expression in Arabidopsis thaliana. 
Background: The majority of stress-sensitive genes respond to cold and high light in the same direction if plants face the stresses for the first time. As shown recently for a small selection of genes of the core environmental stress response cluster, pre-treatment of Arabidopsis thaliana with a 24 h long 4 °C cold stimulus modifies cold regulation of gene expression for up to a week at 20 °C, although the primary cold effects are reverted within the first 24 h. Such memory-based regulation is called priming. Here, we analyse the effect of 24 h cold priming on cold regulation of gene expression on a transcriptome-wide scale and investigate if and how cold priming affects light regulation of gene expression. Results: Cold priming affected cold and excess light regulation of a small subset of genes. In contrast to the strong gene co-regulation observed upon cold and light stress in non-primed plants, most priming-sensitive genes were regulated in a stressor-specific manner in cold-primed plants. Furthermore, almost as many genes were inversely regulated as co-regulated by a 24 h long 4 °C cold treatment and exposure to heat-filtered high light (800 μmol quanta m−2 s−1). Gene ontology enrichment analysis revealed that cold priming preferentially supports expression of genes involved in the defence against plant pathogens upon cold triggering. The regulation took place at the cost of the expression of genes involved in growth regulation and transport. On the contrary, cold priming resulted in stronger expression of genes regulating metabolism and development and weaker expression of defence genes in response to high light triggering. qPCR with independently cultivated and treated replicates confirmed the trends observed in the RNASeq guide experiment. Conclusion: A 24 h long priming cold stimulus activates a several-days-lasting stress memory that controls cold and light regulation of gene expression and adjusts growth and defence regulation in a stressor-specific manner. abstract_id: PUBMED:25489279 Surfactant-free synthesis of Cu2O hollow spheres and their wavelength-dependent visible photocatalytic activities using LED lamps as cold light sources. A facile synthesis route to cuprous oxide (Cu2O) hollow spheres under different temperatures without the aid of a surfactant was introduced. Morphology and structure varied as functions of reaction temperature and duration. A bubble template-mediated formation mechanism was proposed, which explained why the morphology changed with reaction temperature. The obtained Cu2O hollow spheres were active photocatalysts for the degradation of methyl orange under visible light. A self-designed setup of light-emitting diode (LED) cold light sources with wavelengths of 450, 550, and 700 nm was used for the first time in the photocatalysis experiment, with no extra heat introduced. The most suitable wavelength for photocatalytic degradation by Cu2O is 550 nm, because the light energy (2.25 eV) is closest to the band gap of Cu2O (2.17 eV). These surfactant-free synthesized Cu2O hollow spheres would be highly attractive for practical applications in water pollutant removal and environmental remediation. abstract_id: PUBMED:35755673 Light Quality Modulates Plant Cold Response and Freezing Tolerance. The cold acclimation process is regulated by many factors like ambient temperature, day length, light intensity, or hormonal status.
Experiments with plants grown under different light quality conditions indicate that the plant response to cold is also a light-quality-dependent process. Here, the role of light quality in the cold response was studied in 1-month-old Arabidopsis thaliana (Col-0) plants exposed for 1 week to 4°C under short-day conditions under white (100 and 20 μmol m−2 s−1), blue, or red (20 μmol m−2 s−1) light conditions. Upregulated expression of CBF1, inhibition of photosynthesis, and an increase in membrane damage showed that blue light enhanced the effect of low temperature. Interestingly, cold-treated plants under blue and red light showed only limited freezing tolerance compared to white light cold-treated plants. Next, the specificity of the light quality signal in the cold response was evaluated in Arabidopsis accessions originating from different and contrasting latitudes. In all but one Arabidopsis accession, blue light increased the effect of cold on photosynthetic parameters and electrolyte leakage. This effect was not found for Ws-0, which lacks functional CRY2 protein, indicating its role in the cold response. Proteomics data confirmed significant differences between red and blue light-treated plants at low temperatures and showed that the cold response is highly accession-specific. In general, blue light mainly increased the cold-stress-related proteins, and red light induced higher expression of chloroplast-related proteins, which correlated with higher photosynthetic parameters in red light cold-treated plants. Altogether, our data suggest that light modulates two distinct mechanisms during cold treatment: a red-light-driven program that maintains cell function and a blue-light-activated specific cold response. The importance of the mutual complementarity of these mechanisms was demonstrated by the significantly higher freezing tolerance of cold-treated plants under white light. abstract_id: PUBMED:29545168 Systematic identification of light-regulated cold-responsive proteome in a model cyanobacterium. Differential expression of cold-responsive proteins is necessary for cyanobacteria to acclimate to cold stress frequently occurring in their natural habitats. Accumulating evidence indicates that cold-induced expression of certain proteins is dependent on light illumination, but a systematic identification of light-dependent and/or light-independent cold-responsive proteins in cyanobacteria is still lacking. Herein, we comprehensively identified cold-responsive proteins in the model cyanobacterium Synechocystis sp. PCC 6803 (hereafter Synechocystis) that was cold-stressed in the light or in the dark. In total, 72 proteins were identified as cold-responsive, including 19 and 17 proteins whose cold-responsiveness is light-dependent and light-independent, respectively. Bioinformatic analysis revealed that outer membrane proteins, proteins involved in translation, and proteins involved in divergent types of stress responses were highly enriched among the cold-responsive proteins. Moreover, a protein network responsible for nitrogen assimilation and amino acid biosynthesis, transcription, and translation was upregulated in response to the cold stress. The network contains both light-dependent and light-independent cold-responsive proteins, probably for fine-tuning its activity to endow Synechocystis with the flexibility necessary for cold adaptation in their natural habitats, where days and nights alternate.
Together, our results should serve as an important resource for future study toward understanding the mechanism of cold acclimation in cyanobacteria. Significance: Photosynthetic cyanobacteria need to acclimate to frequently occurring abiotic stresses such as cold in their natural habitats, and the acclimation process has to be coordinated with photosynthesis, the light-dependent process that provides carbon and energy for propagation of cyanobacteria. It is conceivable that cold-induced differential protein expression can also be regulated by light. Hence it is important to systematically identify cold-responsive proteins that are regulated or not regulated by light to better understand the mechanism of cold acclimation in cyanobacteria. In this manuscript, we identified a network involved in protein synthesis that was upregulated by cold. The network contains both light-dependent and light-independent cold-inducible proteins, presumably for fine-tuning the activity of the network in the natural habitats of cyanobacteria, where days and nights are alternating. This finding underscores the significance of proteome reprogramming toward enhancing protein synthesis in cold adaptation. abstract_id: PUBMED:34427646 The CRY2-COP1-HY5-BBX7/8 module regulates blue light-dependent cold acclimation in Arabidopsis. Light and temperature are two key environmental factors that coordinately regulate plant growth and development. Although the mechanisms that integrate signaling mediated by cold and red light have been unraveled, the roles of the blue light photoreceptors cryptochromes in plant responses to cold remain unclear. In this study, we demonstrate that the CRYPTOCHROME2 (CRY2)-COP1-HY5-BBX7/8 module regulates blue light-dependent cold acclimation in Arabidopsis thaliana. We show that phosphorylated forms of CRY2 induced by blue light are stabilized by cold stress and that cold-stabilized CRY2 competes with the transcription factor HY5 to attenuate the HY5-COP1 interaction, thereby allowing HY5 to accumulate at cold temperatures. Furthermore, our data demonstrate that B-BOX DOMAIN PROTEIN7 (BBX7) and BBX8 function as direct HY5 targets that positively regulate freezing tolerance by modulating the expression of a set of cold-responsive genes, which mainly occurs independently of the C-repeat-binding factor pathway. Our study uncovers a mechanistic framework by which CRY2-mediated blue-light signaling enhances freezing tolerance, shedding light on the molecular mechanisms underlying the crosstalk between cold and light signaling pathways in plants. abstract_id: PUBMED:35588566 Asymmetry of the pupillary light reflex during a cold pressor test. Pupillary light reflexes were monitored in 20 healthy participants while they immersed one foot in painfully cold water (the cold pressor test) or in warm water for 1 min. Pupillary dilatation was greater during the cold pressor test than during the warm-water immersion. In addition, during the cold pressor test, re-dilation after exposure to bright light proceeded more rapidly for the ipsilateral than the contralateral pupil. These findings suggest that sympathetic pupillary drive is greater ipsilateral than contralateral to pain. abstract_id: PUBMED:18379966 Introduction of cold light to endoscopy. It is the aim of the paper to describe how, 40 years ago, optic glass fibers were developed, and what K. Storz's contribution to the new technology has been. In 1951 the term "Cold Light" was used for the first time for illumination of a French-type film and photo endoscope.
In 1957 the gastroenterologist B. Hirschowitz at Ann Arbor, U.S.A., succeeded in making glass fibers with high light-guiding properties. In 1961 the Cystoscope Makers Inc. (ACMI) in New York, using these fibers, brought the first flexible gastroscope to the market, still equipped with a conventional electric lamp. But in 1960, the year before, the physicists of ACMI, J. H. Hett and L. Curtiss, built the first cold light endoscope using glass fibers for both light and image conduction. In the following years ACMI equipped all of its endoscopes with this new type of illumination. Not before 1963 did K. Storz and the other German manufacturers produce their first cold light cystoscopes. Not possessing the know-how of glass fiber manufacturing, they had to get their fibers from abroad. K. Storz transferred the term "cold light", which before had been the label of his French-type endoscopes, to the new glass fiber illumination. He constructed an excellent light source for fiber illumination without having light cables of his own fabrication. That is why his name is intimately connected with cold light illumination. But, nevertheless, the invention of the new glass fiber illumination must be credited to B. Hirschowitz and the physicists of ACMI in the U.S.A. abstract_id: PUBMED:24703081 Cold-induced gene expression and ω(3) fatty acid unsaturation is controlled by red light in Synechocystis. The expression of cold-induced genes, which are controlled by the cold sensor histidine kinase Hik33, and the formation of ω(3) polyunsaturated fatty acids are controlled by light in the cyanobacterium Synechocystis sp. PCC 6803. Cold-induced Hik33-dependent gene expression is initiated by red light (∼700 nm), but not by blue or green light. Red light also turns on the ω(3) fatty acid desaturation. Different combinations of other wavelengths in the red spectral region (635 and 726 nm) had no effect on the red-light-activated cold-induced transcription or fatty acid desaturation. Therefore, the involvement of phytochrome-like photoreceptor(s), similar to phytochromes of higher plants, in this regulation was not confirmed. The absence of light-dependence of gene expression in the mutant cells deficient in Hik33 suggests the involvement of this histidine kinase in the direct or red-light-mediated regulation of cold responses in Synechocystis. Answer: Cold light sources are often perceived as "cold" because they are designed to emit light with minimal heat. However, the term "cold light" can be misleading, as these light sources are not entirely devoid of heat generation. For instance, in the context of endoscopic surgery, fiber optic light sources, which are considered cold light sources, can still transmit significant heat from the source to the tip of a scope. Temperature measurements have shown that the tip of such scopes can reach temperatures between 60°C and 100°C, and direct contact with tissue can cause injury or even carbonization after short durations of exposure (PUBMED:17057587). In the field of plant biology, cold light sources are used to study the cold-avoidance response in chloroplasts and other cold-related processes in plants. These studies often require precise control of light intensity under cold conditions, and advancements have been made to refine light sources to enable such control (PUBMED:29200323).
Additionally, cold priming in plants can uncouple the regulation of gene expression by light and cold, indicating that the interaction between light sources and cold responses is complex (PUBMED:32552683). Moreover, cold light sources like LED lamps have been used in photocatalysis experiments to avoid introducing extra heat, which can influence the reaction (PUBMED:25489279). In the context of plant acclimation to cold, light quality has been shown to modulate plant responses to cold and freezing tolerance, with different light qualities (blue, red, white) having distinct effects on the cold acclimation process (PUBMED:35755673). Research in cyanobacteria has also revealed that cold-responsive proteins can be differentially expressed depending on whether the organism is exposed to light or darkness during cold stress, suggesting that light can influence the cold response (PUBMED:29545168). Finally, the blue light photoreceptor cryptochrome has been implicated in the regulation of blue light-dependent cold acclimation in Arabidopsis (PUBMED:34427646). In summary, while cold light sources are designed to minimize heat emission, they are not completely cold and can still produce heat that may affect biological tissues or processes.
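A quick worked check of the wavelength-to-energy claim in the Cu2O abstract (PUBMED:25489279) may help: photon energy follows the Planck relation E = hc/λ, roughly 1240 eV·nm divided by the wavelength. A minimal sketch in Python; the constants are standard and only the wavelengths come from the abstract:

# Photon energy in eV from wavelength in nm, via E = h*c / (lambda * e)
H = 6.62607015e-34          # Planck constant, J*s
C = 2.99792458e8            # speed of light, m/s
E_CHARGE = 1.602176634e-19  # J per eV

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / E_CHARGE

for nm in (450, 550, 700):  # the three LED wavelengths used in the study
    print(nm, "nm ->", round(photon_energy_ev(nm), 2), "eV")
# 550 nm -> 2.25 eV, the value the abstract matches against the 2.17 eV band gap of Cu2O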
Instruction: Can we reduce the burden of depression? Abstracts: abstract_id: PUBMED:34939062 Nutritional therapy can reduce the burden of depression management in low income countries: A review. Depression is a serious mental and mood disorder with a global health and economic burden. This burden may be overwhelming in low-income countries, although there are insufficient data. Most antidepressant formulations are predicated on the monoamine, neuroendocrine and neuro-inflammation hypotheses, with little or no cognizance of other neurochemicals altered in depression. A nutritional strategy with or without conventional antidepressants is recommended, as nutrition plays vital roles in the onset, severity and duration of depression, with poor nutrition contributing to its pathogenesis. This review discusses the nutritional potential of utilizing omega-3 fatty acids, proteins, vitamins, minerals and herbs or their phytochemicals in the management of depression, with the aim of reducing the depression burden. A literature search for empirical data was conducted in books and journals in databases including but not limited to PubMed, Scopus, Science Direct, Web of Science and Google Scholar; records that might contain discussions of sampling were sought, their full texts obtained, and searched for relevant content to determine eligibility. Omega-3 fatty acids and amino acids had significant positive anti-depression outcomes, while vitamins and minerals, although essential, enhanced omega-3 fatty acid and amino acid activities. Some herbs, either as whole extracts or as their phytochemicals/metabolites, had significant positive anti-depression efficacy. Nutrition, through the application of necessary food classes or herbs as well as their phytochemicals, may go a long way toward effectively managing depression. This would therefore provide an inexpensive, natural, and non-invasive therapeutic means with reduced adverse effects that can also be applied alongside clinical management. This nutritional strategy should be given more attention in research, assessment and treatment for those with depression and other mental illnesses in low-income countries, especially in Africa. abstract_id: PUBMED:21750622 The increasing burden of depression. Recent epidemiological surveys conducted in general populations have found that the lifetime prevalence of depression is in the range of 10% to 15%. Mood disorders, as defined by the World Mental Health and the Diagnostic and Statistical Manual of Mental Disorders, 4th edition, have a 12-month prevalence which varies from 3% in Japan to over 9% in the US. A recent American survey found the prevalence of current depression to be 9% and the rate of current major depression to be 3.4%. All studies of depressive disorders have stressed the importance of the mortality and morbidity associated with depression. The mortality risk for suicide in depressed patients is more than 20-fold greater than in the general population. Recent studies have also shown the importance of depression as a risk factor for cardiovascular death. The risk of cardiac mortality after an initial myocardial infarction is greater in patients with depression and related to the severity of the depressive episode. Greater severity of depressive symptoms has been found to be associated with significantly higher risk of all-cause mortality including cardiovascular death and stroke. In addition to mortality, functional impairment and disability associated with depression have been consistently reported.
Depression increases the risk of decreased workplace productivity and absenteeism, resulting in lowered income or unemployment. Absenteeism and presenteeism (being physically present at work but functioning suboptimally) have been estimated to result in a loss of $36.6 billion per year in the US. Worldwide projections by the World Health Organization for the year 2030 identify unipolar major depression as the leading cause of disease burden. This article is a brief overview of how depression affects the quality of life of the subject and is also a huge burden for both the family of the depressed patient and society at large. abstract_id: PUBMED:31408359 Training in improvisation techniques helps reduce caregiver burden and depression: Innovative Practice. This study measured outcomes of a novel pilot program designed to teach improvisation skills to caregivers of family members with dementia. Fifteen caregivers completed questionnaires measuring changes in their perception of burden (Zarit Burden Interview), depression (Beck Depression Inventory), their cared-for person's neuropsychiatric symptoms (Neuropsychiatric Inventory Questionnaire), and experiences related to caregiving. Caregivers' depressive symptoms and sense of burden significantly decreased after completing the six-week program. Caregivers reported that their loved ones' neuropsychiatric symptoms increased during the course of the intervention, though associated distress did not also increase. The Improv for Care program shows promise as an intervention for caregivers to improve stress, mood, and coping skills. abstract_id: PUBMED:28092207 Healthcare burden of depression in adults with arthritis. Introduction: Arthritis and depression are two of the top disabling conditions. When arthritis and depression exist in the same individual, they can interact with each other negatively and pose a significant healthcare burden on the patients, their families, payers, healthcare systems, and society as a whole. Areas covered: The primary objective of this review is to summarize the literature, identify knowledge gaps, and discuss the challenges in estimating the healthcare burden of depression among individuals with arthritis. Electronic literature searches were performed on PubMed, Embase, EBSCOhost, Scopus, the Cochrane Library, and Google Scholar to identify relevant studies. Expert Commentary: Our review revealed that the prevalence of depression varied depending on the definition of depression, type of arthritis, tools and threshold points used to identify depression, and the country of residence. Depression exacerbated arthritis-related complications as well as pain and was associated with poor health-related quality of life, disability, mortality, and high financial burden. There were significant knowledge gaps in estimates of incident depression rates, depression-attributable disability, healthcare utilization, and direct and indirect healthcare costs among individuals with arthritis. abstract_id: PUBMED:38131143 Spirituality moderates the relationship between cancer caregiver burden and depression. Objectives: Cancer has become a chronic disease that requires a considerable amount of informal caregiving, often quite burdensome to family caregivers. However, the influence of spirituality on the caregivers' burden and mental health outcomes has been understudied.
This study aimed to examine how caregiver burden, spirituality, and depression change during cancer treatment and to investigate the moderating role of spirituality in the relationship between caregiver burden and depression for a sample of caregivers of persons with cancer. Methods: This secondary analysis used a longitudinal design employing 3 waves of data collection (at baseline, 3 months, and 6 months). Family caregivers completed the Caregiver Reaction Assessment, Spiritual Perspective Scale, and the PROMIS® depression measure. Linear mixed model analyses were used, controlling for pertinent covariates. Results: Spirituality, total caregiver burden, and depression remained stable over 6 months. More than 30% of the caregivers had mild to severe depressive symptoms at the 3 time points. There was evidence of overall burden influencing depression. Of note was a protective effect of caregivers' spirituality on the relationship between depression and caregiver burden over time (b = -1.35, p = .015). The lower the spirituality, the stronger the relationship between depression and burden, especially regarding the subscales of schedule burden, financial burden, and lack of family support. Significance Of Results: Spirituality was a significant resource for coping with caregiving challenges. This study suggests that comprehensive screening and spiritual care for cancer caregivers may improve their cancer caregiving experience and possibly influence the care recipients' health. abstract_id: PUBMED:35253231 A new way to measure partner burden in depression: Construction, validation, and sensitivity to change of the partner burden in depression questionnaire. Depression occurs in an interpersonal dynamic, and living with a depressed person can lead to a significant burden on the partner. Instruments measuring burden do not address couples and often measure caregiving for individuals with schizophrenic disorders. The partner burden in depression (PBD) questionnaire is a new instrument measuring PBD by asking individuals (1) which symptoms they can observe in their depressed partners and (2) to what degree this burdens them. Hence, PBD combines measuring the awareness of observed depressive symptoms and the resulting burden. Additionally, it addresses aspects unique to couple relationships. Our German validation confirmed a one-factor model with 12 items. The PBD had good psychometric properties and was sensitive to change. Partner burden predicted self-reported depressive symptoms (PHQ-9) over time. PBD is short, easily applicable in research and practice, and can add to the understanding of partner effects in depression. abstract_id: PUBMED:35337783 Burden and depression among informal caregivers of visually impaired patients in Mexico. Background: The needs of informal caregivers who provide care to family relatives with visual impairment are often neglected, resulting in burden and depression. Objective: To determine the degree of burden and the prevalence of major depression experienced by caregivers, defined as non-paid family relatives, of legally blind individuals in a Mexican population. Methods: Observational, single-center, cross-sectional study in adults providing care to their family relatives with visual impairment (visual acuity ≤ 20/200 in the best eye for at least 3 months). According to visual impairment degree, care provided included activities of daily living (ADL) and instrumental ADL.
Burden of care was evaluated with the Zarit Burden Interview (ZBI)-22, and the prevalence of major depression was determined by the Patient Health Questionnaire (PHQ)-9. Results: 115 patients and 115 caregivers were included. Male caregivers had significantly higher ZBI-22 (28.7 ± 15.5 vs. 19.2 ± 12.6, p = 0.001) and PHQ-9 (10.0 ± 5.5 vs. 5.3 ± 5.1, p < 0.001) scores than females. Likewise, parent caregivers of adult children and the hours of daily care were significantly associated with higher burden and depression scores. A significant linear correlation between ZBI-22 and PHQ-9 scores in caregivers was also found (r = 0.649, p < 0.001). Conclusions: Male caregivers, parent caregivers of adult children, and caregivers providing greater hours of care were at higher risk of burden and depression. Upon diagnosis of visual impairment, adults providing care to visually impaired family relatives should be screened for burden and depression and referred to a mental health specialist when necessary. Tailored interventions targeting the caregivers' needs are required to reduce burden and depression. abstract_id: PUBMED:25533912 The prevalence and burden of bipolar depression. Background: Bipolar disorder is characterized by debilitating episodes of depression and mood elevation (mania or hypomania). For most patients, depressive symptoms are more pervasive than mood elevation or mixed symptoms, and thus have been reported in individual studies to impose a greater burden on affected individuals, caregivers, and society. This article reviews and compiles the literature on the prevalence and burden of syndromal as well as subsyndromal presentations of depression in bipolar disorder patients. Methods: The PubMed database was searched for English-language articles using the search terms "bipolar disorder," "bipolar depression," "burden," "caregiver burden," "cost," "costs," "economic," "epidemiology," "prevalence," "quality of life," and "suicide." Search results were manually reviewed, and relevant studies were selected for inclusion as appropriate. Additional references were obtained manually from reviewing the reference lists of selected articles found by computerized search. Results: In aggregate, the findings support the predominance of depressive symptoms compared with mood elevation/mixed symptoms in the course of bipolar illness, and thus an overall greater burden in terms of economic costs, functioning, caregiver burden, and suicide. Limitations: This review, although comprehensive, provides a study-wise aggregate (rather than a patient-wise meta-analytic) summary of the relevant literature on this topic. Conclusion: In light of its pervasiveness and prevalence, more effective and aggressive treatments for bipolar depression are warranted to mitigate its profound impact upon individuals and society. abstract_id: PUBMED:29213930 Burden, anxiety and depression in caregivers of Alzheimer patients in the Dominican Republic. Alzheimer's disease (AD) has a major impact by limiting the ability to live independently. This condition of dependency involves all members of the family, particularly those who take direct care of patients. The changes that take place in caregivers' lives may alter their health and have an effect on the care of the sick. Objective: To determine the presence of burden, anxiety and depression in caregivers of Alzheimer's patients.
Method: A descriptive cross-sectional study was performed in 67 family caregivers from the Alzheimer's Clinic Research Unit, Memory and Alzheimer, in the city of Santiago, Dominican Republic. Caregivers were evaluated for burden intensity with the Zarit scale and for both depression and anxiety using the respective Hamilton scales. Descriptive statistical analysis and Pearson correlation were used. Results: 84% of caregivers were female, and 52% were older than 50 years. A total of 36% exhibited caregiver burden; 19% anxiety symptoms; and 43% depressive symptoms. No statistically significant association was found for age, sex or number of hours of care. A significant association was found in the Pearson correlation coefficient between caregiver burden, anxiety and depression. Conclusion: Caregiver burden was associated with anxiety and depression. It is important for health professionals to include caregiver assessments in the treatment protocols of dementia. Policy should include support programs for carers. abstract_id: PUBMED:25947182 Depression and caregiving burden in families of patients with amyotrophic lateral sclerosis. Purpose: The purpose of this study was to describe depression, caregiving burden and the correlation of the two variables in the families of patients with amyotrophic lateral sclerosis (ALS) and to clarify factors predicting caregiving burden. Methods: A descriptive and cross-sectional study was conducted with 139 family members who provided care to patients with ALS. The characteristics of patients and families, the Korean-Beck Depression Inventory (K-BDI), the Korean version of the Zarit Burden Interview (K-ZBI) and the Korean-Amyotrophic Lateral Sclerosis Functional Rating Scale - Revised (K-ALSFRS-R) were used as study measures. Results: The mean score for K-BDI was 19.39 out of 63, suggesting sub-clinical depression, and 38.2% of the family members exhibited depression. The mean score for K-ZBI was 66.03 out of 88. The predictors for K-ZBI were K-BDI, age of family member, length of time spent per day in caring, relationship to patient and K-ALSFRS-R. Conclusion: The results of this study suggest that levels of depression and caregiving burden are high among family members caring for patients with ALS. As depression is associated with caregiving burden, screening and emotional support should be provided to reduce the burden of care for these families. Support programs to alleviate the care burden are also needed, considering family demographics, time spent per day in caregiving and K-ALSFRS-R. Answer: Yes, we can reduce the burden of depression through various strategies. Nutritional therapy is one approach that can be particularly beneficial in low-income countries, where the economic burden of managing depression can be overwhelming. Nutritional strategies that include omega-3 fatty acids, proteins, vitamins, minerals, and herbs or their phytochemicals have shown potential in managing depression and may provide an inexpensive, natural, and non-invasive means of therapy with reduced adverse effects. These can be used alongside clinical management to effectively manage depression (PUBMED:34939062). Training in improvisation techniques has also been shown to help reduce caregiver burden and depression. A study found that caregivers of family members with dementia experienced a significant decrease in depressive symptoms and sense of burden after completing a six-week program that taught improvisation skills (PUBMED:31408359).
Moreover, spirituality has been identified as a significant resource for coping with caregiving challenges. A study found that spirituality moderated the relationship between cancer caregiver burden and depression, suggesting that comprehensive screening and spiritual care for cancer caregivers may improve their caregiving experience and possibly influence the care recipients' health (PUBMED:38131143). Additionally, the development of specific instruments like the Partner Burden in Depression (PBD) questionnaire can help measure and address the burden experienced by partners of individuals with depression. This tool combines the measurement of awareness of observed depressive symptoms and the resulting burden, which can be useful in both research and practice (PUBMED:35253231). Furthermore, addressing the needs of informal caregivers, such as those providing care to visually impaired patients, can reduce their risk of burden and depression. Tailored interventions targeting caregivers' needs are required to alleviate their burden and improve mental health outcomes (PUBMED:35337783). Overall, the burden of depression can be reduced by addressing the multifaceted aspects of the condition, including nutritional, psychological, spiritual, and social support for both patients and their caregivers.
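The moderation finding cited above (PUBMED:38131143; a burden × spirituality interaction, b = -1.35, in a linear mixed model over three waves) is conventionally estimated with an interaction term and a random intercept per caregiver. The sketch below is only schematic: the file name and column names are hypothetical, not taken from the study.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per caregiver per wave
# (columns: caregiver_id, wave, burden, spirituality, depression).
df = pd.read_csv("caregiver_waves.csv")

# Random intercept per caregiver; the burden:spirituality term carries the
# moderation effect (a negative coefficient means spirituality buffers the link).
model = smf.mixedlm("depression ~ burden * spirituality + wave",
                    data=df, groups=df["caregiver_id"])
print(model.fit().summary())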
Instruction: Strictosidine activation in Apocynaceae: towards a "nuclear time bomb"? Abstracts: abstract_id: PUBMED:20723215 Strictosidine activation in Apocynaceae: towards a "nuclear time bomb"? Background: The first two enzymatic steps of the monoterpene indole alkaloid (MIA) biosynthetic pathway are catalysed by strictosidine synthase (STR), which condenses tryptamine and secologanin to form strictosidine, and by strictosidine beta-D-glucosidase (SGD), which subsequently hydrolyses the glucose moiety of strictosidine. The resulting unstable aglycon is rapidly converted into a highly reactive dialdehyde, from which more than 2,000 MIAs are derived. Many studies were conducted to elucidate the biosynthesis and regulation of pharmacologically valuable MIAs such as vinblastine and vincristine in Catharanthus roseus or ajmaline in Rauvolfia serpentina. However, very few reports focused on the MIA physiological functions. Results: In this study we showed that a strictosidine pool existed in planta and that the strictosidine deglucosylation product(s) was (were) specifically responsible for in vitro protein cross-linking and precipitation, suggesting a potential role for strictosidine activation in plant defence. The spatial feasibility of such an activation process was evaluated in planta. On the one hand, in situ hybridisation studies showed that CrSTR and CrSGD were coexpressed in the epidermal first barrier of C. roseus aerial organs. However, a combination of GFP-imaging, bimolecular fluorescence complementation and electromobility shift-zymogram experiments revealed that STR from both C. roseus and R. serpentina were localised to the vacuole, whereas SGD from both species were shown to accumulate as highly stable supramolecular aggregates within the nucleus. Deletion and fusion studies allowed us to identify and to demonstrate the functionality of CrSTR and CrSGD targeting sequences. Conclusions: A spatial model was drawn to explain the role of the subcellular sequestration of STR and SGD to control the MIA metabolic flux under normal physiological conditions. The model also illustrates the possible mechanism of massive activation of the strictosidine vacuolar pool upon enzyme-substrate reunion occurring during potential herbivore feeding, constituting a so-called "nuclear time bomb" in reference to the "mustard oil bomb" commonly used to describe the myrosinase-glucosinolate defence system in Brassicaceae. abstract_id: PUBMED:37316551 Nuclear bomb and public health. Since the nuclear bomb attacks against Hiroshima and Nagasaki in 1945, the world has advanced in nuclear technology. Today, a nuclear bomb could deliver a large-scale attack, at a longer range, and with much greater destructive force. People are increasingly concerned about the potential destructive humanitarian outcomes. We discuss the actual conditions that detonation of an atomic bomb would create, radiation injuries, and diseases. We also address concerns about the functionality of medical care systems and other systems that support medical systems (e.g., transport, energy and supply chain systems) following a massive nuclear attack, and whether citizens would be able to survive this. abstract_id: PUBMED:16481164 Substrate specificity of strictosidine synthase. Strictosidine synthase catalyzes a Pictet-Spengler reaction in the first step in the biosynthesis of terpene indole alkaloids to generate strictosidine.
The substrate requirements for strictosidine synthase are systematically and quantitatively examined, and the enzymatically generated compounds are processed by the second enzyme in this biosynthetic pathway. abstract_id: PUBMED:7763429 Strictosidine: from alkaloid to enzyme to gene. In this review, the elucidation of the structure of the first key alkaloidal intermediate in monoterpenoid indole alkaloid biosynthesis, 3 alpha(S)-strictosidine, is presented. The discovery of the enzyme which catalyses the stereospecific formation of this alkaloidal precursor from tryptamine and secologanin, strictosidine synthase, is also detailed. From the knowledge provided by the stereochemical structure of strictosidine and the biochemical characteristics of the biosynthetic enzyme, strictosidine synthase, a new approach to the study of monoterpenoid indole alkaloid biosynthesis was developed. Physiological studies of monoterpenoid indole alkaloid biosynthesis at the enzymic level in plants and plant cell cultures were performed, followed by the analyses of these systems at the level of molecular genetics. abstract_id: PUBMED:3661997 A spectrophotometric assay for strictosidine synthase. A spectrophotometric assay for strictosidine synthase is described. Strictosidine is extracted with ethyl acetate and, where high substrate concentrations are used, the organic extract is washed with dilute ammonia to remove coextracted secologanin; after evaporation of the solvent, the residue is heated with 5 M H2SO4 for 45 min and the A348 value is measured. Strictosidine production is calculated from the response of similarly treated standards. A minimum production of 10-25 nmol of strictosidine may be determined. The assay is demonstrated using extracts of cultured Cinchona ledgeriana cells. abstract_id: PUBMED:35732938 Pictet-Spengler Reaction for the Chemical Synthesis of Strictosidine. Strictosidine is the common biosynthetic precursor of monoterpene indole alkaloids (MIAs). A practical single-step procedure to assemble strictosidine from secologanin is described via a bioinspired Pictet-Spengler reaction. Mild conditions and purification by crystallization and flash chromatography allow access to the targeted product in fair yield. abstract_id: PUBMED:35294193 Engineered Production of Strictosidine and Analogues in Yeast. Monoterpene indole alkaloids (MIAs) are an expansive class of plant natural products, many of which have been named on the World Health Organization's List of Essential Medicines. Low production from native plant hosts necessitates a more reliable source of these drugs to meet global demand. Here, we report the development of a yeast-based platform for high-titer production of the universal MIA precursor, strictosidine. Our fed-batch platform produces ∼50 mg/L strictosidine, starting from the commodity chemicals geraniol and tryptamine. The microbially produced strictosidine was purified to homogeneity and characterized by NMR. Additionally, our approach enables the production of halogenated strictosidine analogues through the feeding of modified tryptamines. The MIA platform strain enables rapid access to strictosidine for reconstitution and production of downstream MIA natural products. abstract_id: PUBMED:2742131 Assay of strictosidine synthase from plant cell cultures by high-performance liquid chromatography. An HPLC assay is described for the enzyme strictosidine synthase in which the formation of strictosidine and the decrease of tryptamine can be followed at the same time.
In cell cultures of Catharanthus roseus, significant amounts of strictosidine glucosidase activity were detected. In crude preparations, the strictosidine synthase reaction is therefore best measured by the secologanin-dependent decrease of tryptamine. In this way, the specific synthase activity in a cell-free extract was found to be 56 pkat/mg of protein. Inclusion of 100 mM D(+)-gluconic acid-delta-lactone in the incubation mixture inhibited 75% of the glucosidase activity, without inhibiting the synthase activity. The synthase activity was readily separated from the glucosidase activity by gel filtration on Sephadex G-75 or Ultrogel AcA-44. Cell cultures of Tabernaemontana orientalis did not contain measurable amounts of strictosidine glucosidase activity. The specific strictosidine synthase activity was 130-200 pkat/mg of protein during the growth of this cell culture. Strictosidine synthase is stable at -20 degrees C for at least 2 months. abstract_id: PUBMED:18280746 3D-Structure and function of strictosidine synthase--the key enzyme of monoterpenoid indole alkaloid biosynthesis. Strictosidine synthase (STR; EC 4.3.3.2) plays a key role in the biosynthesis of monoterpenoid indole alkaloids by catalyzing the Pictet-Spengler reaction between tryptamine and secologanin, leading exclusively to 3alpha-(S)-strictosidine. The structure of the native enzyme from the Indian medicinal plant Rauvolfia serpentina represents the first example of a six-bladed four-stranded beta-propeller fold from the plant kingdom. Moreover, the architecture of the enzyme-substrate and enzyme-product complexes reveals deep insight into the active centre and mechanism of the synthase, highlighting the importance of Glu309 as the catalytic residue. The present review describes the 3D-structure and function of R. serpentina strictosidine synthase and provides a summary of the strictosidine synthase substrate specificity studies carried out in different organisms to date. Based on the enzyme-product complex, this paper goes on to describe a rational, structure-based redesign of the enzyme, which offers the opportunity to produce novel strictosidine derivatives which can be used to generate alkaloid libraries of the N-analogue heteroyohimbine type. Finally, alignment studies of functionally expressed strictosidine synthases are presented and the evolutionary aspects of sequence- and structure-related beta-propeller folds are discussed. abstract_id: PUBMED:17884630 Structure-based engineering of strictosidine synthase: auxiliary for alkaloid libraries. The highly substrate-specific strictosidine synthase (EC 4.3.3.2) catalyzes the biological Pictet-Spengler condensation between tryptamine and secologanin, leading to the synthesis of about 2000 monoterpenoid indole alkaloids in higher plants. The crystal structure of Rauvolfia serpentina strictosidine synthase (STR1) in complex with strictosidine has been elucidated here, allowing the rational site-directed mutation of the active center of STR1 and resulting in modulation of its substrate acceptance. Here, we report on the rational redesign of STR1 by generation of a Val208Ala mutant, further describing the influence on substrate acceptance and the enzyme-catalyzed synthesis of 10-methyl- and 10-methoxystrictosidines. Based on the addition of strictosidine to a crude strictosidine glucosidase preparation from Catharanthus cells, a combined chemoenzymatic approach to generating large alkaloid libraries for future pharmacological screenings is presented.
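The specific activities quoted in the assay abstracts above (e.g., 56 pkat/mg protein) are simply product formed per unit time per unit protein, with 1 katal defined as 1 mol of substrate converted per second. A back-of-the-envelope sketch of that unit arithmetic, using illustrative assay values rather than numbers taken from the papers:

# Specific activity in pkat/mg = product (pmol) / time (s) / protein (mg)
def specific_activity_pkat_per_mg(product_nmol, time_min, protein_mg):
    product_pmol = product_nmol * 1e3  # nmol -> pmol
    time_s = time_min * 60.0           # min -> s
    return product_pmol / time_s / protein_mg

# Illustrative inputs: 25 nmol strictosidine formed in 30 min by 0.25 mg protein
print(specific_activity_pkat_per_mg(25.0, 30.0, 0.25))
# ~55.6 pkat/mg, the same order as the 56 pkat/mg reported for C. roseus extracts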
Answer: The concept of strictosidine activation in Apocynaceae being compared to a "nuclear time bomb" is based on the idea that the plant has a defense mechanism involving the enzymatic conversion of strictosidine, which can be rapidly activated upon herbivore attack, much like a bomb being triggered. In the study by PUBMED:20723215, it was shown that a pool of strictosidine exists in planta and that the deglucosylation products of strictosidine are responsible for protein cross-linking and precipitation in vitro, suggesting a role in plant defense. The spatial feasibility of this activation process was evaluated in Catharanthus roseus, where strictosidine synthase (STR) and strictosidine beta-D-glucosidase (SGD) were found to be coexpressed in the epidermal first barrier of aerial organs. However, STR was localized to the vacuole, while SGD accumulated as stable supramolecular aggregates within the nucleus. The subcellular sequestration of STR and SGD is thought to control the metabolic flux of monoterpene indole alkaloids (MIAs) under normal conditions. Upon herbivore feeding, the reunion of enzyme and substrate could trigger a massive activation of the strictosidine vacuolar pool, constituting the "nuclear time bomb" analogy. This mechanism is likened to the "mustard oil bomb" defense system in Brassicaceae, where the myrosinase-glucosinolate system is activated upon tissue damage. The term "nuclear time bomb" in this context is metaphorical and refers to the rapid and potent defense response rather than an actual nuclear reaction. It contrasts with the literal implications of a nuclear bomb discussed in PUBMED:37316551, which addresses the public health concerns of a nuclear bomb attack and its aftermath. The "nuclear time bomb" in Apocynaceae is a biological defense strategy, whereas the latter is a human-made weapon with catastrophic consequences.
Instruction: Is a twelve-percent cesarean section rate at a perinatal center safe? Abstracts: abstract_id: PUBMED:8817435 Is a twelve-percent cesarean section rate at a perinatal center safe? Objective: Our purpose was to examine the pregnancy and neonatal outcomes at a perinatal center with a consistent cesarean section rate approximately half the national average. Study Design: Ten years of vaginal delivery and cesarean section rates (1983 to 1992) and 5 years of mortality and morbidity outcomes (1988 to 1992) were compared with national health statistics and national health objectives. Results: The cesarean section rate during the 10-year period ranged from 10% to 15%, with an average of 12.5%. The cesarean section rate for the 5 years during which maternal and neonatal outcome data were obtained was 11.3%. The forceps and vacuum extraction rates during that time were consistently less than 5%. The nurse-midwifery service delivered approximately 36% of all babies during this period. In an examination of maternal mortality, we discovered only one death during the 5-year interval. The rate of maternal admission to the intensive care unit after delivery was 0.2%. The percent of women who received blood transfusions was 1%. The average length of stay for both vaginal and cesarean section deliveries declined steadily across the whole interval and was 2.5 days for a vaginal delivery and 5.5 days for a cesarean section. An examination of neonatal morbidity and mortality revealed an admission rate to the intensive care unit of less than 6%. The distribution of Apgar scores indicated less than 4% of neonates had scores ≤ 3 at 1 minute; 0.5% had scores ≤ 3 at 5 minutes. The neonatal death rate was 614 per 100,000 births, and fetal mortality was 729 per 100,000 births from 1988 to 1992. Conclusions: The lowest safe cesarean section rate is not known; it will undoubtedly vary with location and patient mix. We believe that we have been able to establish a rate of cesarean section one half of the national average with good maternal and fetal outcomes. This has been accomplished through a vigorous prenatal care program, excellent perinatal and infertility services, a vigorous program of vaginal birth after cesarean section, and a competent nurse-midwifery service. abstract_id: PUBMED:21656998 Perinatal indicators of the Zilina region in the Slovak Republic during the period 2000-2009. Objective: To provide health care providers, patients, and the general public with objective data on improvements in perinatal health care indicators for the Zilina district, in the northern part of the Slovak Republic. Setting: Martin perinatology center (Department of Gynecology and Obstetrics, Department of Neonatology, Jessenius Medical Faculty, Comenius University, Martin, Slovak Republic). Subject And Method: Retrospective analysis of selected main perinatal outcomes for a period of 10 years (the last decade) in the Zilina region, with a comparison between the regional data and corresponding data for the Slovak Republic. Results: During the analyzed period we observed a significant decrease in perinatal mortality (PM), with the lowest rate of 3.1 per thousand in 2009 and a decade average of 4.13 per thousand. A more favorable trend in the PM drop was observed when PM was analyzed separately from congenital disorders: for 2009 it was 2.9 per thousand lower than the national rate and 1 per thousand lower than the crude PM of the Zilina region.
Furthermore, the sophisticated clinical management and improved technical equipment led to a decrease in other main perinatal indicators (e.g., a drop in the decade-average frequency of preterm labors to 5.4%; early neonatal mortality to 2.14 per thousand; the stillbirth rate to 0.327 per thousand; a decrease in the neonatal asphyxia rate with pH <7.15 to 0.01% in 2009, with a decade average of 0.08%; and an increased proportion of in-utero transports, with a 5-year average of 90.9%). In contrast, the cesarean section rate doubled over the observed period (15.7% vs. 32.9%). Conclusions: Our results showed that the symbiosis of health care organization, basic and applied clinical research, improved technical equipment, and the introduction of the WHO guidelines into obstetrical practice prepared the clinical background that led to the immense improvement in perinatal outcomes in the northern part of Slovakia during the last decade. abstract_id: PUBMED:28775474 INCREASING CAESAREAN SECTION AND PERINATAL OUTCOME. An analysis of births by caesarean section over ten years at a service hospital was carried out to identify the benefit in terms of reduction in perinatal mortality over the period without an increase in maternal mortality and morbidity. An increase of 43.25 per cent in the caesarean section rate was observed. Since 1986 there had been no significant change in the indications for caesarean sections or obstetrical care in terms of man and machine modernisation at this hospital. Newborn care in this hospital is supervised by an obstetrician and a medical specialist. However, a definite reduction in the perinatal mortality rate by 59.68 per cent was noted, with no maternal mortality in caesarean cases. This retrospective study showed that a judicious increase of caesarean sections could improve perinatal outcome. abstract_id: PUBMED:2198508 Current trends in perinatal mortality statistics at the New Jersey Obstetric Center. The study presents an overview of the changes in perinatal mortality rates at the Statewide Perinatal Center of New Jersey during the past decades. According to the data, the increase in the rate of cesarean sections from 4.5 percent to 17 percent, and the comparable reduction of the rates of manipulative intrapartum and extraction procedures, contributed significantly to the decrease of the perinatal mortality rates from 51/1000 to 17/1000 between 1971 and 1983. Of the new technical tools, those utilized for the evaluation of fetal well-being antepartum appeared to be more useful than those used intrapartum. On account of the high prevalence of genital infections in the population, the recent acceptance in the service of the use of invasive intrapartum technology appears to have impacted unfavorably upon the perinatal mortality trends. The increased rate of births of premature babies, the widespread abuse of habit-forming drugs in the community, and the routine use of procedures requiring artificial rupture of the membranes probably all contributed to the rapid increase of the perinatal mortality rate in the Center from 15/1000 in 1986 to 28/1000 in 1988. It is concluded that perinatal care is a complex medical and social task. The overall result of the relevant efforts depends to a great extent upon the social environment, and the moral standing, educational level and motivation of the recipients.
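The percent figures in these reports (a 43.25 per cent rise in the caesarean rate, a 59.68 per cent fall in perinatal mortality, 51/1000 down to 17/1000) all reduce to the same relative-change arithmetic, (new - old) / old. A minimal sketch using the rates quoted above:

def percent_change(old, new):
    # Relative change between two rates, in percent
    return (new - old) / old * 100.0

print(percent_change(51.0, 17.0))  # about -66.7: the 1971-1983 perinatal mortality drop per 1000 births
print(percent_change(15.7, 32.9))  # about +109.6: the Zilina caesarean rate roughly doubling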
abstract_id: PUBMED:22108042 The safe motherhood referral system to reduce cesarean sections and perinatal mortality - a cross-sectional study [1995-2006]. Background: In 2000, the eight Millennium Development Goals (MDGs) set targets for reducing child mortality and improving maternal health by 2015. Objective: To evaluate the results of a new education and referral system for antenatal/intrapartum care as a strategy to reduce the rates of Cesarean sections (C-sections) and maternal/perinatal mortality. Methods: Design: Cross-sectional study. Setting: Department of Gynecology and Obstetrics, Botucatu Medical School, Sao Paulo State University/UNESP, Brazil. Population: 27,387 delivering women and 27,827 offspring. Data Collection: maternal and perinatal data between 1995 and 2006 at the major level III and level II hospitals in Botucatu, Brazil, following initiation of a safe motherhood education and referral system. Main Outcome Measures: Yearly rates of C-sections, and maternal (/100,000 LB) and perinatal (/1000 births) mortality rates at both hospitals. Data Analysis: Simple linear regression models were fitted to estimate the referral system's annual effects on the total number of deliveries and the C-section and perinatal mortality ratios in the two hospitals. The linear regressions were assessed by residual analysis (Shapiro-Wilk test), and the influence of possible conflicting observations was evaluated by a diagnostic test (leverage), with p < 0.05. Results: Over the time period evaluated, the overall C-section rate was 37.3%; there were 30 maternal deaths (maternal mortality ratio = 109.5/100,000 LB) and 660 perinatal deaths (perinatal mortality rate = 23.7/1000 births). The C-section rate decreased from 46.5% to 23.4% at the level II hospital while remaining unchanged at the level III hospital. The perinatal mortality rate decreased from 9.71 to 1.66/1000 births and from 60.8 to 39.6/1000 births at the level II and level III hospitals, respectively. Maternal mortality ratios were 16.3/100,000 LB and 185.1/100,000 LB at the level II and level III hospitals. There was a shift from direct to indirect causes of maternal mortality. Conclusions: This safe motherhood referral system was a good strategy for reducing perinatal mortality and direct causes of maternal mortality and for decreasing the overall rate of C-sections. abstract_id: PUBMED:16946228 Intrapartum electronic fetal heart rate monitoring and the prevention of perinatal brain injury. Objective: Electronic fetal heart rate monitoring (EFM) is the most widely used method of intrapartum surveillance, and our objective is to review its ability to prevent perinatal brain injury and death. Data Sources: Studies that quantified intrapartum EFM and its relation to specific neurologic outcomes (seizures, periventricular leukomalacia, cerebral palsy, death) were eligible for inclusion. MEDLINE was searched from 1966 to 2006 for studies that examined the relationship between intrapartum EFM and perinatal brain injury using these MeSH and text words: "cardiotocography," "electronic fetal monitoring," "intrapartum fetal heart rate monitoring," "intrapartum fetal monitoring," and "fetal heart rate monitoring." Methods Of Study Selection: This search strategy identified 1,628 articles, and 41 were selected for further review. Articles were excluded for the following reasons: in case reports, letters, commentaries, and review articles, intrapartum EFM was not quantified, or specific perinatal neurologic morbidity was not measured.
Three observational studies and a 2001 meta-analysis of 13 randomized controlled trials were selected for determination of the effect of intrapartum EFM on perinatal brain injury. Tabulation, Integration, And Results: Electronic fetal monitoring was introduced into widespread clinical practice in the late 1960s based on retrospective studies comparing its use to historical controls where auscultation was performed in a nonstandardized manner. Case-control studies have shown correlation of EFM abnormalities with umbilical artery base excess, but EFM was not able to identify cerebral white matter injury or cerebral palsy. Of 13 randomized controlled trials, one showed a significant decrease in perinatal mortality with EFM compared with auscultation. Meta-analysis of the randomized controlled trials comparing EFM with auscultation has found an increased incidence of cesarean delivery and decreased neonatal seizures but no effect on the incidence of cerebral palsy or perinatal death. Conclusion: Although intrapartum EFM abnormalities correlate with umbilical cord base excess and its use is associated with decreased neonatal seizures, it has no effect on perinatal mortality or pediatric neurologic morbidity. abstract_id: PUBMED:23608346 Evaluation of a perinatal network using the first certificates of health. Objective: To describe the methodology for continuous reporting of perinatal indicators in the Maternité en Yvelines et Pays Associés (MYPA) network, and the main results of its evaluation. To discuss the implications for practice in a perinatal network. Material And Methods: The CoNaissance 78 program is a collaboration between the MYPA network, the Conseil général des Yvelines, ARS Île-de-France and the U953 Inserm unit. Continuous recording of data is produced using the first certificate of health (PCS) of infants born in the network maternities, an additional health certificate including data about severe maternal morbidity, perineal tears and episiotomies, and a stillbirth certificate including all cases of fetal deaths and medical terminations of pregnancy from 22 weeks of gestation. The description of the population and obstetric practices, with comparison between the network maternities, covers the period from 2008 to 2011. Results: The analysis includes 79,232 births. The variables used had a missing data rate below 5%. The mean maternal age at delivery was 30.9 years, with women aged 35 years or above accounting for 23.2% of deliveries (from 17.1 to 32.8% according to the maternity, P<0.001). The nullipara rate was 42.5% (from 36.6 to 50% according to the maternity, P<0.001) and the multiple pregnancy rate was 1.8% (from 0.3 to 3.4% according to the maternity, P<0.001). The mode of onset of labor was spontaneous in 66.1% of cases (from 55.5 to 72.9% according to the maternity, P<0.001), induced in 21.5% of cases (from 16.9 to 30.8% according to the maternity, P<0.001), and a planned cesarean section was performed in 12.4% of cases (from 8.4 to 19.6% according to the maternity, P<0.001). The global mean rate of cesarean sections was 24.3% (from 18.4 to 29.6% according to the maternity, P<0.001). The cesarean section rate in a selected low-risk group was 14.7% (from 11.4 to 20.2% [P<0.001] according to the maternity). The episiotomy rate was 26.1% (from 16.3 to 43.6% [P<0.001] according to the maternity). The rate of very preterm neonates born alive inside a tertiary center was 70.8%.
Conclusion: This program made it possible to observe a large disparity in practices, and highlighted significant shortcomings in the organization of in utero transfers to the tertiary center for very preterm births. abstract_id: PUBMED:21621896 Refusal of emergency caesarean delivery in cases of non-reassuring fetal heart rate is an independent risk factor for perinatal mortality. Objective: To assess pregnancy outcome in women who initially refused medically indicated caesarean delivery (CD) in cases of non-reassuring fetal heart rate (FHR) patterns. Study Design: A retrospective cohort study, comparing patients who refused and did not refuse caesarean delivery (CD) due to non-reassuring FHR tracings, was conducted. Deliveries occurred between the years 1988 and 2009 in a tertiary medical center. Multivariate analysis was performed to control for confounders. Results: Out of 10,944 women who were advised to undergo CD due to non-reassuring FHR patterns, 203 women initially refused CD. Women refusing medical intervention tended to be older (30.6 ± 6.9 vs. 28.29 ± 6.1, P<0.001) and of higher parity (46.8% vs. 19.9% had more than 5 deliveries; P<0.001) compared with the comparison group. Refusal of CD was significantly associated with adverse perinatal outcome. Using a multiple logistic regression model controlling for confounders such as maternal age, refusal of treatment was found to be an independent risk factor for perinatal mortality (adjusted OR=3.3, 95% CI 1.8-5.9, P<0.001). A non-significant trend towards higher rates of adverse perinatal outcome was found when refusal latency time was longer than 20 min (OR=2, 95% CI 0.36-11.95; P=0.29). Conclusion: Refusal of CD in cases of non-reassuring FHR tracings is an independent risk factor for perinatal mortality. abstract_id: PUBMED:1575836 Survival prospects of extremely preterm infants: a 10-year experience in a single perinatal center. During a 10-year period, 1977 to 1986, 233 (53%) of 442 inborn live births between 23 and 28 weeks' gestation survived; their 1-year survival rate was 7% at 23 weeks, 30% at 24 weeks, 31% at 25 weeks, 55% at 26 weeks, 67% at 27 weeks, and 71% at 28 weeks. No significant change in survival rate was observed over the years. Twelve percent of pregnancies and 20% of infants were multiple gestations. Singleton births had significantly higher survival rates compared with multiple births (58% versus 41%). The obstetric intervention rate, as measured by the frequency of cesarean section, increased significantly over the years: from 15% in 1977-1978 to 33% in 1985-1986. The neonatal intervention rate, as measured by the frequency of live births offered neonatal intensive care, remained unchanged. Ten percent were not treated: 4% had major malformations and 6% were considered "nonviable." Active perinatal management, which assumed fetal-neonatal viability, accounted for better survival rates compared with centers with a more passive management policy. Information on survival based on gestational cohorts plays an important role in helping obstetricians, neonatologists, and parents make appropriate management decisions. abstract_id: PUBMED:23972375 Comparison of outcomes of perinatal care in Slovakia and the UK. Objective: To investigate whether there are differences in maternal and perinatal outcomes between Slovakia and the UK, and whether any observed variations can be attributed to differences in perinatal care.
Methods: Data on outcomes of perinatal care in Slovakia and the UK between 2006 and 2010 were compared. Perinatal mortality figures included stillbirths weighing 1000 g or more and early neonatal deaths. Results: In Slovakia, the perinatal mortality rate was significantly higher than that in the UK (RR 1.12; 95% CI, 1.06-1.18). Cesarean delivery was significantly more frequent in Slovakia (RR 1.05; 95% CI, 1.05-1.06); instrumental vaginal delivery was less frequent (ventouse delivery, RR 0.20; 95% CI, 0.19-0.21; forceps delivery, RR 0.09; 95% CI, 0.09-0.10). Episiotomy and peripartum hysterectomy were performed more often in Slovakia (episiotomy, RR 4.10; 95% CI, 4.07-4.12; peripartum hysterectomy, RR 2.02; 95% CI, 1.65-2.47). The incidence of eclampsia was significantly higher in Slovakia (RR 1.60; 95% CI, 1.26-2.04). There were no significant differences in the rates of maternal death. Conclusion: Perinatal care outcomes and intervention rates differ between Slovakia and the UK. This may be explained by differences in outcome definitions, perinatal care, and official encouragement of medical complaints. Answer: The study conducted at a perinatal center with a consistent cesarean section rate of approximately 12.5% over a 10-year period (1983 to 1992) suggests that a twelve percent cesarean section rate can be safe. The center reported good pregnancy and neonatal outcomes, with a cesarean section rate that was about half the national average during the time of the study. Maternal mortality was very low, with only one death reported in the 5-year interval. The rate of maternal admission to the intensive care unit after delivery was 0.2%, and the percent of women who received blood transfusions was 1%. Neonatal morbidity and mortality were also low, with an admission rate to the intensive care unit of less than 6%, and less than 4% of neonates had Apgar scores ≤ 3 at 1 minute; 0.5% had scores ≤ 3 at 5 minutes. The neonatal death rate was 614 per 100,000 births, and fetal mortality was 729 per 100,000 births from 1988 to 1992.
Instruction: Gonadal Shielding in Radiography: A Best Practice? Abstracts: abstract_id: PUBMED:27837123 Gonadal Shielding in Radiography: A Best Practice? Purpose: To investigate radiation dose to phantom testes with and without shielding. Methods: A male anthropomorphic pelvis phantom was imaged with thermoluminescent dosimeters (TLDs) placed in the right and left detector holes corresponding to the testes. Ten exposures were made of the pelvis with and without shielding. The exposed TLDs were packaged securely and mailed to the University of Wisconsin Calibration Laboratory for reading and analysis. Results: A t test was calculated for the 2 exposure groups (no shield and shielded) and found to be significant, F = 8.306, P < .006. A 36.4% increase in exposure to the testes was calculated when no contact shield was used during pelvic imaging. Discussion: Using a flat contact shield during imaging of the adult male pelvis significantly reduces radiation dose to the testes. Conclusion: Regardless of the contradictions in the literature on gonadal shielding, the routine practice of shielding adult male gonads during radiographic imaging of the pelvis is a best practice. abstract_id: PUBMED:29046919 Female gonadal shielding with automatic exposure control increases radiation risks. Background: Gonadal shielding remains common, but current estimates of gonadal radiation risk are lower than estimated risks to colon and stomach. A female gonadal shield may attenuate active automatic exposure control (AEC) sensors, resulting in increased dose to colon and stomach as well as to ovaries outside the shielded area. Objective: We assess changes in dose-area product (DAP) and absorbed organ dose when female gonadal shielding is used with AEC for pelvis radiography. Materials And Methods: We imaged adult and 5-year-old equivalent dosimetry phantoms using pelvis radiograph technique with AEC in the presence and absence of a female gonadal shield. We recorded DAP and mAs and measured organ absorbed dose at six internal sites using film dosimetry. Results: Female gonadal shielding with AEC increased DAP 63% for the 5-year-old phantom and 147% for the adult phantom. Absorbed organ dose at unshielded locations of colon, stomach and ovaries increased 21-51% in the 5-year-old phantom and 17-100% in the adult phantom. Absorbed organ dose sampled under the shield decreased 67% in the 5-year-old phantom and 16% in the adult phantom. Conclusion: Female gonadal shielding combined with AEC during pelvic radiography increases absorbed dose to organs with greater radiation sensitivity and to unshielded ovaries. Difficulty in proper use of gonadal shields has been well described, and use of female gonadal shielding may be inadvisable given the risks of increasing radiation. abstract_id: PUBMED:30673332 Patient Shielding in Diagnostic Imaging: Discontinuing a Legacy Practice. Objective: Patient shielding is standard practice in diagnostic imaging, despite growing evidence that it provides negligible or no benefit and carries a substantial risk of increasing patient dose and compromising the diagnostic efficacy of an image. The historical rationale for patient shielding is described, and the folly of its continued use is discussed. Conclusion: Although change is difficult, it is incumbent on radiologic technologists, medical physicists, and radiologists to abandon the practice of patient shielding in radiology. abstract_id: PUBMED:37940176 Exploring Past to Present Shielding Guidelines.
Purpose: To explore the data and supporting evidence for the 2019 statement by the American Association of Physicists in Medicine (AAPM) that recommends limits to the routine use of fetal and gonadal shielding in medical imaging. Methods: Three researchers searched 5 online databases, selecting articles from scholarly journals and radiology trade publications. Search results were filtered to include literature published from January 1, 2016, to August 9, 2022, to ensure relevance and provide historical background for the 2019 AAPM statement. Results: The use of patient shielding during medical imaging did not reduce dose, and in certain instances, increased dose received by patients during computed tomography, fluoroscopy, or dental imaging. The use of shielding interfered with technology designed to reduce patient dose, including automatic exposure control and dose modulation. Research showed that errors in shield placement were common and that shields can act as sources of infection or carriers of harmful lead dust. Discussion: In each article reviewed, a compelling case was made for discontinuing routine patient shielding during radiographic procedures. Serious opposition to the discontinuation of the shielding practice was not found. Opportunities exist for further study into technologists' and the public's understanding of the effects of radiation and technologists' compliance with new shielding policies. Conclusion: The challenges with properly using shielding, paired with recent technological advancements and a new understanding of radiation protection, have negated the need for contact shielding. This legacy practice can be discontinued in clinical settings, and educational materials for technologists and students should be updated to reflect these changes. abstract_id: PUBMED:32978986 Achievable dose reductions with gonadal shielding for children and adults during abdominal/pelvic radiographic examinations: A Monte Carlo simulation. Purpose: Recently, medical professionals have reconsidered the practice of routine gonadal shielding for radiographic examinations. The objective of this study was to evaluate the gonadal dose reduction achievable with gonadal shields in the primary beam during abdominal/pelvic radiographic examinations under ideal and non-ideal shielding placement. Methods: CT scans of CIRS anthropomorphic phantoms were used to perform voxelized Monte Carlo simulations of the photon transport during abdominal/pelvic radiographic examinations with standard filtration and 0.1 mm Cu + 1 mm Al added filtration to estimate gonadal doses for an adult, 5-yr-old, and newborn phantom with and without gonadal shields. The reduction in dose when the shields were not placed at the ideal locations was also evaluated. The ratio of the number of scattered-to-primary photons (SPR) across the anteroposterior (AP) dimension of the phantoms was also reported. Results: The simulated dose reduction with ideal shielding placement for the testes and ovaries ranged from 80% to 90% and 55% to 70%, respectively. For children, a misalignment of the shield to the gonad of 4 cm reduced the measured dose reduction to the gonads to <10%. For adults, this effect did not occur until the misalignment increased to ~6 cm. Effects of dose reduction with and without the gonadal shields properly placed were similar for standard filtration and added filtration. SPR at the level of the testes was consistently <1 for all phantoms.
SPR for ovaries was ~1.5 for the adult and 5-yr-old, and ~1 for the newborn phantom. Conclusion: Dose reduction with ideal alignment of the simulated gonadal shield to the gonads in this study was greater for the testes than the ovaries; both reductions were substantial. However, the dose reductions were greatly reduced (to <10%) for both sexes with misalignment of the gonads to the shields by 4 cm for children and 6 cm for adults. abstract_id: PUBMED:30292501 A questionnaire study of radiography educator opinions about patient lead shielding during digital projection radiography. Background: In projection radiography, lead rubber shielding has long been used to protect the gonads both within and outside the collimated field. However, the relative radio-sensitivity of the gonads is considered lower than previously, and doses from digital projection radiography are reported as being lower than in previous eras. These factors, along with technical difficulties encountered in placing lead shielding effectively, lead to varied opinions on the efficacy of such shielding in peer-reviewed literature. The current study investigated what is being taught as good practice concerning the use of lead shielding during projection radiography. Method: An online questionnaire was distributed to a purposive sample of 44 radiography educators across 15 countries, with the aim of establishing radiography educators' opinions about patient lead shielding and its teaching. Results: Of the 27 responding educators, 57% (n = 15) teach students to apply gonadal shielding across a range of radiographic examinations; only 22% (n = 6) do the same for the breast, despite respondents being aware that the breast has higher relative radio-sensitivity than the gonads. Radiation protection was the primary reason given for using shielding. Students are generally expected to apply patient lead shielding during assessments, although a small number of respondents report that students must justify whether or not to apply lead shielding. Educators generally held the opinion that no matter what they are taught, students are influenced by what they see radiographers do in clinical practice. Conclusions: The current study has not found consensus in the literature or in radiography educators' opinions concerning the use of patient lead shielding. Findings suggest that a large-scale empirical study to establish a specific evidence base for the appropriate use of lead shielding across a range of projection radiography examinations would be useful. abstract_id: PUBMED:35989157 Location of the ovaries in children and efficacy of gonadal shielding in hip and pelvis radiography. Background: Patients with hip disorders undergo multiple radiographic examinations, so gonadal radiation risk should be minimized. Inaccurate shield placement, including obscuring landmarks, has been widely reported, and some studies reported that covering the true pelvis was inappropriate for shielding young girls' ovaries. However, no reports exist on the location of the ovaries in Asian patients as identified on magnetic resonance imaging. We aimed to identify the location of the ovaries in Japanese children and assess the efficacy of gonadal shielding. Methods: Female patients aged ≤16 years who underwent magnetic resonance imaging for hip disorders and whose images displayed at least one ovary were included.
Sixty ovaries from 31 patients were classified into two age groups, <2 years and >2 years, and the ovaries' position was classified according to the following four zones on the anteroposterior pelvic radiograph: zone 1 (true pelvis) - area surrounded by the line of the anterior superior iliac spines, inner side walls of the ilium, and symphysis pubis; zone 2 - areas lateral to zone 1; zone 3 - sacral area superior to zone 1; and zone 4 - areas lateral to zone 3. The ovaries' position was analyzed according to age group. Results: Thirty-one ovaries from 16 patients were in the <2 years group, and 29 ovaries from 15 patients were in the >2 years group. In the <2 years group, 13 ovaries were in the true pelvis and 18 in the false pelvis; in the >2 years group, 27 were in the true pelvis and 2 in the false pelvis. In girls aged <2 years, most ovaries in the false pelvis were located in zone 3. Conclusions: Girls aged >2 years mostly have their ovaries in the true pelvis, and ovaries in infants tend to be located superior to the true pelvis. Covering the true pelvis is plausible for shielding ovaries. Shields should be placed slightly more cranially than the true pelvis for infants. abstract_id: PUBMED:23692195 Efficacy of gonadal shielding in pediatric pelvis X-rays. Objectives: In this study, we evaluated the efficacy of using gonadal shielding in pediatric patients. Patients And Methods: Between October 2011 and February 2012, 1137 pelvic X-rays of 675 consecutive patients (323 boys, 352 girls; mean age 6.8 years; range 6 months to 17 years) in our hospital were evaluated in terms of gonadal shielding use by a team including an orthopedist, a gynecologist and a pediatrician. Results: Gonadal shields were used in 566 (49.8%) of the 1137 pelvic X-rays, and important anatomical landmarks were left open in 506 (44.5%) of them. In 104 (9.1%) X-rays, the shields were placed in the correct position. A total of 293 (25.7%) X-rays were partially protective, while in 109 (9.6%) X-rays the shields were placed in a totally wrong position. Nineteen X-rays (3.3%) were repeated due to malposition of the gonadal shielding. In X-rays of boys, gonadal shields were used for 193 (17%); however, only 74 (6.5%) of them were placed in the correct position. In X-rays of girls, gonadal shields were used for 373 (32.8%); however, only 30 (2.6%) of them were protective. Conclusion: If we take into consideration that use of pelvic X-rays is essential and indispensable for the diagnosis of many pediatric pelvic diseases, we believe that technicians who are responsible for taking these X-rays should be better trained on the use of gonadal shields and designs of gonadal shields should be improved. abstract_id: PUBMED:7215698 46 XY gonadal dysgenesis and dysgerminoma (author's transl) A patient with gonadal dysgenesis, karyotype 46 XY and female phenotype is presented. The histological study confirmed the presence of a dysgerminoma in the left gonad. The features of the cases are discussed. abstract_id: PUBMED:38138842 Medical-Radiation-Shielding Film Fabricated by Imitating the Layered Structure Pattern of Abalone Shell and Verification of Its Shielding Effect. Radiation-shielding clothing for medical workers must be light and thin, thus ensuring flexibility. However, controlling the thickness and weight is limited by shielding performance requirements. This study aims to improve shielding performance by considering a shielding structure that mimics the internal structure of an abalone shell.
Two shields were produced: a sheet made with a carrier process using a liquid polymer and tungsten mixture, and a fillet made by compounding the same material and laminated using a heat-treatment press after the injection process. The tungsten content and thickness were the same at 85 wt% and 0.3 mm, respectively. In the high-energy region, the shielding film based on the laminated structure of abalone shells showed a shielding rate that was higher by more than 7%. Compared to that of a 0.3 mm lead plate, the shielding ratio of the shielding film was approximately 16% lower at 120 kVp, thereby confirming the radiation-shielding effect of the layered-structure shielding film. Therefore, it is concluded that the laminated structure of the shielding film, which is identical to the internal laminated structure of the abalone shell, expands the impact area of incident radiation and attenuates the energy intensity, thereby improving the medical-radiation-shielding performance. Answer: Gonadal shielding in radiography has been a common practice with the intention of reducing radiation exposure to the gonads during imaging procedures. However, recent studies and guidelines have brought this practice into question. Research has shown that using a flat contact shield during imaging of the adult male pelvis significantly reduces radiation dose to the testes, with a 36.4% increase in exposure to the testes calculated when no contact shield was used (PUBMED:27837123). This suggests that shielding can be beneficial in certain circumstances. On the other hand, for female patients, gonadal shielding with automatic exposure control (AEC) can actually increase the radiation dose to other organs, such as the colon and stomach, as well as to ovaries outside the shielded area. The use of a female gonadal shield was found to increase the dose-area product (DAP) significantly, indicating an increased absorbed organ dose at unshielded locations (PUBMED:29046919). Furthermore, the practice of patient shielding in general has been challenged by recent evidence and professional guidelines. It has been argued that patient shielding provides negligible or no benefit and can increase patient dose and compromise the diagnostic efficacy of an image (PUBMED:30673332). The American Association of Physicists in Medicine (AAPM) has recommended limits to the routine use of fetal and gonadal shielding in medical imaging, citing that shielding can interfere with technology designed to reduce patient dose and can be a source of infection or lead dust (PUBMED:37940176). A Monte Carlo simulation study evaluating the gonadal dose reduction achievable with gonadal shields found that while there can be a substantial dose reduction with ideal shield placement, misalignment of the shield significantly reduces its effectiveness, especially in children (PUBMED:32978986). Educators' opinions on the use of lead shielding during projection radiography are varied, and there is no consensus in the literature on the efficacy of such shielding (PUBMED:30292501). Additionally, the location of ovaries in children and the efficacy of gonadal shielding in hip and pelvis radiography have been studied, suggesting that shield placement should be adjusted based on the patient's age to be effective (PUBMED:35989157). In conclusion, while gonadal shielding has been traditionally considered a best practice, recent evidence and expert opinions suggest that its routine use should be re-evaluated. 
The benefits of shielding must be weighed against the potential risks of increased radiation dose to other organs and the challenges in shield placement. The practice may still have a role in specific situations, but it is no longer universally recommended as a best practice in radiography.
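The phantom study cited above (PUBMED:27837123) compared testicular dose across ten shielded and ten unshielded exposures with a significance test and reported the percent change in dose. As an illustrative sketch only, the snippet below runs an independent two-sample t-test (assuming SciPy is available) on invented TLD readings and reports the percent increase in mean dose without shielding; the numbers are fabricated and do not come from the study.

```python
from statistics import mean
from scipy import stats  # SciPy's independent two-sample t-test

# Hypothetical TLD dose readings (arbitrary units), ten exposures each
shielded   = [0.80, 0.78, 0.83, 0.79, 0.81, 0.77, 0.82, 0.80, 0.79, 0.81]
unshielded = [1.10, 1.07, 1.12, 1.09, 1.11, 1.08, 1.13, 1.06, 1.10, 1.09]

t_stat, p_value = stats.ttest_ind(unshielded, shielded)
pct_increase = 100 * (mean(unshielded) - mean(shielded)) / mean(shielded)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Mean dose increase without shielding: {pct_increase:.1f}%")
```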
Instruction: Reduction in renal blood flow following acute increase in the portal pressure: evidence for the existence of a hepatorenal reflex in man? Abstracts: abstract_id: PUBMED:9203948 Reduction in renal blood flow following acute increase in the portal pressure: evidence for the existence of a hepatorenal reflex in man? Background: To investigate the relation between changes in portal haemodynamics and renal blood flow (RBF) in patients with cirrhosis. Patients/methods: Twenty patients with cirrhosis and transjugular intrahepatic portosystemic stent-shunts were divided into two groups which were well matched. At routine portography, either changes in unilateral RBF (group I) or changes in cardiac output (group II) before and after shunt occlusion were studied. Blood was obtained from the renal and systemic circulations for the measurement of neurohumoral factors before and after shunt occlusion in group I patients. Results: After shunt occlusion, there was a progressive reduction in unilateral RBF from a mean (SD) of 289 (32) ml/min to 155 (25) ml/min (-43.5%) (p < 0.001). These changes correlated significantly with the changes in the portal atrial gradient (p < 0.001). There was no significant change in heart rate, mean arterial pressure and right atrial pressure. No significant changes were found in the concentrations of the various neurohumoral factors measured. There was a less notable but significant reduction in the cardiac output (-10.9%) (p = 0.02) unaccompanied by significant reduction in the pulmonary capillary wedge pressure or mean arterial pressure. Conclusions: These results suggest the existence of a hepatorenal reflex in man which is important in the regulation of RBF, although other mechanisms may also be contributory. abstract_id: PUBMED:11786973 Decreases in portal flow trigger a hepatorenal reflex to inhibit renal sodium and water excretion in rats: role of adenosine. The regulation of renal sodium and water excretion through a hepatorenal reflex activated by the changes in hemodynamics of the portal circulation has been suggested. We hypothesize that the changes in intrahepatic blood flow and flow-related intrahepatic adenosine are involved in the control of renal water and sodium excretion by triggering a hepatorenal reflex. Anesthetized rats were instrumented to monitor the systemic, hepatic, and renal circulation. A vascular shunt connecting the portal vein and central vena cava was established to allow for control of the portal venous blood flow (PVBF). Urine was collected from the bladder. The effects of decreased PVBF on renal water and sodium excretion were compared in normal and hepatic-denervated rats. Decreasing intrahepatic PVBF by half for 30 minutes decreased urine flow by 38% (12.1 ± 1.1 vs. 7.5 ± 0.7 μL/min) and urine sodium excretion by 44% (1.11 ± 0.30 vs. 0.62 ± 0.17 μmol/min). Renal arterial blood flow (RABF) and creatinine clearance were also reduced by the decreases in intrahepatic PVBF. Hepatic denervation, or intrahepatic administration of an adenosine receptor antagonist, 8-phenyltheophylline (8-PT), abolished the effects of decreasing PVBF on urine flow and sodium excretion. The data suggest that the decrease in intrahepatic PVBF triggers a hepatorenal reflex through the activation of adenosine receptors within the liver, thereby inhibiting renal water and sodium excretion.
The water and sodium retention commonly seen in the hepatorenal syndrome may be related to intrahepatic adenosine accumulation resulting from the associated decrease in intrahepatic portal flow. abstract_id: PUBMED:6834815 Acute portal hypertension and reduced renal blood flow: an intestinal-renal neurogenic reflex. In a series of experiments in anesthetized dogs, the origin and mechanism of reduced renal blood flow (RBF) during acute portal hypertension was investigated. With acute superior mesenteric vein (SMV) occlusion and normalization of cardiac hemodynamics, RBF remained reduced at 66 ± 14 ml/min compared to a baseline of 160 ± 25 ml/min (P < 0.01). Neither acute superior mesenteric artery occlusion nor acute hepatic portal hypoperfusion with normal SMV pressures maintained by SMV-caval shunt resulted in reduction of RBF. Cross-perfusion studies failed to produce alterations of RBF in recipient dogs from donor dogs with SMV occlusion, reduced RBF, and normal cardiac outputs. Finally, splanchnic ganglionectomy prevented RBF reduction during SMV occlusion after volume restoration. We conclude that reduced RBF during acute portal hypertension is a result of an intestinal-renal neurogenic reflex initiated by intestinal venous congestion. abstract_id: PUBMED:24388293 Hepatorenal syndrome. Hepatorenal syndrome is a severe complication of end-stage liver disease. The pathophysiological hallmark is severe renal vasoconstriction, resulting from peripheral and splanchnic vasodilation as well as activation of renal vasoconstrictor molecules, which induce a reduction in effective arterial volume and functional renal failure. The diagnosis of hepatorenal syndrome is currently based on the exclusion of other causes of renal failure (especially prerenal). Spontaneous bacterial peritonitis is one of the triggering factors and should be sought in all patients with severe liver disease and acute renal failure. Quickly treating patients with parenteral antibiotics and albumin infusion significantly decreases the risk. The combined use of intravenous albumin, splanchnic and peripheral vasoconstrictors and/or renal replacement therapy sometimes enables a delay until liver transplantation (or combined liver-kidney transplantation in selected patients). Transplantation is in fact the only way to improve the long-term prognosis. abstract_id: PUBMED:3773326 Reflex renal vasoconstriction on portal vein distension. The present experiments were designed to study effects of neural control mechanisms on renal sympathetic nerve activity during acute portal vein distension in anesthetized dogs. Following the inflation of a balloon placed into the main portal vein of animals with the neuraxis intact (intact group), portal vein pressure in the splanchnic region increased significantly. Mean blood pressure (MBP) fell significantly and then renal vascular resistance (RVR) increased significantly in parallel with changes in portal venous pressure. In animals with sinoaortic denervation (SAD group), changes in portal venous pressure during the inflation of a balloon did not differ from the intact group. However, decreases in MBP in the SAD group were greater than those in the intact group, and sinoaortic denervation did not alter increases in RVR. In animals with both sinoaortic denervation and cervical vagotomy (vagotomy group), portal vein distension produced more profound hypotension, and significant increases in RVR occurred. This increase in RVR, however, was abolished by renal nerve denervation.
The results of the present study indicate that increases in RVR during portal vein distension, which is associated with systemic hypotension, may be mediated by an activation of efferent sympathetic renal nerves and modified by at least two neural reflex mechanisms such as carotid sinus baroreceptors and cardiopulmonary baroreceptors. In addition, local reflex systems such as stretch receptors in the venous wall of the portal vein may be involved in the excitatory response of the renal sympathetic nerves, leading to renal vasoconstriction, during portal vein distension. abstract_id: PUBMED:17218982 Intrahepatic adenosine-mediated activation of hepatorenal reflex is via A1 receptors in rats. Previous studies have shown that intrahepatic adenosine is involved in activation of the hepatorenal reflex that regulates renal sodium and water excretion. The present study aims to determine which subtype of adenosine receptors is implicated in the process. Mean arterial pressure, portal venous pressure and flow, and renal arterial flow were monitored in pentobarbital-anesthetized rats. Urine was collected from the bladder. Intraportal administration of 8-cyclopentyl-1,3-dipropylxanthine (DPCPX), a selective adenosine A1 receptor antagonist, increased urine flow by 24%, 89%, and 143% at doses of 0.01, 0.03, and 0.1 mg/kg, respectively; in contrast, DPCPX, when administered intravenously at the same doses, only increased urine flow by 0%, 18%, and 36%. The increases in urine flow induced by intraportal administration of DPCPX were abolished in rats with liver denervation. Intrahepatic infusion of adenosine significantly decreased urine flow, and this response was abolished by intraportal administration of DPCPX. Neither intraportal nor intravenous administration of 3,7-dimethyl-1-propargylxanthine, a selective adenosine A2 receptor antagonist, showed significant influence on urine flow. Systemic arterial pressure, renal blood flow and glomerular filtration rate were unaltered by the administration of any of the drugs. In conclusion, intrahepatic adenosine A1 receptors are responsible for the adenosine-mediated hepatorenal reflex that regulates renal water and sodium excretion. abstract_id: PUBMED:9860410 Acute effects of transjugular intrahepatic portosystemic stent-shunt (TIPSS) procedure on renal blood flow and cardiopulmonary hemodynamics in cirrhosis. Objective: An acute increase in portal pressure is associated with an immediate reduction in renal blood flow. It has been suggested that this supports the presence of a hepatorenal reflex. In this study, we used TIPSS placement as a model to investigate the effect of an acute reduction in portal pressure on renal blood flow and cardiopulmonary hemodynamic parameters. Methods: Eleven cirrhotic patients were studied during elective TIPSS placement for variceal hemorrhage (n = 9) or refractory ascites (n = 2). Unilateral renal blood flow (RBF) was measured before and at 5, 15, 30, 45, and 60 min after shunt insertion. Heart rate (HR), mean arterial pressure (MAP), right atrial pressure (RAP), mean pulmonary artery pressure (PAP), pulmonary capillary wedge pressure (PCWP), cardiac output (CO), and systemic vascular resistance (SVR) were also measured before and 30 min after TIPSS placement. Results: Despite significant increases in CO (p = 0.001), RAP (p < 0.001), PAP (p < 0.001), and PCWP (p = 0.001), and a fall in SVR (p = 0.003), no change was observed in RBF, HR, or MAP after TIPSS placement.
The fall in the portoatrial pressure gradient correlated only with the rise in CO (p < 0.05) and the drop in SVR (p < 0.05). Conclusion: Despite the fall in portal pressure and the systemic hemodynamic changes caused by TIPSS placement, there is no immediate effect on RBF. Any improvement in renal function after the TIPSS procedure does not appear to be due to an acute increase in RBF. abstract_id: PUBMED:1239725 Blood flow in mesenteric, hepatic portal and renal portal veins of chickens. Blood flows were determined by electromagnetic probes placed upon the posterior vena cava (PVC), coccygeomesenteric vein (COCMV), mesenteric vein (MV), and hepatic portal vein (PV) of white Leghorn males. Blood flow in ml/min of non-fasted, unanesthetized males was as follows: (see article). Withholding food for 24 hrs decreased flow significantly only in the MV and PVC. Anesthesia decreased flow in PVC, PV and COCMV. After ligation of PVC, blood was shunted from caudal areas and renal portal circulation to COCMV and liver. Ligation of PV caused a diversion of flow to renal portal circulation and an increase in PVC flow and a reversal of direction of flow in COCMV. abstract_id: PUBMED:12118394 Hepatorenal syndrome. The hepatorenal syndrome is defined as functional renal failure in advanced chronic or acute liver disease with portal hypertension. Morphologic abnormalities of the kidneys are frequently absent and tubular function is preserved. Patients with the hepatorenal syndrome are characterized by progressive splanchnic and systemic vasodilation and decreased effective arterial blood volume. Compensatory activation of vasoconstrictor systems maintains systemic hemodynamic stability but causes progressive afferent renal vasoconstriction, leading to reduction of glomerular filtration rate. Renal failure may be rapidly progressive (type I hepatorenal syndrome, frequently associated with spontaneous bacterial peritonitis) or may develop more slowly (type II). Orthotopic liver transplantation is the best current treatment and leads to a gradual recovery of renal function in the vast majority of patients. Because mortality of type I hepatorenal syndrome is excessive, supportive treatment by vasoconstrictor drugs, transjugular intrahepatic portosystemic shunt, and renal replacement therapy has been investigated to achieve stability until transplantation. The definitive role of these promising developments, however, is still uncertain, emphasizing the need for large prospective multicentric investigations. abstract_id: PUBMED:4026043 Blood flow in the renal portal circulation of the turkey: effect of epinephrine. Control of blood flow in the renal portal circulation of turkeys given epinephrine was evaluated by determining the distribution of injected radiopaque contrast medium within the portal circulation. Radiopacity in the region of the renal parenchyma and/or the caudal renal portal vein was increased in each of the 6 birds after epinephrine (4.0 micrograms/kg of body weight) was injected into the wing vein. In 1 bird, the contrast medium also was distributed into the mesenteric vein. Thus, epinephrine not only induced flow of blood from the pelvic limb (leg) to the kidneys, but to the abdominal viscera as well. Regulation of the patency of the renal portal valve is thought to be one means by which renal portal blood flow is controlled. However, epinephrine had no consistent effect on the valve.
Thus, epinephrine seemed to control renal portal blood flow and, perhaps, blood flow to other abdominal viscera at sites other than the renal portal valve. Answer: The existence of a hepatorenal reflex in humans is supported by evidence showing a reduction in renal blood flow (RBF) following an acute increase in portal pressure. In a study involving patients with cirrhosis who underwent transjugular intrahepatic portosystemic stent-shunts (TIPSS), a significant reduction in unilateral RBF was observed after shunt occlusion, which correlated with changes in the portal atrial gradient. This reduction occurred without significant changes in heart rate, mean arterial pressure, or right atrial pressure, and no significant changes were found in the concentrations of various neurohumoral factors measured. These findings suggest the presence of a hepatorenal reflex that is important in the regulation of RBF in humans, although other mechanisms may also contribute (PUBMED:9203948). Additionally, in a study with anesthetized dogs, it was found that acute portal hypertension induced by superior mesenteric vein (SMV) occlusion led to a reduced RBF, which was not observed with acute superior mesenteric artery occlusion or acute hepatic portal hypoperfusion with normal SMV pressures. This reduction in RBF was prevented by splanchnic ganglionectomy, indicating that the reduced RBF during acute portal hypertension is a result of an intestinal-renal neurogenic reflex initiated by intestinal venous congestion (PUBMED:6834815). Furthermore, experiments in anesthetized dogs showed that acute portal vein distension led to significant increases in renal vascular resistance (RVR), which were mediated by an activation of efferent sympathetic renal nerves and modified by neural reflex mechanisms such as carotid sinus baroreceptors and cardiopulmonary baroreceptors. This suggests that increases in RVR during portal vein distension may be due to a local reflex system involving stretch receptors in the venous wall of the portal vein (PUBMED:3773326). In conclusion, the observed reduction in RBF following an acute increase in portal pressure, along with the associated studies, provides evidence for the existence of a hepatorenal reflex in humans.
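The TIPSS occlusion study above (PUBMED:9203948) reports that the fall in unilateral RBF correlated significantly with the change in the portoatrial gradient. As a sketch of the kind of calculation involved — a Pearson correlation between paired hemodynamic changes — the snippet below uses fabricated example values, not the study's data; only the shape of the analysis is meant to be illustrative.

```python
from scipy.stats import pearsonr

# Hypothetical per-patient changes after shunt occlusion
delta_gradient_mmhg = [4, 6, 7, 9, 10, 12, 13, 15, 16, 18]      # portoatrial gradient
delta_rbf_ml_min = [-40, -55, -60, -80, -90, -105, -110, -130, -135, -150]

# Correlation between the rise in gradient and the fall in RBF
r, p_value = pearsonr(delta_gradient_mmhg, delta_rbf_ml_min)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```

A strongly negative r on such paired changes is the statistical signature behind the claim that larger portal pressure rises accompany larger renal blood flow reductions.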
Instruction: Does pregnancy have an impact on the subgingival microbiota? Abstracts: abstract_id: PUBMED:23242297 Effect of periodontal therapy on the subgingival microbiota in preeclamptic patients. Introduction: Few studies have described subgingival microbiota in pregnant women with mild preeclampsia. Objective: Clinical periodontal and subgingival microbiota changes were identified in pregnant women with mild preeclampsia after periodontal treatment. Materials And Methods: In a secondary analysis of a randomized clinical trial, 57 preeclamptic women were studied at Hospital Universitario del Valle in Cali, Colombia. Thirty-one women were randomized to the periodontal intervention group (ultrasonic and manual subgingival scaling and root planing) during pregnancy and 26 to the control group (supragingival prophylaxis). Periodontal clinical parameters and subgingival microbiota were characterized at the time of acceptance into the study and again at postpartum. Eight periodontopathic bacteria and 2 herpesviruses were assessed by polymerase chain reaction. Chi-square, McNemar or Student's t tests were used, with a significance level of p≤0.05. Results: Both groups were comparable in the clinical and microbiological variables at baseline. Periodontal treatment reduced the average pocket depth in the intervention group from 2.4±0.3 to 2.3±0.2 mm (p<0.001) and in the control group from 2.6±0.4 to 2.44±0.4 mm (p<0.001), and the bleeding index from 16.4±1.5% to 7.9±0.7% in the intervention group (p<0.001) and from 17.1±1.8% to 10±0.9% in the control group (p=0.002). The frequency of detection of microorganisms did not differ significantly between groups. Conclusion: Scaling/root planing and supragingival prophylaxis significantly reduced the probing depth and gingival bleeding index. Periodontal treatment was not more effective than prophylaxis in reducing periodontopathic organisms or herpesvirus. abstract_id: PUBMED:35646730 Subgingival Microbiome in Pregnancy and a Potential Relationship to Early Term Birth. Background: Periodontal disease in pregnancy is considered a risk factor for adverse birth outcomes. Periodontal disease has a microbial etiology; however, the subgingival microbiome in pregnancy is not well understood. Objective: To characterize the structure and diversity of the subgingival microbiome in early and late pregnancy and explore relationships between the subgingival microbiome and preterm birth among pregnant Black women. Methods: This longitudinal descriptive study used 16S rRNA sequencing to profile the subgingival microbiome of 59 Black women and describe microbial ecology using alpha and beta diversity metrics. We also compared microbiome features across early (8-14 weeks) and late (24-30 weeks) gestation overall and according to gestational age at birth outcomes (spontaneous preterm, spontaneous early term, full term). Results: In this sample of Black pregnant women, the top twenty bacterial taxa represented in the subgingival microbiome included a spectrum representative of various stages of biofilm progression leading to periodontal disease, including known periopathogens Porphyromonas gingivalis and Tannerella forsythia. Other organisms associated with periodontal disease reflected in the subgingival microbiome included several Prevotella spp. and Campylobacter spp.
Measures of alpha or beta diversity did not distinguish the subgingival microbiome of women according to early/late gestation or full term/spontaneous preterm birth; however, alpha diversity differences in late pregnancy between women who spontaneously delivered early term and women who delivered full term were identified. Several taxa were also identified as being differentially abundant according to early/late gestation and full term/spontaneous early term births. Conclusions: Although the composition of the subgingival microbiome is shifted toward complexes associated with periodontal disease, the diversity of the microbiome remains stable throughout pregnancy. Several taxa were identified as being associated with spontaneous early term birth. Two, in particular, are promising targets of further investigation. Depletion of the oral commensal Lautropia mirabilis in early pregnancy and elevated levels of Prevotella melaninogenica in late pregnancy were both associated with spontaneous early term birth. abstract_id: PUBMED:27833518 Sensitivity and specificity of subgingival bacteria in predicting preterm birth - a pilot cohort study. Objective: Preterm birth (PTB) increases the risk of adverse outcomes for newborn infants. Subgingival bacteria are implicated in causing PTB. The aim of the present study was to assess the accuracy of some subgingival gram-positive and gram-negative bacteria detected by routine lab procedures in predicting PTB. Methodology: Pregnant Saudi women (n = 170) visiting King Fahad hospital, Dammam, Saudi Arabia, were included in a pilot cohort study. Plaque was collected in the 2nd trimester and screened for subgingival anaerobes using Vitek2. Pregnancy outcome (preterm/full term birth) was assessed at delivery. Sensitivity, specificity and positive and negative likelihood ratios were calculated for the identified bacteria to predict PTB. Results: Data on time of delivery were available for 94 subjects, and 22 (23.4%) had PTB. Three gram-negative and 4 gram-positive subgingival bacteria had sensitivity ≥ 95%, with two of each having negative likelihood ratios ≤0.10. Three gram-positive bacteria had specificity > 95%, with only one having a positive likelihood ratio >2. Conclusion: Subgingival bacteria identified using readily available lab techniques in the plaque of pregnant Saudi women in their 2nd trimester have useful potential to rule out PTB. abstract_id: PUBMED:32508773 Impact of Microbiota Transplant on Resistome of Gut Microbiota in Gnotobiotic Piglets and Human Subjects. Microbiota transplant is becoming a popular process to restore or initiate "healthy" gut microbiota and immunity. However, the potential risks of the related practices need to be carefully evaluated. This study retrospectively examined the resistomes of donated fecal microbiota for treating intestinal disorders, vaginal microbiota of pregnant women, and infant fecal microbiota from rural and urban communities, as well as the impact of transplants on the fecal resistome of human and animal recipients. Antibiotic resistance (AR) genes were found to be abundant in all donor microbiota. An overall surge of resistomes with higher prevalence and abundance of AR genes was observed in the feces of all transplanted gnotobiotic pigs as well as in the feces of infant subjects, compared to those in donor fecal and maternal vaginal microbiota.
Surprisingly, transplants using rural Amish microbiota led to more, rather than fewer, AR genes in the fecal microbiota of gnotobiotic pigs than did transplants using urban microbiota. New AR gene subtypes that were originally undetected also appeared in gnotobiotic pigs, in Crohn's Disease (CD) patients after transplant, and in feces of infant subjects. The data illustrated the key role of the host gastrointestinal tract system in amplifying the ever-increasing AR gene pool, even without antibiotic exposure. The data further suggest that the current approaches to microbiota transplant can introduce significant health risk factor(s) to the recipients, and newborn human and animal hosts with naïve gut microbiota were especially susceptible. Given the illustrated public health risks of microbiota transplant, minimizing massive and unnecessary damage to the gut microbiota from oral antibiotics and other gut-impacting drugs becomes important. Since eliminating risk factors including AR bacteria and opportunistic pathogens directly from donor microbiota is still difficult to achieve, developing microbial cocktails with defined organisms and functions has further become an urgent need, should microbiota transplantation become necessary. abstract_id: PUBMED:27461975 Smoking, pregnancy and the subgingival microbiome. The periodontal microbiome is known to be altered during pregnancy as well as by smoking. However, despite the fact that 2.1 million women in the United States smoke during their pregnancy, the potentially synergistic effects of smoking and pregnancy on the subgingival microbiome have never been studied. Subgingival plaque was collected from 44 systemically and periodontally healthy non-pregnant nonsmokers (control), non-pregnant smokers, pregnant nonsmokers and pregnant smokers and sequenced using 16S-pyrotag sequencing. 331,601 classifiable sequences were compared against HOMD. Community ordination methods and co-occurrence networks were used along with non-parametric tests to identify differences between groups. Linear Discriminant Analysis revealed significant clustering based on pregnancy and smoking status. Alpha diversity was similar between groups; however, pregnant women (smokers and nonsmokers) demonstrated higher levels of gram-positive and gram-negative facultatives, and lower levels of gram-negative anaerobes when compared to smokers. Each environmental perturbation induced distinctive co-occurrence patterns between species, with unique network anchors in each group. Our study thus suggests that the impact of each environmental perturbation on the periodontal microbiome is unique, and that when they are superimposed, the sum is greater than its parts. The persistence of these effects following cessation of the environmental disruption warrants further investigation. abstract_id: PUBMED:25659048 Autochthonous microbiota, probiotics and prebiotics. The autochthonous microbiota is the community of microorganisms that colonizes the skin and mucosal surfaces. The symbiosis is, generally, mutualistic but it can become parasitic due to immune response alterations. The skin microbiota includes bacteria (95%), lipophilic fungi and mites. In the digestive apparatus, each cavity presents its own microbiota, which reaches its target organ during the perinatal period, giving rise to complex and stable communities (homeostasis). The vaginal microbiota varies with endocrine activity, significantly increasing during the fertile and pregnancy periods, when lactobacilli are the most abundant organisms.
The autochthonous microbiota provides four main benefits: i) delivery of essential nutrients, such as vitamins and some amino acids; ii) utilization of indigestible diet components: the colonic microbiota degrades complex glycans and supplies almost 20% of the calories present in a normal diet; iii) development of the immune system: continuous contact keeps the immune system alert and able to repel pathogens efficiently; and iv) microbial antagonism, which hinders colonization of our mucosal surfaces by allochthonous, potentially pathogenic organisms. This works through three mechanisms: colonization interference, production of antimicrobials and co-aggregation with the potential pathogens. The microbiota can occasionally cause harm: opportunistic endogenous infections and generation of carcinogenic compounds. Probiotics are "live microorganisms that, when administered in adequate amounts, confer a health benefit to the consumer". Prebiotics are indigestible glycans that enhance the growth or activity of the intestinal microbiota, thus generating a health benefit. Synbiotics are mixes of probiotics and prebiotics that exert a synergistic health effect. abstract_id: PUBMED:26267776 Microbiota in women; clinical applications of probiotics. The main function of the vaginal microbiota is to protect the mucosa against the colonization and growth of pathogenic microorganisms. This microbiota is modified by hormonal activity. Its maximum concentration and effectiveness occur during the fertile period, when lactobacilli predominate. Its reduction (microbiota dysbiosis) leads to bacterial vaginosis and candida vaginitis, which are common diseases in women. Consequently, instillation of lactobacilli in the vagina has beneficial effects on the symptomatology and prognosis of these illnesses. Breast milk is one of the key factors in the development of the gut microbiota of the infant. There is an enteric-breast circulation, which is higher at the end of pregnancy and during breastfeeding. This circulation could explain the modulation of the breast microbiota by probiotics, which could have a positive impact not only on the health of the mother, by reducing the incidence of mastitis, but also on that of her infant. The use of probiotics is a hopeful alternative in various gynecological pathologies. However, well-designed randomized trials with standardized methods and sufficient numbers of patients are needed first to confirm their benefits and allow their use in protocols. abstract_id: PUBMED:33123496 A Human Microbiota-Associated Murine Model for Assessing the Impact of the Vaginal Microbiota on Pregnancy Outcomes. Disease states are often linked to large-scale changes in microbial community structure that obscure the contributions of individual microbes to disease. Establishing a mechanistic understanding of how microbial community structure contributes to certain diseases, however, remains elusive, thereby limiting our ability to develop successful microbiome-based therapeutics. Human microbiota-associated (HMA) mice have emerged as a powerful approach for directly testing the influence of microbial communities on host health and disease, with the transfer of disease phenotypes from humans to germ-free recipient mice widely reported. We developed an HMA mouse model of the human vaginal microbiota to interrogate the effects of Bacterial Vaginosis (BV) on pregnancy outcomes.
We collected vaginal swabs from 19 pregnant African American women with and without BV (diagnosed per Nugent score) to colonize female germ-free mice and measure the impact on birth outcomes. There was considerable variability in the microbes that colonized each mouse, with no association with the BV status of the microbiota donor. Although some of the women in the study had adverse birth outcomes, the vaginal microbiota was not predictive of adverse birth outcomes in mice. However, elevated levels of pro-inflammatory cytokines in the uterus of HMA mice were detected during pregnancy. Together, these data outline the potential uses and limitations of HMA mice to elucidate the influence of the vaginal microbiota on health and disease. abstract_id: PUBMED:24902044 Intestinal microbiota during early life - impact on health and disease. In the first years after birth, the intestinal microbiota develops rapidly both in diversity and complexity, whereas it remains relatively stable in healthy adults. Different lifestyle-related factors as well as medical practices have an influence on the early-life intestinal colonisation. We address the impact of some of these factors on the subsequent microbiota development and later health. An overview is presented of the microbial colonisation steps and the role of the host in that process. Moreover, new early biomarkers are discussed with examples that include the association of microbiota and atopic diseases, the correlation of colic and early development and the impact of the use of antibiotics in early life. Our understanding of the development and function of the intestinal microbiota is constantly improving but the long-term influence of early-life microbiota on later life health deserves careful clinical studies. abstract_id: PUBMED:19228092 Does pregnancy have an impact on the subgingival microbiota? Background: We investigated clinical and subgingival microbiologic changes during pregnancy in 20 consecutive pregnant women ≥18 years not receiving dental care. Methods: Bacterial samples from weeks 12, 28, and 36 of pregnancy and at 4 to 6 weeks postpartum were processed for 37 species by checkerboard DNA-DNA hybridization. Clinical periodontal data were collected at week 12 and at 4 to 6 weeks postpartum, and bleeding on probing (BOP) was recorded at sites sampled at the four time points. Results: The mean BOP at week 12 and postpartum was 40.1% ± 18.2% and 27.4% ± 12.5%, respectively. The corresponding mean BOP at microbiologic test sites was 15% (week 12) and 21% (postpartum; not statistically significant). Total bacterial counts decreased between week 12 and postpartum (P < 0.01). Increased bacterial counts over time were found for Neisseria mucosa (P < 0.001). Lower counts (P < 0.001) were found for Capnocytophaga ochracea, Capnocytophaga sputigena, Eubacterium saburreum, Fusobacterium nucleatum naviforme, Fusobacterium nucleatum polymorphum, Leptotrichia buccalis, Parvimonas micra (previously Peptostreptococcus micros or Micromonas micros), Prevotella intermedia, Prevotella melaninogenica, Staphylococcus aureus, Streptococcus anginosus, Streptococcus intermedius, Streptococcus mutans, Streptococcus oralis, Streptococcus sanguinis, Selenomonas noxia, and Veillonella parvula. No changes occurred between weeks 12 and 28 of pregnancy. Counts of Aggregatibacter actinomycetemcomitans (previously Actinobacillus actinomycetemcomitans), Porphyromonas gingivalis, Tannerella forsythia (previously T.
forsythensis), and Treponema denticola did not change. Counts of P. gingivalis and T. forsythia at week 12 were associated with gingivitis (P < 0.001). Conclusions: Subgingival levels of bacteria associated with periodontitis did not change. P. gingivalis and T. forsythia counts were associated with BOP at week 12. A decrease was found in 17 of 37 species from week 12 to postpartum. Only counts of N. mucosa increased. Answer: Yes, pregnancy does have an impact on the subgingival microbiota. Studies have shown that the periodontal microbiome is altered during pregnancy. For instance, a study found that pregnant women demonstrated higher levels of gram-positive and gram-negative facultatives, and lower levels of gram-negative anaerobes when compared to non-pregnant individuals (PUBMED:27461975). Another study reported that the mean bleeding on probing (BOP) differed between week 12 of pregnancy and postpartum, and while the total bacterial counts decreased from week 12 to postpartum, the counts of Neisseria mucosa increased over time. However, no changes were observed in the counts of bacteria associated with periodontitis such as Aggregatibacter actinomycetemcomitans, Porphyromonas gingivalis, Tannerella forsythia, and Treponema denticola (PUBMED:19228092). Moreover, the subgingival microbiota during pregnancy has been linked to adverse birth outcomes such as preterm birth. For example, a study that aimed to characterize the subgingival microbiome in early and late pregnancy found that while the diversity of the microbiome remained stable throughout pregnancy, certain taxa were associated with spontaneous early term birth (PUBMED:35646730). Another study assessing the accuracy of subgingival bacteria in predicting preterm birth found that certain subgingival gram-positive and gram-negative bacteria had high sensitivity and specificity in predicting preterm birth (PUBMED:27833518). These findings suggest that pregnancy can lead to changes in the subgingival microbiota, which may have implications for both maternal periodontal health and pregnancy outcomes.
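Several of the microbiome abstracts above summarize community structure with alpha diversity metrics. As a minimal, self-contained sketch of one such metric — using made-up taxon counts, not data from any cited study — the following computes the Shannon diversity index H = -Σ p_i ln p_i for a hypothetical subgingival plaque sample.

```python
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln(p_i)) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical read counts per taxon in one subgingival plaque sample
sample = {
    "Streptococcus oralis": 420,
    "Veillonella parvula": 310,
    "Prevotella melaninogenica": 150,
    "Porphyromonas gingivalis": 80,
    "Tannerella forsythia": 40,
}

h = shannon_diversity(list(sample.values()))
print(f"Shannon diversity H = {h:.2f}")
```

Comparing such per-sample indices across time points (e.g., early versus late gestation) is the basic operation behind the "alpha diversity remained stable" statements in these studies.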
Instruction: Are history and physical examination a good screening test for sleep apnea? Abstracts: abstract_id: PUBMED:1863025 Are history and physical examination a good screening test for sleep apnea? Objective: To determine whether presenting clinical history, pharyngeal examination, and the overall subjective impression of the clinician could serve as a sensitive screening test for sleep apnea. Design: Blinded comparison of history and physical examination with results of nocturnal polysomnography. Setting: Sleep clinic of a tertiary referral center. Patients: A total of 410 patients referred for suspected sleep apnea syndrome. Most patients reported snoring. Measurements: All patients were asked standard questions and given an examination relevant to the diagnosis of the sleep apnea syndrome, and all had full nocturnal polysomnography. Patients with more than ten episodes of apnea or hypopnea per hour of sleep were classified as having sleep apnea. Stepwise linear logistic regression was used to develop two predictive models of sleep apnea: one based on the presence of characteristic clinical features, age, sex, and body mass index; and one based on subjective clinical impression. Results: The prevalence of sleep apnea in our patients was 46%. Only age, body mass index, male sex, and snoring were found to be predictors of sleep apnea. The logistic rule discriminated between patients with and without sleep apnea (receiver operating characteristic [ROC] area, 0.77 [95% CI, 0.73 to 0.82]). For patients with a predicted probability of apnea of less than 20%, the clinical model had 94% sensitivity and 28% specificity. Subjective impression alone correctly identified only 52% of patients with sleep apnea and had a specificity of 70%. Conclusions: In patients with a high predicted probability of the sleep apnea syndrome, subjective impression alone or any combination of clinical features cannot serve as a reliable screening test. However, in patients with a low predicted probability of sleep apnea, the model based on clinical data was sufficiently sensitive to permit about a 30% reduction in the number of unnecessary sleep studies. abstract_id: PUBMED:29747936 Pulmonary Screening in Subjects after the Fontan Procedure. Objectives: To review the pulmonary findings of the first 51 patients who presented to our interdisciplinary single-ventricle clinic after undergoing the Fontan procedure. Study Design: We performed an Institutional Review Board-approved retrospective review of 51 patients evaluated following the Fontan procedure. Evaluation included history, physical examination, pulmonary function testing, and 6-minute walk. Descriptive statistics were used to describe the population and testing data. Results: Sixty-one percent of the patients had a pulmonary concern raised during the visit. Three patients had plastic bronchitis. Abnormal lung function testing was present in 46% of patients. Two-thirds (66%) of the patients had significant desaturation during the 6-minute walk test. Patients who underwent a fenestrated Fontan procedure and those who underwent an unfenestrated Fontan were compared in terms of saturation and 6-minute walk test results. Sleep concerns were present in 45% of the patients. Conclusions: Pulmonary morbidities are common in patients after Fontan surgery and include plastic bronchitis, abnormal lung function, desaturations with walking, and sleep concerns.
Abnormal lung function and obstructive sleep apnea may stress the Fontan circuit and may have implications for cognitive and emotional functioning. A pulmonologist involved in the care of patients after Fontan surgery can assist in screening for comorbidities and recommend interventions. abstract_id: PUBMED:24487608 Diagnostic capability of questionnaires and clinical examinations to assess sleep-disordered breathing in children: a systematic review and meta-analysis. Background: The reference standard for the diagnosis of pediatric sleep-disordered breathing (SDB) is full polysomnography (PSG), an overnight sleep study. There are many obstacles to children being able to undergo a full PSG; therefore, the authors evaluated the diagnostic value of alternative diagnostic methods (clinical history and physical examination) for pediatric SDB. Types Of Studies Reviewed: The authors selected articles in which the investigators' primary objective was to evaluate the diagnostic capability of physical evaluations and questionnaires compared with the current reference standard (that is, a full PSG) to diagnose SDB in children younger than 18 years. The authors searched several electronic databases without limitations. Results: Using a two-step selection process, the authors identified 24 articles and used them to conduct a qualitative analysis. They conducted a meta-analysis on 11 of these articles. Among these articles, only one involved a test that had diagnostic accuracy good enough to warrant its use as a screening method for pediatric SDB, but its diagnostic accuracy was not sufficient to be considered a true diagnostic tool (that is, a replacement for full PSG) for pediatric SDB. Practical Implications: The involvement of dentists in the screening process for pediatric SDB can contribute significantly to children's health. The identified questionnaire could be considered an acceptable screening test to determine which children to refer to a sleep medicine specialist. abstract_id: PUBMED:21354492 Clinical diagnosis and physical examination. In children, medical history and meticulous examination are essential to the diagnosis and future treatment of all the alterations contributing to sleep breathing disorders. Examination of the oropharynx aids assessment of hypertrophy of the palatine tonsils, while fiberoptic endoscopy assists in the diagnosis of adenoid hypertrophy. Among radiological examinations, only cephalometry has proved to be useful in the study of the facial skeleton. Lateral radiography of the nasopharynx to study adenoid vegetations has been surpassed by fiberoptic endoscopy in terms of diagnostic performance. All examinations facilitate an etiological and topographical diagnosis of patients with sleep breathing disorders. The diagnosis of respiratory problems that affect children's dentofacial development can begin at a very early age, since early detection is essential to preventing the effects of these alterations on orofacial morphology and function. This article reviews the basic and additional dental examinations that should be conducted in children with upper airway obstruction and a medical history of sleep breathing disorders. abstract_id: PUBMED:10502893 Who needs a sleep test? The value of the history in the diagnosis of obstructive sleep apnea. Many experts believe that a polysomnogram to screen for obstructive sleep apnea should be performed on every patient who has a history of loud snoring and sleepiness.
In contrast, the author believes that with a careful history and physical examination, there is no need to study all such patients, at least not until home polysomnography units become as convenient and economical as pulse oximetry. abstract_id: PUBMED:12782807 Association of systematic head and neck physical examination with severity of obstructive sleep apnea-hypopnea syndrome. Objectives/hypothesis: To identify upper airway and craniofacial abnormalities is the principal goal of clinical examination in patients with obstructive sleep apnea-hypopnea syndrome. The aim was to identify anatomical abnormalities that could be seen during a simple physical examination and determine their correlation with apnea-hypopnea index (AHI). Study Design: Consecutive patients with obstructive sleep apnea-hypopnea syndrome who were evaluated in a public otorhinolaryngology center were studied. Methods: Adult patients evaluated previously with polysomnography met the inclusion criteria. All subjects underwent clinical history and otolaryngological examination and filled out a sleepiness scale. Physical examination included evaluation of pharyngeal soft tissue, facial skeletal development, and anterior rhinoscopy. Results: Two hundred twenty-three patients (142 men and 81 women) were included (mean age, 48 ± 12 y; body mass index, 29 ± 5 kg/m2; AHI, 23.8 ± 24.8 events per hour). Patients were distributed into two groups according to the AHI: snorers (18.4%) and patients with sleep apnea (81.7%). Sleepiness and nasal obstruction were reported by approximately half of patients, but the most common complaint was snoring. There was a statistically significant correlation between AHI and body mass index (P < .000), modified Mallampati classification (P = .002), and ogival palate (P < .001). Retrognathia was not correlated with AHI, but this anatomical alteration was much more frequent in patients with severe apnea than in snorers (P = .05). Other correlations with AHI were performed considering multiple factors divided into two groups of anatomical abnormalities: pharyngeal (three or more) and craniofacial (two or more) abnormalities. There was a statistically significant correlation between pharyngeal landmarks and AHI (correlation coefficient [r] = 0.147, P = .027), but not between craniofacial landmarks and AHI. The combination of pharyngeal anatomical abnormalities, modified Mallampati classification, and body mass index was also predictive of apnea severity. Conclusions: The systematic physical examination used in the present study indicated that, in combination, body mass index, modified Mallampati classification, and pharyngeal anatomical abnormalities are related to both presence and severity of obstructive sleep apnea-hypopnea syndrome. Hypertrophied tonsils were observed in only a small portion of the patients. The frequency of symptoms of nasal obstruction was high in sleep apnea patients. Further studies are needed to find the best combination of anatomical and other clinical landmarks that are related to obstructive sleep apnea.
History and physical examination have insufficient sensitivity and specificity for diagnosing pediatric SDB. Adenotonsillectomy remains first-line therapy for pediatric SDB and obstructive sleep apnea (OSA). Additional studies of limited therapies for mild OSA are necessary to determine if these are reasonable primary methods of treatment or if they should be reserved for children with persistent OSA. abstract_id: PUBMED:25921055 Detection of pediatric obstructive sleep apnea syndrome: history or anatomical findings? Objective: To assess how history and/or anatomical findings differ in diagnosing pediatric obstructive sleep apnea (OSA). Methods: Children aged 2-18 years were recruited and assessed for anatomical (ie, tonsil size, adenoid size, and obesity) and historical findings (ie, symptoms) using a standard sheet. History and anatomical findings, as well as those measures significantly correlated with OSA, were identified to establish the historical, anatomical, and the combined model. OSA was diagnosed by polysomnography. The effectiveness of those models in detecting OSA was analyzed by model fit, discrimination (C-index), calibration (Hosmer-Lemeshow test), and reclassification properties. Results: A total of 222 children were enrolled. The anatomical model included tonsil hypertrophy, adenoid hypertrophy, and obesity, whereas the historical model included snoring frequency, snoring duration, awakening, and breathing pause. The C-index was 0.84 for the combined model, which significantly differed from that in the anatomical (0.78, p = 0.003) and historical models (0.72, p < 0.001). The Hosmer-Lemeshow test revealed an adequate fit for all of the models. Additionally, the combined model reclassified 10.3% (p = 0.044) and 21.9% (p = 0.003) of all subjects more accurately than the anatomical and historical models, respectively. Internal validation of the combined model by the bootstrapping method showed a fair model performance. Conclusion: Overall performance of combined anatomical and historical findings offers incremental utility in detecting OSA. Results of this study suggest integrating both history and anatomical findings for a screening scheme of pediatric OSA. abstract_id: PUBMED:15933515 Head and neck physical examination: comparison between nonapneic and obstructive sleep apnea patients. Study Objectives: The purpose of this study was to apply a systematic physical examination, used to evaluate obstructive sleep apnea (OSA) patients, in nonapneic patients. Design: Prospective study. Setting: Patients were seen in the sleep laboratory and department of otorhinolaryngology. Patients Or Participants: Nonapneic patients (n = 100) were involved in the study. Interventions: Physical examination to evaluate facial skeleton, pharyngeal soft tissue, rhinoscopy, and body mass index. Data were compared with a previously published study (2003) on a group of OSA patients (n = 223). Measurements And Results: Skeletal examination detected retrognathism in 6%, class II occlusion in 12%, and high-arched hard palate in 11%. The modified Mallampati classification showed 54% in class I to II and 46% in class III to IV. Only 1% of nonapneic patients had tonsils of degree III to IV. Oropharynx evaluation showed web palate in 38%, posterior palate in 19%, thick palate in 10%, thick uvula in 10%, long uvula 15%, voluminous lateral wall in 11%, and tongue edge crenations in 28%. Anterior rhinoscopy detected significant septal deviation in 1% and turbinate hypertrophy in 31% of patients.
Conclusions: The head and neck physical examination, considering both skeletal and soft tissue alterations, illustrated significant differences between nonapneic and OSA patients. Body mass index, modified Mallampati classification, tonsillar hypertrophy, and high-arched hard palate, previously related to the presence of sleep apnea in the literature, showed different outcomes in nonapneic patients. Nonapneic patients had fewer alterations in nasal anatomy (severe septal deviation and enlarged turbinate). Skeletal parameters, such as retropositioned mandible and Angle class II occlusion, were less frequent in nonapneic patients. abstract_id: PUBMED:16895257 Physical examination: Mallampati score as an independent predictor of obstructive sleep apnea. Study Objective: To assess the clinical usefulness of the Mallampati score in patients with obstructive sleep apnea. Mallampati scoring of the oropharynx is a simple noninvasive method used to assess the difficulty of endotracheal intubation, but its clinical usefulness has not been validated in patients with sleep-disordered breathing. Design: Prospective multivariate assessment of a predictor variable. Setting: The UCSF Sleep Disorders Center. Patients Or Participants: One hundred thirty-seven adult patients who were evaluated for possible obstructive sleep apnea. Interventions: Prospective determination of the Mallampati score, assessment of other variables for multivariate analysis, and subsequent overnight polysomnography. Measurements And Results: The Mallampati score was an independent predictor of both the presence and severity of obstructive sleep apnea. On average, for every 1-point increase in the Mallampati score, the odds of having obstructive sleep apnea (apnea-hypopnea index ≥ 5) increased more than 2-fold (odds ratio [per 1-point increase] = 2.5; 95% confidence interval: 1.2-5.0; p = .01), and the apnea-hypopnea index increased by more than 5 events per hour (coefficient = 5.2; 95% confidence interval: 0.2-10; p = .04). These results were independent of more than 30 variables that reflected airway anatomy, body habitus, symptoms, and medical history. Conclusion: Our results indicate that Mallampati scoring is a useful part of the physical examination of patients prior to polysomnography. The independent association between Mallampati score and presence and severity of obstructive sleep apnea suggests that this scoring system will have practical value in clinical settings and prospective studies of sleep-disordered breathing. Answer: The effectiveness of history and physical examination as a screening test for sleep apnea varies according to different studies and contexts. One study found that in patients with a high predicted probability of sleep apnea syndrome, subjective impression alone or any combination of clinical features cannot serve as a reliable screening test. However, in patients with a low predicted probability of sleep apnea, a model based on clinical data was sufficiently sensitive to permit about a 30% reduction in the number of unnecessary sleep studies (PUBMED:1863025). In pediatric populations, the diagnostic capability of questionnaires and clinical examinations compared to full polysomnography (PSG) was evaluated. It was found that only one test had diagnostic accuracy good enough to warrant its use as a screening method for pediatric sleep-disordered breathing (SDB), but not sufficient to replace full PSG (PUBMED:24487608).
Another study suggested that integrating both history and anatomical findings offers incremental utility in detecting pediatric obstructive sleep apnea (OSA) (PUBMED:25921055). For patients after the Fontan procedure, pulmonary morbidities including sleep concerns were common, indicating the importance of screening for comorbidities such as obstructive sleep apnea (PUBMED:29747936). A systematic physical examination was found to be useful in identifying anatomical abnormalities related to both the presence and severity of obstructive sleep apnea-hypopnea syndrome (OSA-HS), with body mass index, modified Mallampati classification, and pharyngeal anatomical abnormalities being predictive of apnea severity (PUBMED:12782807). The Mallampati score was also identified as an independent predictor of both the presence and severity of OSA (PUBMED:16895257). In contrast, another study suggested that history and physical examination have insufficient sensitivity and specificity for diagnosing pediatric SDB, and adenotonsillectomy remains the first-line therapy for pediatric SDB and OSA (PUBMED:22980685). Overall, while history and physical examination can provide some indication of sleep apnea, they are not consistently reliable as standalone screening tests, and their effectiveness can be improved when combined with other clinical data or used in specific patient populations. Full nocturnal polysomnography remains the gold standard for diagnosis.
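As an aside for readers working with the screening statistics quoted above: sensitivity, specificity, and predictive values all derive from a simple 2x2 table of test results against disease status. The short Python sketch below is illustrative only and is not part of any cited study; the counts are hypothetical but chosen so that, at the 46% prevalence reported in PUBMED:1863025, they reproduce roughly the 94% sensitivity and 28% specificity quoted for the clinical model.

```python
# Illustrative sketch only -- the counts are hypothetical, not study data.
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic-accuracy measures from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "PPV": tp / (tp + fp),          # positive predictive value
        "NPV": tn / (tn + fn),          # negative predictive value
    }

# 100 referred patients at 46% prevalence (PUBMED:1863025): 43 true positives,
# 3 false negatives, 39 false positives, and 15 true negatives give
# approximately the reported 94% sensitivity and 28% specificity.
print(screening_metrics(tp=43, fp=39, fn=3, tn=15))
```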
Instruction: Can we rely on computed tomographic scanning to diagnose pulmonary embolism in critically ill surgical patients? Abstracts: abstract_id: PUBMED:15128121 Can we rely on computed tomographic scanning to diagnose pulmonary embolism in critically ill surgical patients? Background: Spiral computed tomographic pulmonary angiography (CTPA) is gaining an increasing role in pulmonary embolism (PE) diagnosis because it is more convenient and less invasive than conventional pulmonary angiography (PA). Encouraging reports on the reliability of CTPA for medical patients have prompted widespread use despite the fact that its value in critically ill surgical patients has been inadequately explored. Hemodynamic and respiratory issues of critical illness may interfere with CTPA's diagnostic accuracy. The objective of this study was to compare CTPA with PA for the diagnosis of PE in critically ill surgical patients. Methods: Over 30 months (August 1999-February 2002), 37 critically ill surgical patients (28 trauma and 9 non-trauma patients) with clinical suspicion of PE were enrolled prospectively. CTPA and PA were independently interpreted by four radiologists (two for each test) blinded to each other's interpretation. Clinical suspicion for PE was classified as high, intermediate, or low on the basis of predetermined criteria. PA was considered the standard of reference for the diagnosis of PE. Results: PE was found in 15 (40%) patients: central PE in 8 and peripheral PE in 7. CTPA and PA findings were different in 11 patients (30%): CTPA was false-negative in 9 patients and false-positive in 2. Its sensitivity and specificity were 50% and 100%, respectively, for central PE; 28% and 93% for peripheral PE; and 40% and 91% for all PE. There were no differences in risk factors or clinical characteristics between patients with and without PE. The level of clinical suspicion was identical in the two groups. The independent reviewers disagreed on CTPA or PA interpretations in 11% and 16% of the readings, respectively. Conclusion: PA remains the "gold standard" for diagnosis of PE in critically ill surgical patients. CTPA should be explored further before being universally accepted. Clinical criteria are unreliable for detecting PE in this population and therefore a high index of suspicion should be maintained. abstract_id: PUBMED:11343539 Spiral computed tomography for the diagnosis of pulmonary embolism in critically ill surgical patients: a comparison with pulmonary angiography. Hypothesis: Spiral computed tomographic pulmonary angiography (CTPA) is sensitive and specific in diagnosing pulmonary embolism (PE) in critically ill surgical patients. Design: Prospective study comparing CTPA with the criterion standard, pulmonary angiography (PA). Setting: Surgical intensive care unit of an academic hospital. Patients: Twenty-two critically ill surgical patients with clinical suspicion of PE. The CTPAs and PAs were independently read by 4 radiologists (2 for each test) blinded to each other's interpretation. Clinical suspicion was classified as high, intermediate, or low according to predetermined criteria. All but 2 patients had marked pulmonary parenchymal disease at the time of the event that triggered evaluation for PE. Interventions: Computed tomographic pulmonary angiography and PA in 22 patients, venous duplex scan in 19. Results: Eleven patients (50%) had evidence of PE on PA, 5 in central and 6 in peripheral pulmonary arteries.
The sensitivity and specificity of CTPA were, respectively, 45% and 82% for all PEs, 60% and 100% for central PEs, and 33% and 82% for peripheral PEs. Duplex scanning was 40% sensitive and 100% specific in diagnosing PE. The independent reviewers disagreed only in 14% of CTPA and 14% of PA interpretations. There were no differences in risk factors or clinical characteristics between patients with and without PE. The level of clinical suspicion was identical in the 2 groups. Conclusions: Pulmonary angiography remains the gold standard for the diagnosis of PE in critically ill surgical patients. Computed tomographic pulmonary angiography needs further evaluation in this population. abstract_id: PUBMED:37883056 Diagnostic Yield, Radiation Exposure, and the Role of Clinical Decision Rules to Limit Computed Tomographic Pulmonary Angiography-Associated Complications. Objectives: Computed tomographic pulmonary angiography (CT-PA) is associated with significant cost, contrast, and radiation exposure. Clinical decision rules (CDRs) reduce the need for diagnostic imaging; however, their utility in the medical intensive care unit (MICU) remains unknown. We explored the diagnostic yield and complications associated with CT-PA (radiation exposure and contrast-induced acute kidney injury [AKI]) while investigating the efficacy of CDRs to reduce unnecessary testing. Methods: All CT-PAs performed in an academic MICU over 4 years were retrospectively reviewed. The Wells and revised Geneva scores (CDRs) and radiation dose per CT-PA were calculated, and the incidence of post-CT-PA AKI was recorded. Results: A total of 439 studies were analyzed; the diagnostic yield was 11% (48 PEs). Positive CT-PAs were associated with a higher Wells score (5.8 versus 3.2, P < 0.001), but similar revised Geneva scores (6.4 versus 6.0, P = 0.32). A Wells score of ≥4 had a positive likelihood ratio of 2.1 with a negative predictive value of 98.2. More than half (88.9%) of patients with a Wells score of ≤4 developed an AKI, with 55.6% of those having recovery of renal function. Conclusions: There is overutilization of CT-PA in the MICU. The Wells score retains its negative predictive value in critically ill adult patients and may help to limit radiation exposure and contrast-induced AKI in the MICU. abstract_id: PUBMED:26661080 Impact of unsuspected subsegmental pulmonary embolism in ICU patients. Background: Critically ill patients in intensive-care units are at high risk for pulmonary embolism (PE). As modern multi-detector computed tomographic angiography (MDCT) has increased visualization of peripheral pulmonary arteries, isolated subsegmental pulmonary embolisms (ISSPE) are increasingly being detected. Aim: The aim of this study was to investigate the rate, impact on treatment, and outcome of unsuspected ISSPE in critically ill patients receiving MDCT. The secondary aim was to investigate the potential impact of contrast media-induced nephropathy (CIN) in our cohort. Methods: We conducted a retrospective single-centre analysis on critically ill adult patients treated between January 2009 and December 2012 who underwent a contrast-enhanced chest MDCT. We excluded patients with clinical suspicion of PE/ISSPE prior to CT and patients with MDCT-confirmed central PE. Clinical findings, laboratory parameters, and outcome data were recorded. Results: We identified 240 ICU patients not suspected for PE receiving MDCT. A total of 12 patients (5%) showed unexpected ISSPE, representing increased 24-h mortality (16.7 vs.
3.5%; p = 0.026) compared to non-ISSPE/non-PE patients. Thirty-day mortality did not differ between the groups (33.3 vs. 33.8%; p = 0.53). The highest mean serum creatinine level in our cohort (n = 240) was found before MDCT, with a significant decrease to day 5 (1.4 ± 1.1 vs. 1.1 ± 0.9 mg/dl; p < 0.0001) after contrast media administration. Conclusion: Critically ill patients are at relevant risk for ISSPE. ISSPE was associated with a poor 24-h outcome. In addition, in our cohort, contrast media application was not associated with increased serum creatinine. abstract_id: PUBMED:30556446 Wells and Geneva Scores Are Not Reliable Predictors of Pulmonary Embolism in Critically Ill Patients: A Retrospective Study. Background: Critically ill patients are at high risk for pulmonary embolism (PE). Specific PE prediction rules have not been validated in this population. The present study assessed the Wells and revised Geneva scoring systems as predictors of PE in critically ill patients. Methods: Pulmonary computed tomographic angiograms (CTAs) performed for suspected PE in critically ill adult patients were retrospectively identified. Wells and revised Geneva scores were calculated based on information from medical records. The reliability of both scores as predictors of PE was determined using receiver operating characteristic (ROC) curve analysis. Results: Of 138 patients, 42 (30.4%) were positive for PE based on pulmonary CTA. Mean Wells score was 4.3 (3.5) in patients with PE versus 2.7 (1.9) in patients without PE (P < .001). Revised Geneva score was 5.8 (3.3) versus 5.1 (2.5) in patients with versus without PE (P = .194). According to the Wells and revised Geneva scores, 56 (40.6%) patients and 49 (35.5%) patients, respectively, were considered as low probability for PE. Of those considered low risk by the Wells score, 15 (26.8%) had filling defects on CTA, including 2 patients with main pulmonary artery embolism. The area under the ROC curve was 0.634 for the Wells score and 0.546 for the revised Geneva score. A Wells score >4 had a sensitivity of 40%, specificity of 87%, positive predictive value of 59%, and negative predictive value of 77% for predicting PE. Conclusions: In this population of critically ill patients, Wells and revised Geneva scores were not reliable predictors of PE. abstract_id: PUBMED:1702557 The efficacy of palliative and definitive percutaneous versus surgical drainage of pancreatic abscesses and pseudocysts: a prospective study of 85 patients. We compared the efficacy of percutaneous to surgical drainage in a prospective study in 85 patients with pancreatic abscesses and pseudocysts. Percutaneous drainage of pancreatic abscesses in 18 patients cured three and palliated 12 who were eventually cured by elective surgical ablation; three patients died. This compares well to our 15 surgical patients, of whom four were cured by surgery alone and six were palliated. All were subsequently cured by additional computerized tomography-guided or ultrasound-guided percutaneous drainage and medical management or surgery. Five of the 15 died. Percutaneous drainage cured 11 of 14 infected pseudocysts and palliated two, which were subsequently cured by surgery; one was palliated but the patient was lost to follow-up. Surgical drainage cured six of 12 infected pseudocysts and palliated the other six, of which four were cured by further surgery and the other two were cured by secondary percutaneous drainage.
Nine of 12 noninfected pseudocysts were cured by percutaneous aspiration, and two were palliated and later cured. In one patient, disease progressed, and he was ultimately lost to follow-up. Thirteen of 14 noninfected pseudocysts were cured by surgical drainage. The other patient died of pulmonary embolus. In patients treated by percutaneous techniques, there were four major complications. Our study established distinct advantages of percutaneous drainage under computerized tomographic and ultrasonic guidance: (1) the procedures can be carried out under ultrasonic guidance in an intensive care unit on critically ill patients, (2) the technique proved highly effective for initial palliation, with defervescence and stabilization occurring in most critically ill patients within 48 hours, (3) findings from fine needle aspiration provided valuable information as to microorganisms and antibiotic sensitivities and differed in 29 of 85 patients from those of concomitant blood cultures, and (4) definitive eradication of the process (surgical ablation of residual necrotic material) can be elected after the patient's clinical condition stabilizes. abstract_id: PUBMED:32393725 Surgical Pulmonary Embolectomy for Acute Massive Pulmonary Embolism Using a Surgical Endoscope; Report of a Case. Despite advances in medical and surgical therapeutic techniques, acute massive pulmonary embolism has a high mortality rate. Complete clot extraction without arterial wall injury is essential to save critically ill patients. Herein, we present a case of a 72-year-old woman who was treated by surgical pulmonary embolectomy using a surgical fiberscope. The patient was admitted to our hospital with a complaint of dyspnea. Computed tomography demonstrated a massive pulmonary embolism, and echocardiography revealed a floating thrombus in the right atrium and severe right heart failure. As she suffered from circulatory collapse, percutaneous cardiopulmonary support was immediately introduced and emergency surgical embolectomy was performed. Surgery was performed under circulatory arrest, and complete clot extraction was achieved using a surgical endoscope. The patient recovered well and was discharged from the hospital on day 48 in good health. abstract_id: PUBMED:36256666 Diagnostic accuracy of multiorgan point-of-care ultrasound compared with pulmonary computed tomographic angiogram in critically ill patients with suspected pulmonary embolism. Background: Critically ill patients have a higher incidence of pulmonary embolism (PE) than non-critically ill patients, yet no diagnostic algorithm has been validated in this population, leading to the overuse of pulmonary artery computed tomographic angiogram (CTA). This study aimed to comparatively evaluate the diagnostic accuracy of point-of-care ultrasound (POCUS) combined with laboratory data versus CTA in predicting PE in critically ill patients. Methods: A prospective diagnostic accuracy study. Critically ill patients with suspected acute PE undergoing CTA were prospectively enrolled. Demographic and clinical data were collected from electronic medical records. Blood samples were collected, and the Wells and revised Geneva scores were calculated. Standardized multiorgan POCUS and CTA were performed. The discriminatory power of multiorgan POCUS combined with biochemical markers was tested using ROC curves, and multivariate analysis was performed. Results: A total of 88 patients were included, and 37 (42%) had PE.
Multivariate analysis showed a relative risk (RR) of PE of 2.79 (95% CI, 1.61-4.84) for the presence of right ventricular (RV) dysfunction, of 2.54 (95% CI, 0.89-7.20) for D-dimer levels >1000 ng/mL, and of 1.69 (95% CI, 1.12-2.63) for the absence of an alternative diagnosis to PE on lung POCUS or chest radiograph. The combination with the highest diagnostic accuracy for PE included the following variables: (1) POCUS transthoracic echocardiography with evidence of RV dysfunction; (2) lung POCUS or chest radiograph without an alternative diagnosis to PE; and (3) plasma D-dimer levels >1000 ng/mL. Combining these three findings resulted in an area under the curve of 0.85 (95% CI, 0.77-0.94), with 50% sensitivity and 96% specificity. Conclusions: Multiorgan POCUS combined with laboratory data has acceptable diagnostic accuracy for PE compared with CTA. The combined use of these methods might reduce CTA overuse in critically ill patients. abstract_id: PUBMED:32797661 Prospective Longitudinal Evaluation of Point-of-Care Lung Ultrasound in Critically Ill Patients With Severe COVID-19 Pneumonia. Objectives: To perform a prospective longitudinal analysis of lung ultrasound findings in critically ill patients with coronavirus disease 2019 (COVID-19). Methods: Eighty-nine intensive care unit (ICU) patients with confirmed COVID-19 were prospectively enrolled and tracked. Point-of-care ultrasound (POCUS) examinations were performed with phased array, convex, and linear transducers using portable machines. The thorax was scanned in 12 lung areas: anterior, lateral, and posterior (superior/inferior) bilaterally. Lower limbs were scanned for deep venous thrombosis and chest computed tomographic angiography was performed to exclude suspected pulmonary embolism (PE). Follow-up POCUS was performed weekly and before hospital discharge. Results: Patients were predominantly male (84.2%), with a median age of 43 years. The median duration of mechanical ventilation was 17 (interquartile range, 10-22) days; the ICU length of stay was 22 (interquartile range, 20.2-25.2) days; and the 28-day mortality rate was 28.1%. On ICU admission, POCUS detected bilateral irregular pleural lines (78.6%) with accompanying confluent and separate B-lines (100%), variable consolidations (61.7%), and pleural and cardiac effusions (22.4% and 13.4%, respectively). These findings appeared to signify a late stage of COVID-19 pneumonia. Deep venous thrombosis was identified in 16.8% of patients, whereas chest computed tomographic angiography confirmed PE in 24.7% of patients. Five to six weeks after ICU admission, follow-up POCUS examinations detected significantly lower rates (P < .05) of lung abnormalities in survivors. Conclusions: Point-of-care ultrasound depicted B-lines, pleural line irregularities, and variable consolidations. Lung ultrasound findings were significantly decreased by ICU discharge, suggesting persistent but slow resolution of at least some COVID-19 lung lesions. Although POCUS identified deep venous thrombosis in less than 20% of patients at the bedside, nearly one-fourth of all patients were found to have computed tomography-proven PE. abstract_id: PUBMED:23164766 Pulmonary embolism in mechanically ventilated patients requiring computed tomography: Prevalence, risk factors, and outcome. Objective: To estimate the rate of pulmonary embolism among mechanically ventilated patients and its association with deep venous thrombosis. Design: Prospective cohort study.
Setting: Medical intensive care unit of a university-affiliated teaching hospital. Patients: Inclusion Criteria: mechanically ventilated patients requiring a thoracic contrast-enhanced computed tomography scan for any medical reason. Exclusion Criteria: a diagnosis of pulmonary embolism before intensive care unit admission, an allergy to contrast agents, and age younger than 18 yrs. Interventions: All the mechanically ventilated patients requiring a thoracic computed tomography underwent the standard imaging protocol for pulmonary embolism detection. Therapeutic anticoagulation was given immediately after pulmonary embolism diagnosis. All the included patients underwent a compression ultrasound of the four limbs within 48 hrs after the computed tomography scan to detect deep venous thrombosis. Results: Of 176 included patients, 33 (18.7%) had pulmonary embolism diagnosed by computed tomography, including 20 (61%) with no clinical suspicion of pulmonary embolism. By multiple logistic regression, independent risk factors for pulmonary embolism were male gender, high body mass index, history of cancer, past medical history of deep venous thrombosis, coma, and high platelet count. Previous prophylactic anticoagulant use was not a risk factor for pulmonary embolism. Of the 176 patients, 35 (19.9%) had deep venous thrombosis by compression ultrasonography, including 20 (57.1%) in the lower limbs and 24 (68.6%) related to central venous catheters. Of the 33 pulmonary embolisms, 11 (33.3%) were associated with deep venous thrombosis. The pulmonary embolism risk was increased by lower-limb deep venous thrombosis (odds ratio 4.0; 95% confidence interval 1.6-10) but not upper-limb deep venous thrombosis (odds ratio 0.6; 95% confidence interval 0.1-2.9). Crude comparison of patients with and without pulmonary embolism shows no difference in length of stay or mortality. Conclusions: In mechanically ventilated patients who needed a computed tomography, pulmonary embolism was more common than expected. Patients diagnosed with pulmonary embolism were all treated with therapeutic anticoagulation, and their intensive care unit or hospital mortality was not impacted by the pulmonary embolism occurrence. These results invite further research into early screening and therapeutic anticoagulation of pulmonary embolism in critically ill patients. Answer: The use of computed tomographic scanning, specifically computed tomographic pulmonary angiography (CTPA), to diagnose pulmonary embolism (PE) in critically ill surgical patients has been a subject of investigation. According to the studies, CTPA is gaining popularity due to its convenience and less invasive nature compared to conventional pulmonary angiography (PA). However, its reliability in critically ill surgical patients has been questioned. One study found that CTPA had a sensitivity of 50% and specificity of 100% for central PE, and 28% sensitivity and 93% specificity for peripheral PE, indicating that CTPA may miss a significant number of PEs, especially peripheral ones (PUBMED:15128121). Another study reported similar findings, with CTPA having a sensitivity of 45% and specificity of 82% for all PEs, 60% sensitivity and 100% specificity for central PEs, and 33% sensitivity and 82% specificity for peripheral PEs (PUBMED:11343539). These studies suggest that while CTPA may be effective in identifying central PEs, it is less reliable for detecting peripheral PEs in critically ill surgical patients. 
Moreover, clinical decision rules like the Wells and revised Geneva scores have been found to be unreliable predictors of PE in critically ill patients, which further complicates the use of CTPA based on clinical suspicion alone (PUBMED:30556446). The diagnostic yield of CT-PA in the medical intensive care unit (MICU) was only 11%, indicating a potential overutilization of this imaging modality (PUBMED:37883056). In summary, while CTPA is a less invasive and convenient diagnostic tool, its sensitivity, particularly for peripheral PEs, is not optimal in critically ill surgical patients. PA remains the gold standard for the diagnosis of PE in this population, and a high index of suspicion should be maintained due to the unreliability of clinical criteria for detecting PE (PUBMED:15128121). Therefore, CTPA should be used with caution and further evaluated before being universally accepted as a reliable diagnostic tool for PE in critically ill surgical patients.
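A brief illustration of the likelihood-ratio arithmetic behind these conclusions: a positive likelihood ratio converts a pre-test probability of PE into a post-test probability via the odds form of Bayes' theorem. The Python sketch below is illustrative only; it uses the LR+ of 2.1 reported for a Wells score ≥4 (PUBMED:37883056) and, as an assumed pre-test probability, the 11% diagnostic yield from the same MICU cohort.

```python
# Illustrative sketch only -- the pre-test probability is an assumed example.
def post_test_probability(pre_test_p: float, likelihood_ratio: float) -> float:
    """Convert probability to odds, multiply by the LR, convert back."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# An 11% MICU diagnostic yield updated by a positive Wells score
# (LR+ = 2.1, PUBMED:37883056) gives a post-test probability of about 21%.
print(round(post_test_probability(0.11, 2.1), 3))
```

The modest post-test probability even after a positive score is consistent with the studies' conclusion that clinical decision rules alone cannot rule PE in or out in this population.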
Instruction: Can quantitative dynamic contrast-enhanced MRI independently characterize an ovarian mass? Abstracts: abstract_id: PUBMED:20419493 Can quantitative dynamic contrast-enhanced MRI independently characterize an ovarian mass? Objectives: Our aim was to establish threshold criteria based on quantitative DCE-MRI data as independent predictors of malignancy in a complex (solid, solid/cystic) ovarian mass. Methods: The MRI of 26 lesions in 25 patients with a complex ovarian mass (age range, 17-80 years; mean 43 years) was retrospectively reviewed and correlated with histology following resection. Cases with solid tumour components, definitive histology and relevant dynamic imaging were included. These were categorised into two groups, benign (N = 14) and malignant (N = 12). Following dynamic contrast-enhanced imaging, regions of interest were drawn around the solid tumour component. Maximum actual enhancement (SImax), maximum relative enhancement (SIrel), wash-in rate (WIR) and SImax (tumour)/SImax (psoas) ratio were analysed. Threshold criteria for malignancy were established. Results: There was a significant difference in SImax (p < 0.001), SIrel (p < 0.05), WIR (p < 0.001) and SImax (tumour)/SImax (psoas) between the two groups. Optimal threshold criteria for malignancy were established: SImax ≥ 250 or SImax (tumour)/SImax (psoas) ≥ 2.35 divided the two groups with 100% sensitivity, specificity and accuracy. Conclusion: Threshold criteria established in this preliminary study using quantitative DCE-MRI provide an accurate method for the prediction of malignancy, particularly in preoperative indeterminate cases. abstract_id: PUBMED:28235128 Technical Note: Quantitative dynamic contrast-enhanced MRI of a 3-dimensional artificial capillary network. Purpose: Variability across devices, patients, and time still hinders widespread recognition of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) as a quantitative biomarker. The purpose of this work was to introduce and characterize a dedicated microchannel phantom as a model for quantitative DCE-MRI measurements. Methods: A perfusable, MR-compatible microchannel network was constructed on the basis of sacrificial melt-spun sugar fibers embedded in a block of epoxy resin. Structural analysis was performed on the basis of light microscopy images before DCE-MRI experiments. During dynamic acquisition the capillary network was perfused with a standard contrast agent injection system. Flow-dependency, as well as inter- and intrascanner reproducibility of the computed DCE parameters were evaluated using a 3.0 T whole-body MRI. Results: Semi-quantitative and quantitative flow-related parameters exhibited the expected proportionality to the set flow rate (mean Pearson correlation coefficient: 0.991, P < 2.5e-5). The volume fraction was approximately independent of changes in the applied flow rate through the phantom. Repeatability and reproducibility experiments yielded maximum intrascanner coefficients of variation (CV) of 4.6% for quantitative parameters. All evaluated parameters were well in the range of known in vivo results for the applied flow rates. Conclusion: The constructed phantom enables reproducible, flow-dependent, contrast-enhanced MR measurements with the potential to facilitate standardization and comparability of DCE-MRI examinations.
abstract_id: PUBMED:32914921 Enhanced Masses on Contrast-Enhanced Breast: Differentiation Using a Combination of Dynamic Contrast-Enhanced MRI and Quantitative Evaluation with Synthetic MRI. Background: The addition of synthetic MRI might improve the diagnostic performance of dynamic contrast-enhanced MRI (DCE-MRI) in patients with breast cancer. Purpose: To evaluate the diagnostic value of a combination of DCE-MRI and quantitative evaluation using synthetic MRI for differentiation between benign and malignant breast masses. Study Type: Retrospective, observational. Population: In all, 121 patients with 131 breast masses who underwent DCE-MRI with additional synthetic MRI were enrolled. Field Strength/Sequence: 3.0 Tesla, T1-weighted DCE-MRI and synthetic MRI acquired by a multiple-dynamic, multiple-echo sequence. Assessment: All lesions were differentiated as benign or malignant using the following three diagnostic methods: DCE-MRI type based on the Breast Imaging-Reporting and Data System; synthetic MRI type using quantitative evaluation values calculated by synthetic MRI; and a combination of the DCE-MRI + Synthetic MRI types. The diagnostic performance of the three methods was compared. Statistical Tests: Univariate (Mann-Whitney U-test) and multivariate (binomial logistic regression) analyses were performed, followed by receiver-operating characteristic curve (AUC) analysis. Results: Univariate and multivariate analyses showed that the mean T1 relaxation time in a breast mass obtained by synthetic MRI prior to injection of contrast agent (pre-T1) was the only significant quantitative value acquired by synthetic MRI that could independently differentiate between malignant and benign breast masses. The AUC for all enrolled breast masses assessed by the DCE-MRI + Synthetic MRI type (0.83) was significantly greater than that for the DCE-MRI type (0.70, P < 0.05) or synthetic MRI type (0.73, P < 0.05). The AUC for category 4 masses assessed by the DCE-MRI + Synthetic MRI type was significantly greater than that for those assessed by the DCE-MRI type (0.74 vs. 0.50, P < 0.05). Data Conclusion: A combination of synthetic MRI and DCE-MRI improves the accuracy of diagnosis of benign and malignant breast masses, especially category 4 masses. Level of Evidence: 4. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2021;53:381-391. abstract_id: PUBMED:32415476 Dynamic contrast-enhanced MRI in oncology: how we do it. Magnetic resonance imaging (MRI) is particularly attractive for clinical application in perfusion imaging thanks to the absence of ionizing radiation and the limited volumes of contrast agent (CA) required. Dynamic contrast-enhanced MRI (DCE-MRI) involves sequentially acquiring T1-weighted images through an organ of interest during the passage of a bolus administration of CA. It is a particularly flexible approach to perfusion imaging as the signal intensity time course allows not only rapid qualitative assessment, but also quantitative measures of intrinsic perfusion and permeability parameters. We examine aspects of the T1-weighted image series acquisition, CA administration, and post-processing that constitute a DCE-MRI study in clinical practice, before considering some heuristics that may aid in interpreting the resulting contrast enhancement time series. While qualitative DCE-MRI has a well-established role in the diagnostic assessment of a range of tumours, and a central role in MR mammography, clinical use of quantitative DCE-MRI remains limited outside of clinical trials.
The recent publication of proposals for standardized acquisition and analysis protocols for DCE-MRI by the Quantitative Imaging Biomarker Alliance may be an opportunity to consolidate and advance clinical practice. abstract_id: PUBMED:24404443 Atherosclerotic plaque inflammation quantification using dynamic contrast-enhanced (DCE) MRI. Inflammation plays an important role in atherosclerosis. Given the increasing interest in using in-vivo imaging methods to study the physiology and treatment effects in atherosclerosis, a noninvasive method for quantifying intraplaque inflammation is needed. Dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) has been proposed and validated to quantitatively characterize atherosclerotic plaque inflammation. Recent studies have optimized the imaging protocol and pharmacokinetic modeling techniques. All of these technical advances have further promoted DCE-MRI to clinical investigations in plaque risk assessment and therapeutic response monitoring. Although larger clinical studies are still needed, DCE-MRI has been proven to be a promising tool to reveal more about intraplaque inflammation by in vivo quantitative inflammation imaging. abstract_id: PUBMED:38007282 Dynamic Contrast-Enhanced (DCE) MRI. The non-invasive dynamic contrast-enhanced MRI (DCE-MRI) method provides valuable insights into tissue perfusion and vascularity. Primarily used in oncology, DCE-MRI is typically utilized to assess morphology and contrast agent (CA) kinetics in the tissue of interest. Interpretation of the temporal signatures of DCE-MRI data includes qualitative, semi-quantitative, and quantitative approaches. Recent advances in MRI technology allow simultaneous high spatial and temporal resolutions in DCE-MRI data acquisition on most vendor platforms, enabling the more desirable approach of quantitative data analysis using pharmacokinetic (PK) modeling. Many technical factors, including signal-to-noise ratio, temporal resolution, quantifications of arterial input function and native tissue T1, and PK model selection, need to be carefully considered when performing quantitative DCE-MRI. Standardization in data acquisition and analysis is especially important in multi-center studies. abstract_id: PUBMED:28088245 The value of dynamic contrast-enhanced MRI in characterizing complex ovarian tumors. Background: The study aimed to investigate the utility of dynamic contrast enhanced MRI (DCE-MRI) in the differentiation of malignant, borderline, and benign complex ovarian tumors. Methods: DCE-MRI data of 102 consecutive complex ovarian tumors (benign 15, borderline 16, and malignant 71), confirmed by surgery and histopathology, were analyzed retrospectively. The patterns (I, II, and III) of the time-signal intensity curve (TIC) and three semi-quantitative parameters, including enhancement amplitude (EA), maximal slope (MS), and time of half rising (THR), were evaluated and compared among benign, borderline, and malignant ovarian tumors. The types of TIC were compared using the Pearson chi-square (χ²) test between malignant and benign/borderline tumors. The mean values of EA, MS, and THR were compared using one-way ANOVA or the nonparametric Kruskal-Wallis test. Results: Fifty-nine of 71 (83%) malignant tumors showed a type-III TIC; 9 of 16 (56%) borderline tumors showed a type-II TIC, and 10 of 15 (67%) benign tumors showed a type-II TIC, with a statistically significant difference between malignant and benign tumors (P < 0.001) and between malignant and borderline tumors (P < 0.001).
MS was significantly higher in malignant tumors than in benign tumors and in borderline than in benign tumors (P < 0.001, P = 0.013, respectively). THR was significantly lower in malignant tumors than in benign tumors and in borderline than in benign tumors (P < 0.001, P = 0.007, respectively). There was no statistically significant difference between malignant and borderline tumors in MS and THR (P = 0.19, 0.153) or among malignant, borderline, and benign tumors in EA (all P > 0.05). Conclusions: DCE-MRI is helpful for characterizing complex ovarian tumors; however, semi-quantitative parameters perform poorly when distinguishing malignant from borderline tumors. abstract_id: PUBMED:33408526 Comparison of Diagnostic Efficacy Between Contrast-Enhanced Ultrasound and DCE-MRI for Mass- and Non-Mass-Like Enhancement Types in Breast Lesions. Background: Contrast-enhanced ultrasound (CEUS) can provide angiogenesis information about breast lesions; however, its diagnostic performance in comparison with that of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has not been systematically investigated. This study aimed to evaluate the diagnostic efficacy of CEUS and DCE-MRI in mass-like and non-mass-like enhancement types of breast lesions. Material And Methods: A retrospective study was conducted on 252 patients with breast lesions who underwent CEUS and DCE-MRI before surgery between January 2016 and February 2020. Histopathological results were used as reference standards. All patients were classified into mass-like and non-mass-like enhancement lesion groups. The mass-like lesion group was further divided into three categories according to different sizes (group 1: <10 mm, group 2: 10-20 mm, and group 3: >20 mm). Sensitivity, specificity, positive predictive value, negative predictive value, and receiver operating characteristic curve were analyzed to assess the diagnostic performance of these two modalities. Results: For mass-like breast lesions, DCE-MRI (Az=0.981) manifested better diagnostic performance than CEUS (Az=0.940) in medium-sized (10-20 mm) tumors (Z=2.018, P=0.043), but both had similar diagnostic performance in smaller (<10 mm) and larger (>20 mm) tumors (P=0.717, P=0.394). For non-mass-like enhancement lesions, CEUS and DCE-MRI showed no significant difference (Z=1.590, P=0.119) and revealed good diagnostic performance (Az=0.859, Az=0.947) in differentiating the two groups. Conclusion: For mass-like breast lesions, DCE-MRI showed better diagnostic performance than CEUS in differentiating benign and malignant tumors of medium size (10-20 mm) but not of smaller (<10 mm) and larger (>20 mm) sizes. For non-mass-like lesions, both modalities showed similar diagnostic performance. abstract_id: PUBMED:38132384 Quantitative Analysis of Prostate MRI: Correlation between Contrast-Enhanced Magnetic Resonance Fingerprinting and Dynamic Contrast-Enhanced MRI Parameters. This research aimed to assess the relationship between contrast-enhanced (CE) magnetic resonance fingerprinting (MRF) values and dynamic contrast-enhanced (DCE) MRI parameters (Ktrans, Kep, Ve, and iAUC). To evaluate the correlation between the MRF-derived values (T1 and T2 values, CE T1 and T2 values, T1 and T2 change) and DCE-MRI parameters and the differences in the parameters between prostate cancer and noncancer lesions in 68 patients, two radiologists independently drew regions-of-interest (ROIs) at the focal prostate lesions.
Prostate cancer was identified in 75% (51/68) of patients. The CE T2 value was significantly lower in prostate cancer than in noncancer lesions in the peripheral zone and transition zone. Ktrans, Kep, and iAUC were significantly higher in prostate cancer than noncancer lesions in the peripheral zone (p < 0.05), but not in the transition zone. The CE T1 value was significantly correlated with Ktrans, Ve, and iAUC in prostate cancer, and the CE T2 value was correlated to Ve in noncancer. Some CE MRF values are different between prostate cancer and noncancer tissues and correlate with DCE-MRI parameters. Prostate cancer and noncancer tissues may have different characteristics regarding contrast enhancement. abstract_id: PUBMED:37712359 Improved reliability of perfusion estimation in dynamic susceptibility contrast MRI by using the arterial input function from dynamic contrast enhanced MRI. The arterial input function (AIF) plays a crucial role in estimating quantitative perfusion properties from dynamic susceptibility contrast (DSC) MRI. An important issue, however, is that measuring the AIF in absolute contrast-agent concentrations is challenging, due to uncertainty in relation to the measured R2∗-weighted signal, signal depletion at high concentration, and partial-volume effects. A potential solution could be to derive the AIF from separately acquired dynamic contrast enhanced (DCE) MRI data. We aim to compare the AIF determined from DCE MRI with the AIF from DSC MRI, and estimated perfusion coefficients derived from DSC data using a DCE-driven AIF with perfusion coefficients determined using a DSC-based AIF. AIFs were manually selected in branches of the middle cerebral artery (MCA) in both DCE and DSC data in each patient. In addition, a semi-automatic AIF-selection algorithm was applied to the DSC data. The amplitude and full width at half-maximum of the AIFs were compared statistically using the Wilcoxon rank-sum test, applying a 0.05 significance level. Cerebral blood flow (CBF) was derived with different AIF approaches and compared further. The results showed that the AIFs extracted from DSC scans yielded highly variable peaks across arteries within the same patient. The semi-automatic DSC-AIF had a significantly narrower width compared with the manual AIFs, and a significantly larger peak than the manual DSC-AIF. Additionally, the DCE-based AIF provided a more stable measurement of relative CBF, and absolute CBF values estimated with DCE-AIFs were compatible with previously reported values. In conclusion, DCE-based AIFs were reproduced significantly better across vessels, showed more realistic profiles, and delivered more stable and reasonable CBF measurements. The DCE-AIF can, therefore, be considered an alternative AIF source for quantitative perfusion estimations in DSC MRI. Answer: Yes, quantitative dynamic contrast-enhanced MRI (DCE-MRI) can independently characterize an ovarian mass. A study aimed to establish threshold criteria based on quantitative DCE-MRI data as independent predictors of malignancy in complex ovarian masses. The study included 26 lesions in 25 patients, categorized into benign and malignant groups. Quantitative parameters such as maximum actual enhancement (SImax), maximum relative enhancement (SIrel), wash-in rate (WIR), and SImax (tumor)/SImax (psoas) ratio were analyzed. The results showed significant differences in these parameters between benign and malignant groups.
Optimal threshold criteria for malignancy were established with SImax ≥ 250 or SImax (tumor)/SImax (psoas) ≥ 2.35, which divided the two groups with 100% sensitivity, specificity, and accuracy. The study concluded that the established threshold criteria using quantitative DCE-MRI provide an accurate method for predicting malignancy, particularly in preoperative indeterminate cases (PUBMED:20419493).
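To make the quoted DCE-MRI parameters concrete, the Python sketch below computes SImax, SIrel, and the wash-in rate from a signal-intensity time curve and applies the malignancy thresholds from PUBMED:20419493. The curve, frame spacing, and psoas reference value are synthetic illustrations, not patient data, and the study's exact parameter definitions may differ in detail.

```python
# Illustrative sketch only -- synthetic curve, not patient data.
def dce_parameters(signal: list[float], baseline: float, dt_s: float) -> dict:
    """Semi-quantitative enhancement metrics from a dynamic SI curve."""
    si_max = max(signal)                              # maximum actual enhancement
    si_rel = (si_max - baseline) / baseline * 100.0   # maximum relative enhancement (%)
    # Wash-in rate: steepest rise between consecutive dynamic frames.
    wash_in = max((b - a) / dt_s for a, b in zip(signal, signal[1:]))
    return {"SImax": si_max, "SIrel_%": si_rel, "wash_in_per_s": wash_in}

tumour_curve = [100, 180, 260, 300, 310, 305]  # synthetic solid-component ROI
params = dce_parameters(tumour_curve, baseline=100.0, dt_s=10.0)
psoas_si_max = 120.0                           # synthetic reference-muscle value

# Thresholds from PUBMED:20419493: SImax >= 250 or tumour/psoas ratio >= 2.35.
suspicious = params["SImax"] >= 250 or params["SImax"] / psoas_si_max >= 2.35
print(params, "meets malignancy threshold:", suspicious)
```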
Instruction: Does hepatic steatosis have an impact on the short term hepatic response after complete attenuation of congenital extrahepatic portosystemic shunts? Abstracts: abstract_id: PUBMED:24819233 Does hepatic steatosis have an impact on the short term hepatic response after complete attenuation of congenital extrahepatic portosystemic shunts? A prospective study of 20 dogs. Objective: To evaluate the relationship between hepatic steatosis and increase in liver size and resolution of shunting after surgical attenuation of congenital extrahepatic portosystemic shunts in dogs. Study Design: Prospective study. Animals: Dogs (n = 20) with congenital extrahepatic portosystemic shunts. Methods: Shunts were attenuated using ameroid ring constrictors. Portal blood flow and liver volume were evaluated using computed tomography before and ≥8 weeks after surgery. Hepatic steatosis was quantified by stereological point counting of lipid droplets and lipogranulomas (LG) in liver biopsies stained with Oil-red-O. Associations between steatosis and preoperative liver volume, liver growth after surgery, and development of acquired shunts were evaluated. Results: Acquired shunts developed in 2 dogs (10%). Dogs with larger preoperative liver volumes relative to bodyweight had fewer lipid droplets per tissue point (P = .019). LG per tissue point were significantly associated with age: 0.019 ± 0.06 for dogs <12 months versus 0.25 ± 0.49 for dogs >12 months (P = .007). There was a significant positive association between liver growth after surgery and the number of LG/month of age in dogs >12 months (P = .003). There was no association between steatosis, the presence of macrosteatosis, or the number of LG and the development of acquired shunts. Conclusions: This preliminary study suggests that the presence of hepatic lipidosis and LG has no demonstrable effect on the development of acquired shunts or the magnitude of increase in liver volume after attenuation of congenital extrahepatic portosystemic shunts in dogs. abstract_id: PUBMED:21879964 Association between hepatic histopathologic lesions and clinical findings in dogs undergoing surgical attenuation of a congenital portosystemic shunt: 38 cases (2000-2004). Objective: To review hepatic histopathologic lesions in dogs undergoing surgical attenuation of a congenital portosystemic shunt (CPSS) in relation to clinical findings and tolerance of complete surgical attenuation. Design: Retrospective case series. Animals: 38 dogs that underwent surgical attenuation of a CPSS. Procedures: Hepatic histologic examination findings and medical records of dogs undergoing surgical attenuation of a single CPSS between August 2000 and July 2004 were reviewed. Liver biopsy specimens were obtained from 38 dogs during surgery prior to complete (n = 16) or partial (22) attenuation of a CPSS and from 13 of the same dogs a median of 3 months following surgical attenuation. Results: Portal tracts were inadequate for interpretation in 2 liver biopsy specimens. Liver biopsy specimens obtained prior to surgical attenuation of a CPSS had a lack of identifiable portal veins (13/36 dogs), hepatic arteriolar proliferation (25/36), ductular reaction (5/36), steatosis (16/38), and iron accumulation (32/38). Lack of identifiable portal veins on histologic examination was associated with increased hepatic arteriolar proliferation, decreased tolerance to complete surgical CPSS attenuation, and decreased opacification of intrahepatic portal vessels on portovenography.
Ductular reaction was always associated with failure to tolerate complete surgical attenuation of a CPSS. Surgical CPSS attenuation resulted in significant clinical, serum biochemical, and portovenographic changes indicative of improved liver function, but only subtle changes in hepatic histologic examination findings. Conclusions And Clinical Relevance: Dogs without identifiable intrahepatic portal veins that had a ductular reaction on hepatic histologic examination were less likely to tolerate complete attenuation of a CPSS. abstract_id: PUBMED:23528942 Evaluation of hepatic steatosis in dogs with congenital portosystemic shunts using Oil Red O staining. The aims of this prospective study were to quantify steatosis in dogs with congenital portosystemic shunts (CPS) using a fat-specific stain, to compare the amount of steatosis in different lobes of the liver, and to evaluate intra- and interobserver variability in lipid point counting. Computer-assisted point counting of lipid droplets was undertaken following Oil Red O staining in 21 dogs with congenital portosystemic shunts and 9 control dogs. Dogs with congenital portosystemic shunts had significantly more small lipid droplets (<6 μ) than control dogs (P = .0013 and .0002, respectively). There was no significant difference in steatosis between liver lobes for either control dogs or CPS dogs. Significant differences were seen between observers for the number of large lipid droplets (>9 μ) and lipogranulomas per tissue point (P = .023 and .01, respectively). In conclusion, computer-assisted counting of lipid droplets following Oil Red O staining of liver biopsy samples allows objective measurement and detection of significant differences between dogs with CPS and normal dogs. This method will allow future evaluation of the relationship between different presentations of CPS (anatomy, age, breed) and lipidosis, as well as the impact of hepatic lipidosis on outcomes following surgical shunt attenuation. abstract_id: PUBMED:29049355 Aberrant hepatic lipid storage and metabolism in canine portosystemic shunts. Non-alcoholic fatty liver disease (NAFLD) is a poorly understood multifactorial pandemic disorder. One of the hallmarks of NAFLD, hepatic steatosis, is a common feature in canine congenital portosystemic shunts. The aim of this study was to gain detailed insight into the pathogenesis of steatosis in this large animal model. Hepatic lipid accumulation, gene-expression analysis and HPLC-MS of neutral lipids and phospholipids in extrahepatic (EHPSS) and intrahepatic portosystemic shunts (IHPSS) was compared to healthy control dogs. Liver organoids of diseased dogs and healthy control dogs were incubated with palmitic- and oleic-acid, and lipid accumulation was quantified using LD540. In histological slides of shunt livers, a 12-fold increase in lipid content was detected compared to the control dogs (EHPSS P < 0.01; IHPSS P = 0.042). The involvement of lipid-related genes in steatosis in portosystemic shunting was corroborated using gene-expression profiling. Lipid analysis demonstrated different triglyceride composition and a shift towards short-chain and omega-3 fatty acids in shunt versus healthy dogs, with no difference in lipid species composition between shunt types. All organoids showed a similar increase in triacylglycerols after free fatty acid enrichment. This study demonstrates that steatosis is probably secondary to canine portosystemic shunts.
Unravelling the pathogenesis of this hepatic steatosis might contribute to a better understanding of steatosis in NAFLD. abstract_id: PUBMED:16423574 Histopathological and immunohistochemical investigations of hepatic lesions associated with congenital portosystemic shunt in dogs. Canine livers with congenital portosystemic shunt were investigated histopathologically and immunohistochemically before and 8-272 days after partial ligation of the shunt. Lesions included hypoplasia of portal veins, arteriolar and ductular proliferation, lymphangiectasis, mild to moderate fibrosis, fatty cysts, and mostly mild hepatocellular damage with frequent atrophy and steatosis, regardless of the location of the shunting vessel. Perisinusoidal hepatic stellate cells (HSCs) in normal canine liver expressed alpha-smooth muscle actin (alpha-SMA), but no desmin. In altered livers, however, raised expression of alpha-SMA was detected, together with expression of desmin, in varying numbers of HSCs. This was interpreted as a sign of cellular proliferation and transformation to myofibroblast-like cells. Additionally, there was an obvious perisinusoidal increase of several extracellular matrix components. Postoperative biopsy samples showed basically the same lesions as those of pre-operative samples, except that signs of resolution of hepatic changes were apparent. abstract_id: PUBMED:17352108 Congenital portosystemic shunt. The Abernethy malformation. Background: Congenital portosystemic shunt (CEPS) is a rare condition that was first reported by John Abernethy in 1793. Two types of CEPS are described: type I (side to end anastomosis) or congenital absence of the portal vein, and type II (side to side anastomosis) with portal vein supply partially conserved. Type I CEPS is usually seen in girls and is associated with multiple malformations such as polysplenia, malrotation, and cardiac anomalies. Type II is even rarer, with no sex preference and no associated malformations. Hepatic encephalopathy is a common complication of both types in adulthood. Liver transplantation is the only effective treatment for symptomatic type I CEPS. A therapeutic approach for type II could be surgical closure of the shunt. Objective: To analyse our experience in diagnosis and management of portosystemic shunts. Methods: We report 4 cases of CEPS (3 type I and 1 type II) diagnosed between January 1997 and March 2005 in our department. Results: We present 4 patients with ages at diagnosis ranging from 0 to 28 months, 3 with type I CEPS (2 boys and 1 girl) and 1 boy with type II. The type I girl was prenatally diagnosed at 12 weeks of gestation. Initial clinical signs in the type I boys were splenomegaly and hypersplenism, both with normal pondo-statural growth. No polysplenia or cardiac anomalies were found. One of them presented mild developmental delay, dysmorphic features and facial telangiectasias. He had normal coagulation tests with chronic hepatic dysfunction (high transaminases), and regenerative nodular lesions were seen by imaging techniques. The other type I patient had hypoprothrombinemia and a tendency to capillary bleeding (haematomas and epistaxis) with preserved liver function. Both patients have developed mild portal hypertension and show signs of steatosis at liver biopsy. The type I girl has trisomy 21 and an associated cardiac anomaly (interauricular communication). Her hepatic function tests are normal, but liver calcifications can be seen by ultrasound.
The type II child has hypospadias but no clinical sign or symptom related to the shunt. In our three cases diagnosis was suggested by conventional and Doppler ultrasound and confirmed by angio-resonance imaging. All our patients are included in a meticulous clinical and radiological follow-up, with no need for surgical treatment of the shunt so far. Conclusions: Although diagnosis of these malformations may be incidental, we have to think about CEPS in children presenting with unspecific liver disease. Magnetic angio-resonance imaging is currently the best diagnostic method for CEPS. These patients have a high risk for developing hepatic encephalopathy and portal hypertension, so a careful follow-up is required, although surgery is not usually needed until adulthood. abstract_id: PUBMED:2530818 Radionuclide hepatic perfusion index and ultrasonography: assessment of portal hypertension in clinical practice. The final value of portal blood flow pressure depends on the degree of vascular obstruction, then on the resistance in collateral vessels and, last, on splanchnic blood flow. The initiating cause of portal hypertension most often lies in advancing anatomical damage leading to increased resistance and, consequently, to a reduction of portal blood flow, and simultaneous reciprocal development of extrahepatic collaterals. The determination of a true portal flow is a necessity particularly when deciding about a shunt surgery and its type, but it also supplies valuable information on the degree of portal flow restriction and, in this way, on the progress of pathophysiological changes, their extent and advance. The technique of radionuclide angiography and determination of the hepatic perfusion index (HPI) proposed by Sarper appears to be a profitable noninvasive method supplying well-reproducible information on portal blood flow. Sarper proved it to be correlated with the degree of portal hypertension established by angiography. Ultrasonographic criteria of portal hypertension include dilatation of the portal vein in the region of the hilus hepatis exceeding 15 mm, and a more than 10 mm dilatation of the splenic vein above the spine. The mean HPI value obtained from the examination of 19 subjects without liver involvement was 0.6956 +/- 0.0583. The group of chronic hepatopathies included 19 patients with bioptically verified chronic hepatitis without reconstruction and/or steatosis, and 32 patients with liver cirrhosis likewise confirmed by biopsy: portosystemic shunts could be demonstrated in 14 of the latter. (ABSTRACT TRUNCATED AT 250 WORDS)
Hepatic histopathology demonstrates hepatocytic necrosis and apoptosis, portal inflammation, biliary proliferation, steatosis and fibrosis. There is a decrease of MMCs and CTMCs in the liver, while in the ileum CTMCs increase and MMCs decrease. These results suggest the involvement of mast cells in the pathophysiological splanchnic impairments in this experimental model. In particular, the decreased number of liver mast cells may be associated with the hepatic atrophy. If this is the case, we propose that the disruption of the hepato-intestinal axis after a portocaval shunt in the rat could inhibit the ability of the liver to develop an appropriate repair response mediated by mast cells. abstract_id: PUBMED:19243503 Hepatic steatosis and congenital portosystemic shunts: a three-dimensional transmission electron microscopic view. N/A abstract_id: PUBMED:2640042 Serum concentration of bile acids and portal hypertension in cirrhotic patients. Possible correlations For the detection of mild liver disease (acute viral hepatitis, chronic persistent hepatitis, fatty liver) serum bile acids levels have not proved to be superior to transaminases or other common liver tests with almost similar sensitivity and/or specificity. Indeed it has been possible to show in patients with compensated cirrhosis of the liver that the serum bile acids concentration is related to the degree of intrahepatic shunts and that there was a significant relationship between the fasting serum bile acids and the intrinsic clearance of ICG. Measurement of serum bile acids appears to be more sensitive for detection of cirrhosis than commonly used tests. The elevation of bile acids concentration in cirrhotic patients is thought to result from a reduced hepatic clearance and/or from portosystemic shunting. In order to determine the role of serum bile acid estimation in the indirect assessment of portal hypertension, fasting and two-hour postprandial serum bile acids concentration were measured in 36 patients with liver cirrhosis, classified according to Child-Pugh's criteria. Real-time ultrasonography, esophagogastroscopy and static liver scintigraphy were carried out in all patients. The size of esophageal varices, the portal vein and its related structure, and the nuclear criteria were graded according to common methods. Among the clinical findings, splenomegaly was noted and graded, though the size of the spleen does not correlate well with the level of portal pressure. In our patients a good correlation (p < 0.001) existed between the two-hour postprandial bile acids concentration and ultrasonographic findings of portal hypertension. Fasting serum bile acids (SBA) were significantly higher in severe than in mild liver cirrhosis according to Pugh's criteria (p < 0.001). In conclusion we think that SBA concentrations have a great prognostic value in assessment of major complications (particularly upper gastrointestinal hemorrhage). The reduced liver blood flow, due to intra- and extrahepatic porto-systemic shunting, is probably the main cause of reduced hepatic clearance of bile acids. Answer: Based on the available abstracts, hepatic steatosis does not appear to have a demonstrable impact on the short-term hepatic response after complete attenuation of congenital extrahepatic portosystemic shunts in dogs.
A prospective study of 20 dogs found no association between steatosis, presence of macrosteatosis, the number of lipogranulomas, or the development of acquired shunts after surgical attenuation of congenital extrahepatic portosystemic shunts (PUBMED:24819233). This suggests that the presence of hepatic lipidosis and lipogranulomas does not affect the development of acquired shunts or the magnitude of increase in liver volume post-surgery. However, it is important to note that other studies have identified various hepatic histopathologic lesions in dogs undergoing surgical attenuation of a congenital portosystemic shunt, such as lack of identifiable portal veins, hepatic arteriolar proliferation, ductular reaction, steatosis, and iron accumulation (PUBMED:21879964). These findings indicate that while hepatic steatosis may not directly impact the short-term hepatic response to shunt attenuation, there are other histopathologic changes that occur in the liver which could potentially influence the overall outcome and tolerance to surgical attenuation. Additionally, the evaluation of hepatic steatosis in dogs with congenital portosystemic shunts using Oil Red O staining has shown that dogs with these shunts have significantly more small lipid droplets compared to control dogs, suggesting that steatosis is a common feature in canine congenital portosystemic shunts (PUBMED:23528942). However, the impact of this steatosis on surgical outcomes was not directly addressed in this study. In summary, while hepatic steatosis is a feature observed in dogs with congenital portosystemic shunts, the evidence from the provided abstract (PUBMED:24819233) indicates that it does not have a significant impact on the short-term hepatic response following complete attenuation of these shunts.
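To make the quantification step behind these findings concrete, here is a minimal Python sketch of the kind of point-counting comparison described in PUBMED:23528942, in which lipid droplets per tissue point are computed for each dog and the CPS group is compared with controls. Only the group sizes (21 CPS, 9 control) come from the abstract; the counts, rates, and points-per-biopsy value are invented for illustration.

```python
# Hypothetical illustration of stereological point counting (PUBMED:23528942).
# Synthetic data only; the real study used Oil Red O-stained liver biopsies.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

points_per_biopsy = 100                      # assumed number of tissue points sampled
cps_droplets = rng.poisson(lam=40, size=21)  # 21 CPS dogs, as in the study
ctl_droplets = rng.poisson(lam=12, size=9)   # 9 control dogs

cps_rate = cps_droplets / points_per_biopsy  # lipid droplets per tissue point
ctl_rate = ctl_droplets / points_per_biopsy

# Non-parametric test: group sizes are small, so normality is doubtful.
stat, p = mannwhitneyu(cps_rate, ctl_rate, alternative="two-sided")
print(f"CPS {cps_rate.mean():.2f} vs control {ctl_rate.mean():.2f} droplets/point, p = {p:.4f}")
```

A rank-based test is a reasonable default here because droplet counts are skewed and the control group has only nine animals.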
Instruction: Do patients benefit from tourniquet in arthroscopic surgeries of the knee? Abstracts: abstract_id: PUBMED:22699853 Do patients benefit from tourniquet in arthroscopic surgeries of the knee? Purpose: To undertake a meta-analysis of randomized controlled trials to determine whether routine use of a tourniquet is a better choice for knee arthroscopic procedures. Methods: Randomized controlled trials which evaluated the application of a tourniquet were selected, gathering information about arthroscopic visualization and operative time. The random-effects meta-analysis was performed using relative risk calculated from the raw data. Results: A total of five eligible studies were selected in this meta-analysis with 471 participants. There was no significant difference in visualization or operative time between the tourniquet and the non-tourniquet group. Conclusions: There is insufficient evidence to support the hypothesis that patients would benefit from routinely applying a tourniquet. The use of a tourniquet did not show any advantage in arthroscopic procedures. Level Of Evidence: Therapeutic randomized controlled trials, Level I. abstract_id: PUBMED:30625755 Effects of the tourniquet deflation on bispectral index during knee arthroscopic surgery under the general anesthesia. Background: Tourniquet deflation during lower extremity surgery affects the hemodynamics and metabolism of the patient, which can affect brain activity. This study examined the changes in brain activity during tourniquet deflation by measuring the bispectral index (BIS). Methods: The BIS was measured during surgery in forty patients who had received knee arthroscopic surgery under general anaesthesia. The BIS was measured 5 minutes before deflation (DB5) and 5 minutes after deflation (DA5). Results: The BIS at DB5 and DA5 was 50.2 ± 9.9 and 44.4 ± 10.4, respectively. The BIS of DA5 was significantly lower than that of DB5 (P < 0.05). Conclusions: Tourniquet deflation during lower extremity surgery decreases the BIS, in association with hemodynamic and metabolic changes. However, its clinical significance in neurologically critical patients, such as geriatric or neurologically disabled patients, remains to be clarified. abstract_id: PUBMED:36494737 Measurement of tissue oxygen saturation during arthroscopic surgery of knee with a tourniquet. Background: Tourniquets provide better tissue visibility during arthroscopic surgery. However, multiple postoperative adverse events associated with ischemia may be caused by excessive inflation pressure and duration. We aimed to evaluate the degree of tourniquet-induced ischemia using a noninvasive continuous real-time monitoring method and the relationship between changes in tissue oxygen saturation (StO2) and blood biochemical markers of ischemic injuries in patients undergoing arthroscopic knee surgery. Methods: This was a prospective observational study using near-infrared spectroscopy (NIRS). Data were collected from 29 consecutive patients who underwent arthroscopic procedures. Twenty-five patients underwent anterior cruciate ligament reconstruction, and four underwent meniscal repair. We investigated tourniquet-induced changes in StO2, monitored using NIRS, and blood biochemical markers of ischemic injuries. Results: A significant decrease in the mean StO2 from the baseline was observed during tourniquet inflation in the operative legs. The average decrease in the mean StO2 was 58%.
A comparison of mean StO2 between the nonoperative and operative legs before tourniquet deflation showed that mean values of StO2 in the operative legs were significantly lower than those in the nonoperative legs. No significant clinical relationships were observed between changes in StO2 and blood biochemical markers of ischemic injuries (creatine kinase) (p = 0.04, r = 0.38) or tourniquet duration (p = 0.05, r = 0.366). Conclusions: Our results demonstrated that StO2 could be used to evaluate tissue perfusion in real time but did not support the hypothesis that StO2 is a useful method for predicting the degree of tourniquet-induced injury during arthroscopic knee surgery. abstract_id: PUBMED:11524355 The relationship between pneumatic tourniquet time and the amount of pulmonary emboli in patients undergoing knee arthroscopic surgeries. Near-fatal pulmonary embolism can occur immediately after tourniquet release after orthopedic surgeries. In this study, we determined the relationship between tourniquet time and the occurrence of pulmonary emboli in 30 patients undergoing arthroscopic knee surgeries, by using transesophageal echocardiography. The right atrium (RA) was continuously monitored by transesophageal echocardiography, and the number of emboli present was assessed with the following formula: Amount of emboli = 100 × [(total embolic area in the RA after tourniquet release) - (total area of emboli or artifact in the RA before tourniquet release)]/(RA area). The area was assessed 0-300 s after tourniquet release by using image-analysis software. The peak amount of emboli appeared approximately 50 s after tourniquet release. In addition, there was a significant correlation between amount of emboli (Ae [%]) and tourniquet time (Ttq [min]): (Ae = 0.1 × Ttq - 1.0, r = 0.795, P < 0.01). This study suggests that acute pulmonary embolism may occur within 1 min of tourniquet release and that the number of emboli is dependent on Ttq. abstract_id: PUBMED:11294428 The pneumatic tourniquet in arthroscopic surgery of the knee. In a randomized study, 56 patients undergoing arthroscopic surgery of the knee were randomly allocated to one of 2 groups: surgery with a tourniquet and surgery without a tourniquet. No significant difference was found between the 2 groups with regard to operating times, technical intraoperative difficulties, identification of intraarticular structures, postoperative pain or postoperative complications. In neither group was the procedure abandoned due to technical difficulties. The pain scores in the non-tourniquet group were lower than those in the group of patients operated on with the use of a pneumatic tourniquet. The study suggests that the use of a tourniquet in arthroscopic surgery of the knee is unnecessary. abstract_id: PUBMED:24837461 Tourniquet in knee surgery. Introduction: The tourniquet is a surgical device composed of a round pneumatic cuff into which air can be inflated at high pressure by an automatic programmable pump, to prevent bleeding and technical impediment. Sources Of Data: Comprehensive searches of Medline, Cochrane and Google Scholar databases were performed for studies regarding tourniquet application in arthroscopic and open surgery of the knee. The methodological quality of each study was evaluated using the Coleman methodology score (CMS). Areas Of Agreement: The use of a tourniquet does not lead to a significant increase in the risk of major complications, and there is no difference in clinical outcome in the medium term.
The inflated cuff does prevent intraoperative blood loss, but hidden blood loss is not avoided completely. There is a statistically significantly higher occurrence of deep vein thrombosis in patients who undergo surgery with a tourniquet, but the clinical relevance of this finding is uncertain. Areas Of Controversy: The heterogeneity in terms of inflating pressure and duration of application of the tourniquet in the individual studies makes it very difficult to compare the outcomes of different investigations to draw definitive conclusions. Growing Points: Standardization of pressure and application time of the cuff could allow a comparison of the data reported by the trials. Better study methodology should also be implemented, since the mean CMS considering all the reviewed articles was 57.6 out of 100. Research: More and better designed studies are needed to produce clear guidelines to standardize the use of the tourniquet in knee procedures. abstract_id: PUBMED:24444988 The effects of a small-dose ketamine-propofol combination on tourniquet-induced ischemia-reperfusion injury during arthroscopic knee surgery. Study Objective: To determine the effects of a small-dose ketamine-propofol combination used for sedation during spinal anesthesia on tourniquet-induced ischemia-reperfusion injury. Study Design: Prospective randomized study. Setting: Training and research hospital. Patients: 60 adult, ASA physical status 1 and 2 patients, ages 20-60 years, scheduled for elective arthroscopic knee surgery for meniscal and chondral lesions. Interventions: The initial hemodynamic parameters were recorded and blood samples were collected at baseline (T1); then spinal anesthesia was performed. In Group I (n=30), a combination of 0.5 mg/kg/hr of ketamine and 2 mg/kg/hr of propofol was administered; Group II (n=30) received an equivalent volume of saline as an infusion. A pneumatic tourniquet was applied. Measurements: Malondialdehyde (MDA), superoxide dismutase (SOD), and catalase levels were measured one minute before tourniquet deflation in the ischemic period (T2), then 5 (T3) and 30 (T4) minutes following tourniquet deflation in the reperfusion period. Main Results: No differences were noted between groups in hemodynamic data (P > 0.05) or SOD levels (P > 0.05). In Group I, MDA levels at T2 were lower than in Group II (P < 0.05). In Group I, catalase levels were lower at T2 and T4 than they were in Group II (P < 0.05). Conclusion: A small-dose ketamine-propofol combination may be useful in reducing tourniquet-induced ischemia-reperfusion injury in arthroscopic knee surgery. abstract_id: PUBMED:19239987 A meta-analysis of tourniquet assisted arthroscopic knee surgery. The purpose was to compare the intra- and post-operative outcomes of tourniquet-assisted to non-tourniquet-assisted surgery during arthroscopic knee procedures. A systematic review was undertaken of the electronic databases MEDLINE, CINAHL, AMED and EMBASE, in addition to a review of unpublished material and a hand search of pertinent orthopaedic journals. The evidence-base was critically appraised using the Cochrane Bone, Joint and Muscle Trauma Group quality assessment tool. Study heterogeneity was statistically measured using the Chi² and I² statistical tests. When appropriate, a random-effect meta-analysis was undertaken to pool the results of the primary studies assessing the mean difference of each outcome. Nine studies were identified evaluating seven outcome measures and parameters.
Arthroscopic ACL reconstruction knee surgery with a tourniquet experienced fewer operative visualisation difficulties than surgery without a tourniquet. There was no significant difference between tourniquet and non-tourniquet arthroscopic knee surgery for all other outcomes. The evidence-base exhibited a number of methodological limitations. There is limited evidence to suggest that a tourniquet assists in arthroscopic knee surgery. The methodological quality of the present evidence-base remains weak. Further study is required to answer this research question. abstract_id: PUBMED:30625688 Effect of fentanyl on hemodynamic changes connected with a thigh tourniquet during knee arthroscopic surgery. Background: The use of a tourniquet can produce pain and an increase in blood pressure. It is known that fentanyl reduces central sensitization; however, its effect on the blood pressure increase due to a tourniquet is unknown. We therefore investigated the effect of fentanyl on tourniquet-induced changes in mean arterial blood pressure (MBP), heart rate (HR), and cardiac index (CI). Methods: Patients of ASA physical status I and II who were scheduled for knee arthroscopic surgery using a tourniquet were assigned to a control group (n = 30) or a fentanyl group (n = 30). Anesthesia was maintained with enflurane, N2O and O2. Fentanyl (1.5 µg/kg) was injected 10 min before inflation of the tourniquet in the fentanyl group. Changes in MBP, HR, and CI were measured before and 10, 20, 30, 40, 50, and 60 min after inflation of the tourniquet. Results: There were no differences in the baseline values. MBP was increased at 40, 50, and 60 min in the control group. At 60 min, MBP was lower in the fentanyl than the control group. HR was decreased at 10 min in the fentanyl group. CI was decreased in all groups after tourniquet inflation. At 60 min, CI was decreased more in the control group than in the fentanyl group. Conclusions: Fentanyl injection prior to tourniquet inflation can attenuate the tourniquet-induced hemodynamic changes in knee arthroscopic surgery patients. abstract_id: PUBMED:38059212 The optimized tourniquet versus no tourniquet in total knee arthroplasty. Analysis of muscle injury, functional recovery, and knee strength. Background: A tourniquet is widely used in total knee replacement surgery because it reduces intraoperative hemorrhage and provides a comfortable surgical area for the surgeon. It is possible that its use could lead to impaired postoperative functional and motor recovery, as well as local and systemic complications. Our goal was to compare the outcomes of total knee replacement without a tourniquet against an optimized tourniquet protocol, consisting of tourniquet inflation before skin incision and deflation after cementing, with a cuff pressure 100 mmHg above systolic blood pressure, and without postoperative articular suction drains. We hypothesized that the optimized tourniquet would result in no additional muscle damage and no impairment of function or knee strength compared with no tourniquet. Methods: In a prospective and randomized study, 60 patients with osteoarthritis were evaluated for total knee replacement, divided into two groups: 'without tourniquet' and 'optimized tourniquet'. Outcomes were mean creatine phosphokinase levels, Knee Society Score and knee isokinetic strength. Data were considered significant when p < 0.05. Results: Creatine phosphokinase levels and functional score were similar between groups.
There were no differences between groups regarding knee extension strength on the operated limbs, although the knee flexors' peak torque in the operated limb in the optimized tourniquet group was significantly higher at 6 months relative to the preoperative and 3-month assessments. Conclusions: Use of the optimized tourniquet protocol in total knee replacement combines the benefits of tourniquet use without compromising functional recovery and without additional muscle damage and strength deficits compared to surgery without its use. Answer: Based on the provided abstracts, there is mixed evidence regarding the benefits of using a tourniquet in arthroscopic surgeries of the knee. Some studies suggest that there is no significant difference in visualization or operative time between the tourniquet and non-tourniquet groups, indicating that patients may not benefit from the routine application of a tourniquet during these procedures (PUBMED:22699853). Additionally, a randomized study found no significant difference in operating times, technical intraoperative difficulties, identification of intraarticular structures, postoperative pain, or complications between surgeries performed with and without a tourniquet, suggesting that its use may be unnecessary (PUBMED:11294428). However, other studies have highlighted potential concerns with tourniquet use. For instance, tourniquet deflation during surgery can decrease the bispectral index (BIS), which is associated with hemodynamic and metabolic changes, although the clinical significance of this finding is not fully understood (PUBMED:30625755). Tourniquet-induced ischemia has been observed through a significant decrease in tissue oxygen saturation (StO2) during inflation, but no significant clinical relationships were found between changes in StO2 and blood biochemical markers of ischemic injuries (PUBMED:36494737). Moreover, there is a significant correlation between the amount of pulmonary emboli and tourniquet time, suggesting that longer tourniquet times may increase the risk of acute pulmonary embolism (PUBMED:11524355). Some studies have also investigated the effects of medications on tourniquet-induced ischemia-reperfusion injury, with one study finding that a small-dose ketamine-propofol combination may be useful in reducing this injury during arthroscopic knee surgery (PUBMED:24444988). Another study found that fentanyl injection prior to tourniquet inflation can attenuate tourniquet-induced hemodynamic changes (PUBMED:30625688). In terms of functional outcomes, one study found that an optimized tourniquet protocol in total knee replacement did not result in additional muscle damage or functional or knee strength impairment compared to surgery without a tourniquet (PUBMED:38059212). However, this study was on total knee arthroplasty, not arthroscopic surgery, so the findings may not be directly applicable.
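As a purely numerical illustration, the regression reported in PUBMED:11524355 (Ae = 0.1 × Ttq - 1.0, where Ae is the embolic area as a percentage of the right-atrial area and Ttq is tourniquet time in minutes) can be turned into a small Python helper. The clamping at zero is our addition, since the fitted line goes negative for tourniquet times under 10 minutes:

```python
# Point-estimate sketch of the emboli vs. tourniquet-time regression (PUBMED:11524355).
def estimated_emboli_percent(tourniquet_minutes: float) -> float:
    """Predicted embolic area (% of right-atrial area) after tourniquet release."""
    return max(0.0, 0.1 * tourniquet_minutes - 1.0)

for minutes in (30, 60, 90, 120):
    print(f"{minutes:3d} min tourniquet -> ~{estimated_emboli_percent(minutes):.1f}% of RA area")
```

With r = 0.795 the fit explains roughly 63% of the variance, and extrapolating beyond the tourniquet times actually observed in that 30-patient series would not be justified.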
Instruction: Association of cognitive function and serum uric acid: Are cardiovascular diseases a mediator among women? Abstracts: abstract_id: PUBMED:27114200 Association of cognitive function and serum uric acid: Are cardiovascular diseases a mediator among women? Background: Several studies reported an association between concentrations of serum uric acid and cognitive function, but the evidence is contradictory. It is known that uric acid is associated with cardiovascular diseases, especially among women. Stratifying by sex and history of cardiovascular disease may clarify whether uric acid is an independent risk factor for cognitive dysfunction. Methods: A population-based study was conducted in the German State of Saarland. A subgroup of participants aged ≥70 years underwent a comprehensive assessment of cognitive function. Linear regression models and restricted cubic spline functions were used to assess the association of uric acid with cognitive performance in 1144 study participants. Results: High levels of uric acid were associated with worse cognitive performance among women (-0.57; 95% CI: -1.10 to -0.04) but not among men (-0.12; 95% CI: -0.64 to 0.39). The association was much stronger among the subgroup of women with cardiovascular diseases (-1.91; 95% CI: -3.15 to -0.67) and also revealed a dose-response relationship in this subgroup. Conclusions: Serum uric acid showed an inverse association with cognitive function among women and the association was amplified by the presence of cardiovascular disease. These results highlight the importance of stratifying by sex and cardiovascular disease in future studies on uric acid and cognition. abstract_id: PUBMED:19036766 Serum uric acid and cognitive function and dementia. Uric acid is a risk factor for cardiovascular disease, as well as a major natural antioxidant, prohibiting the occurrence of cellular damage. The relation between uric acid and cognitive decline, in which both vascular mechanisms and oxidative stress are thought to play a role, is unknown. Therefore we assessed the relation between serum uric acid levels and the risk of subsequent dementia in a prospective population-based cohort study among 4618 participants aged 55 years and over. Additionally, we investigated the relation between serum uric acid and cognitive function later in life (on average 11.1 years later) in a subsample of 1724 participants who remained free of dementia during follow-up. All analyses were adjusted for age, sex and cardiovascular risk factors. Our data showed that only after correcting for several cardiovascular risk factors, higher serum uric acid levels were associated with a decreased risk of dementia (HR, adjusted for age, sex and cardiovascular risk factors, 0.89 [95% confidence interval (CI) 0.80-0.99] per standard deviation (SD) increase in uric acid). In participants who remained free of dementia, higher serum uric acid levels at baseline were associated with better cognitive function later in life, for all cognitive domains that were assessed [adjusted difference in Z-score (95% CI) per SD increase in uric acid 0.04 (0.00-0.07) for global cognitive function; 0.02 (-0.02 to 0.06) for executive function; and 0.06 (0.02-0.11) for memory function], but again only after correcting for cardiovascular risk factors. We conclude that notwithstanding the associated increased risk of cardiovascular disease, higher levels of uric acid are associated with a decreased risk of dementia and better cognitive function later in life.
abstract_id: PUBMED:33762656 Detrimental effects of long-term elevated serum uric acid on cognitive function in rats. Uric acid is a powerful antioxidant. However, its elevated levels in association with cardiovascular diseases predispose individuals to cognitive impairment. Uric acid's effects on cognition may be related to its concentration and exposure period. We aimed to explore the effects of long-term elevated serum uric acid on cognitive function and the hippocampus. Rats were randomly divided into four groups: NC, M1, M2 and M3 groups. Hyperuricemia was established in rats at week 6 and maintained until week 48 in groups M1, M2 and M3. The rats' spatial learning and memory abilities were assessed by the Morris Water Maze test at weeks 0, 6, 16, 32, and 48. After week 48, we observed pathological changes in the right hippocampal CA1 and CA3 regions, and measured levels of oxidative stress, inflammatory cytokines, and β-amyloid peptide in the left hippocampus. Starting from week 6, serum uric acid levels ranked M3 > M2 > M1 > NC. The rats in the M3 and M2 groups had longer escape latencies, longer mean distances to the platform, more extensive pathological damage, a stronger inflammatory response, and higher oxidative stress and β-amyloid peptide levels than those in the NC group. No significant differences were observed between the M1 and NC groups. In addition, we also found that oxidative stress significantly correlated with tumour necrosis factor-α and β-amyloid peptide. Long-term elevated serum uric acid was significantly associated with cognitive impairment risk. Oxidative stress, tumour necrosis factor-α and β-amyloid peptide may mediate the pathogenesis of the cognitive impairment induced by uric acid. The detrimental effect of elevated serum uric acid on cognitive function was probably expressed when the serum uric acid concentration reached a certain level. abstract_id: PUBMED:23170844 Association of serum uric acid level with muscle strength and cognitive function among Chinese aged 50-74 years. Aim: Previous studies have shown that uric acid (UA) has strong anti-oxidant properties, and that high circulating levels of UA are prospectively associated with improved muscle function and cognitive performances in elderly Caucasians. We carried out a replication study in elderly Chinese using a cross-sectional design. Methods: Data from 2006 individuals aged 50-74 years who participated in a population-based cross-sectional survey in Qingdao, China, were analyzed. Hand grip strength was measured in kilograms by using an electronic dynamometer. The sit-to-stand (STS) test time was used to represent lower limb strength. The Mini-Mental State Examination (MMSE) was used to estimate the participants' cognitive function. Lifestyle, comorbidities and laboratory measures were considered as potential confounders. Multiple linear regression models and binary logistic regression were fitted to find the association of UA with strength measures and cognitive performances. Results: Participants were grouped according to UA tertiles (<257.75 mmol/L, ≥ 257.75 and ≤ 359.00 mmol/L, >359.00 mmol/L). Hand grip strength significantly increased across UA tertiles (26.4 ± 8.5 kg; 30.1 ± 10.5 kg; 35.0 ± 11.4 kg; P<0.001), and prevalence of cognitive disorder declined across UA tertiles (7.9%, 4.9%, 3.1%; P=0.012).
After adjusting for potential confounders, high UA level remained significantly associated with high grip strength (P=0.023) and decreased risk of cognitive disorder with an OR of 1.002 (95% CI 1.000-1.004; P=0.022). However, UA level was not significantly associated with STS time (P=0.780). Conclusions: Our findings suggested that notwithstanding the associated increased risk of cardiovascular disease, UA might play a protective role in aging-associated decline in muscle strength and cognitive function. abstract_id: PUBMED:17620953 Association between serum uric acid and prehypertension among US adults. Background: Experimental evidence supports a causal role of serum uric acid in hypertension development. Previous epidemiologic studies demonstrated an association between uric acid and hypertension; however, data from non-Caucasian ethnicities are limited. Currently there are few data available on the association between serum uric acid level and clinically relevant blood pressure (BP) categories earlier in the disease continuum, when hypertension prevention efforts may be applicable. We examined the association between serum uric acid and prehypertension in a nationally representative sample of US adults. Methods: Cross-sectional study among 4,817 National Health and Nutrition Examination Survey 1999-2002 participants aged ≥18 years without hypertension. The main outcome of interest was the presence of prehypertension (systolic BP 120-139 mmHg or diastolic BP 80-89 mmHg) (n = 1913). Results: Higher serum uric acid levels were positively associated with prehypertension, independent of smoking, body mass index (BMI), diabetes, kidney function and other confounders. The multivariable odds ratio (OR) [95% confidence intervals (CI)] comparing quartile 4 of uric acid (>356.9 micromol/l) to quartile 1 (<237.9 micromol/l) was 1.96 (1.38-2.79), P trend = 0.0016. This association persisted in separate analysis among men and women. The results were consistent in subgroup analyses by categories of race-ethnicity, education, age, smoking and BMI. In nonparametric models, the positive association between serum uric acid and prehypertension appeared to be present across the full range of uric acid, without any threshold effect. Conclusions: Higher serum uric acid levels are associated with prehypertension in a nationally representative sample of US adults, free of cardiovascular disease (CVD) and hypertension. abstract_id: PUBMED:21165292 Association of renal manifestations with serum uric acid in Korean adults with normal uric acid levels. Several studies have reported that hyperuricemia is associated with the development of hypertension and cardiovascular disease. Increasing evidence also suggests that hyperuricemia may have a pathogenic role in the progression of renal disease. Paradoxically, uric acid is also widely accepted to have antioxidant activity in experimental studies. We aimed to investigate the association between glomerular filtration rate (GFR) and uric acid in healthy individuals with a normal serum level of uric acid. We examined renal function determined by GFR and uric acid in 3,376 subjects (1,896 men; 1,480 women; aged 20-80 yr) who underwent medical examinations at Gangnam Severance Hospital from November 2006 to June 2007. Determinants for renal function and uric acid levels were also investigated.
In both men and women, GFR was negatively correlated with systolic and diastolic blood pressures, fasting plasma glucose, total cholesterol, uric acid, log-transformed C-reactive protein, and log-transformed triglycerides. In multivariate regression analysis, total uric acid was found to be an independent factor associated with estimated GFR in both men and women. This result suggests that uric acid appears to contribute to renal impairment in subjects with a normal serum level of uric acid. abstract_id: PUBMED:32387846 Associations of serum uric acid with incident dementia and cognitive decline in the ARIC-NCS cohort. Introduction: Elevated serum uric acid (SUA) is associated with cardiovascular risk factors, which often contribute to dementia and dementia-like morbidity, yet several cross-sectional studies have shown protective associations with cognition, which would be consistent with other work showing benefits of elevated SUA through its antioxidant properties. Methods: We studied 11,169 participants free of dementia and cardiovascular disease from the Atherosclerosis Risk in Communities (ARIC) cohort. SUA was measured in blood samples collected in 1990-92, the baseline for this study (age range 47-70 years). Incident dementia was ascertained based on clinical assessments in 2011-13 and 2016-17, surveillance based on dementia screeners conducted over telephone interviews, hospitalization discharge codes, and death certificates. Cognitive function was assessed up to four times between 1990-92 and 2016-17. We estimated the association of SUA, categorized into quartiles, with incidence of dementia using Cox regression models adjusting for potential confounders. The association between cognitive decline and SUA was assessed using generalized estimating equations. Results: Over a median follow-up period of 24.1 years, 2005 cases of dementia were identified. High baseline SUA was associated with incident dementia (HR, 1.29; 95% CI, 1.12, 1.47) when adjusted for sociodemographic variables. However, after further adjustment including cardiovascular risk factors, this relationship disappeared (HR, 1.03; 95% CI, 0.88, 1.21). Elevated baseline SUA was associated with faster cognitive decline even after further adjustment (25-year global z-score difference, -0.149; 95% CI, -0.246, -0.052). Conclusion: Higher levels of mid-life SUA were associated with faster cognitive decline, but not necessarily with higher risk of dementia. abstract_id: PUBMED:26232927 Effects of serum uric acid levels on the arginase pathway in women with metabolic syndrome. Background: Elevated serum uric acid levels and increased arginase activity are risk factors for cardiovascular diseases (CVD). The aim of the present study was to investigate effects of serum uric acid levels on the arginase pathway in women with metabolic syndrome (MetS). Methods: Serum arginase activity, and nitrite and uric acid levels were measured in 48 women with MetS and in 20 healthy controls. The correlation of these parameters with components of MetS was also evaluated. Results: Our data show statistically higher arginase activity and uric acid levels but lower nitrite levels in women with MetS compared to controls.
Serum uric acid levels were negatively correlated with HDL cholesterol and nitrite levels, and positively with body mass index, waist-to-hip ratio, triglyceride and total cholesterol levels, systolic blood pressure, Homeostasis Model Assessment-Insulin Resistance Index, serum arginase activity, and LDL-cholesterol levels in women with MetS. Conclusion: Results of the present study suggest that serum uric acid levels may contribute to the pathogenesis of MetS through a process mediated by the arginase pathway, and serum arginase activity and nitrite and uric acid levels can be used as indicators of CVD in women with MetS. abstract_id: PUBMED:17275005 Association between serum uric acid level and peripheral arterial disease. Background: Higher serum uric acid levels have been implicated in the development and progression of atherosclerotic cardiovascular disease. However, it is not clear whether serum uric acid levels are related to subclinical measures of cardiovascular disease, including peripheral arterial disease (PAD). We examined the association between increasing serum uric acid levels and PAD in the US general population. Methods: A cross-sectional study was conducted among 3987 National Health and Nutrition Examination Survey 1999-2002 participants aged ≥40 years, without clinical history of cardiovascular disease. The main outcome of interest was PAD, defined as ankle-brachial index <0.9 (n=229). Results: Higher serum uric acid levels were positively associated with PAD, independent of smoking, body mass index (BMI), hypertension, diabetes, serum total cholesterol, serum creatinine, and other confounders. The multivariable odds ratio (OR) [95 percent confidence intervals (CI)] comparing serum uric acid levels ≥75th percentile (≥380.8 micromol/L) to uric acid levels <50th percentile (<315.6 micromol/L) was 1.62 (1.08-2.44), p-trend=0.015. This association persisted in separate analysis among men and women. Further, the results were consistent in subgroup analyses by categories of age, current smoking, BMI, and diabetes mellitus. Conclusions: Higher serum uric acid levels are associated with PAD in the US general population. These results suggest that PAD may be an important indicator of the reported association between higher serum uric acid levels and clinical cardiovascular disease. Future prospective studies are required to clarify the temporal nature of this relationship. abstract_id: PUBMED:22916059 Serum uric acid as an index of impaired renal function in congestive heart failure. Background: Hyperuricemia is frequently present in patients with heart failure. Many pathological conditions, such as tissue ischemia, renal function impairment, cardiac function impairment, metabolic syndrome, and inflammatory status, may impact uric acid (UA) metabolism. This study aimed to assess their potential relations to UA metabolism in heart failure. Methods: We retrospectively assessed clinical characteristics and echocardiographic, renal, metabolic and inflammatory variables selected on the basis of previous evidence of their involvement in cardiovascular diseases and UA metabolism in a large cohort of randomly selected adults with congestive heart failure (n = 553). By clustering of indices, those variables were explored using factor analysis. Results: In factor analysis, serum uric acid (SUA) formed part of a principal cluster of renal functional variables which included serum creatinine (SCr) and blood urea nitrogen (BUN).
Univariate correlation coefficients between variables of patients with congestive heart failure showed that the strongest correlations for SUA were with BUN (r = 0.48, P < 0.001) and SCr (r = 0.47, P < 0.001). Conclusions: There was an inverse relationship between SUA levels and measures of renal function in patients with congestive heart failure. The strong correlation between SUA and SCr and BUN levels suggests that elevated SUA concentrations reflect an impairment of renal function in heart failure. Answer: The association between cognitive function and serum uric acid (SUA) levels appears to be influenced by the presence of cardiovascular diseases (CVD), particularly among women. High levels of SUA have been associated with worse cognitive performance in women, and this association is much stronger among women with CVD, suggesting a dose-response relationship in this subgroup (PUBMED:27114200). This indicates that CVD may act as a mediator in the relationship between SUA and cognitive function among women. Furthermore, while uric acid is a known risk factor for CVD and a major natural antioxidant, studies have shown contradictory evidence regarding its relationship with cognitive decline. Some studies suggest that higher SUA levels are associated with a decreased risk of dementia and better cognitive function later in life, but only after adjusting for cardiovascular risk factors (PUBMED:19036766). This implies that the protective effects of SUA on cognition may be overshadowed by its association with increased cardiovascular risk. In animal studies, long-term elevated SUA levels have been linked to cognitive impairment, with oxidative stress, inflammation, and β-amyloid peptide potentially mediating the pathogenesis of cognitive impairment induced by uric acid (PUBMED:33762656). This suggests that the detrimental effects of elevated SUA on cognitive function may become apparent when SUA concentrations reach a certain level. In contrast, some cross-sectional studies have found that high SUA levels are associated with improved muscle function and cognitive performances in elderly populations, suggesting a protective role of UA in aging-associated decline in muscle strength and cognitive function (PUBMED:23170844). Overall, the relationship between SUA and cognitive function is complex and may be influenced by the presence of CVD, particularly among women. While SUA has antioxidant properties that could confer cognitive benefits, its association with CVD risk factors may negate these benefits and even contribute to cognitive decline (PUBMED:27114200; PUBMED:19036766; PUBMED:33762656; PUBMED:23170844).
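The adjustment strategy these cohort studies share (hazard of dementia modeled on standardized uric acid plus cardiovascular covariates, as in PUBMED:32387846) can be sketched with the lifelines library. The file name and every column name below are assumptions, not part of any cited study:

```python
# Hedged sketch of a Cox proportional-hazards analysis of SUA and dementia.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical cohort extract, one row per participant

# Standardize uric acid so the hazard ratio is per SD increase, as reported.
df["uric_acid_sd"] = (df["uric_acid"] - df["uric_acid"].mean()) / df["uric_acid"].std()

cols = ["uric_acid_sd", "age", "sex", "smoking", "hypertension",
        "diabetes", "bmi", "followup_years", "dementia"]  # covariates assumed numeric/0-1 coded

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="followup_years", event_col="dementia")
cph.print_summary()  # exp(coef) for uric_acid_sd is the adjusted HR per SD increase
```

Refitting the model with and without the cardiovascular covariates reproduces the logic of PUBMED:19036766 and PUBMED:32387846, where the uric acid association changed direction or disappeared once those factors were adjusted for.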
Instruction: Micro vs. macrodiscectomy: Does use of the microscope reduce complication rates? Abstracts: abstract_id: PUBMED:27866034 Micro vs. macrodiscectomy: Does use of the microscope reduce complication rates? Objective: A single level discectomy is one of the most common procedures performed by spine surgeons. While some practitioners utilize the microscope, others do not. We postulated that improved visualization with an intraoperative microscope decreases complications and inferior outcomes. Methods: A multicenter surgical registry was utilized for this retrospective cohort analysis. Patients with degenerative spinal diagnoses undergoing elective single level discectomies from 2010 to 2014 were included. Univariate analysis was performed comparing demographics, patient characteristics, operative data, and outcomes for discectomies performed with and without a microscope. Multivariable logistic regression analysis was then applied to compare outcomes of micro- and macrodiscectomies. Results: Query of the registry yielded 23,583 patients meeting inclusion criteria. On univariate analysis, the microscope was used in a greater proportion of the oldest age group as well as Hispanic white patients. Patients with any functional dependency, history of congestive heart failure, chronic corticosteroid use, or anemia (hematocrit <35%) also had greater proportions of microdiscectomies. Thoracic region discectomies more frequently involved use of the microscope than cervical or lumbar discectomies (25.0% vs. 16.4% and 13.0%, respectively, p<0.001). Median operative time (IQR) was increased in microscope cases [80 min (60, 108) vs. 74 min (54, 102), p<0.001]. Of the patients who required reoperation within 30 days, 2.5% of them had undergone a microdiscectomy compared to 1.9% who had undergone a macrodiscectomy, p=0.044. On multivariable analysis, microdiscectomies were more likely to have an operative time in the top quartile of discectomy operative times, ≥103 min (OR 1.256, 95% CI 1.151-1.371, p<0.001). With regard to other multivariable outcome models for any complication, surgical site infection, dural tears, reoperation, and readmission, no significant association with microdiscectomy was found. Conclusions: The use of the microscope was found to significantly increase the odds of longer operative time, but not influence rates of postoperative complications. Thus, without evidence from this study that the microscope decreases complications, the use of the microscope should be at the surgeon's discretion, validating the use of both macro and micro approaches to discectomy as acceptable standards of care. abstract_id: PUBMED:33608938 Cellulose-based micro-fibrous materials imaged with a home-built smartphone microscope. Micro-fibrous materials are among the most widely explored materials and form a major component of composite materials. In resource-limited settings, an affordable and easy-to-implement method that can characterize such materials would be important. In this study, we report on a smartphone microscopic system capable of imaging a sample in transmission mode. As a proof of concept, we implemented the method to image handmade paper samples, a cellulosic micro-fibrous material of different thicknesses. With a 1 mm diameter ball lens, individual cellulose fibers, fiber web, and micro-porous regions were resolved in the samples. Imaging performance of the microscopic system was also compared with a commercial bright field microscope.
For thin samples, we found the image quality comparable to the commercial system. Also, the diameter of cellulose fiber measured with both methods was found to be similar. We also used the system to image surfaces of a three-ply surgical facemask. Finally, we explored the application of the system in the study of chemically induced fiber damage. This study suggested that the smartphone microscope system can be an affordable alternative for imaging thin micro-fibrous material in resource-limited settings. abstract_id: PUBMED:29576906 Lower complication and reoperation rates for laminectomy rather than MI TLIF/other fusions for degenerative lumbar disease/spondylolisthesis: A review. Background: Utilizing the spine literature, we compared the complication and reoperation rates for laminectomy alone vs. instrumented fusions including minimally invasive (MI) transforaminal lumbar interbody fusion (TLIF) for the surgical management of multilevel degenerative lumbar disease with/without degenerative spondylolisthesis (DS). Methods: Epstein compared complication and reoperation rates over 2 years for 137 patients undergoing laminectomy alone for 2-3 level (58 patients) and 4-6 level (79 patients) procedures for lumbar stenosis with/without DS. Results showed no new postoperative neurological deficits, no infections, no surgery for adjacent segment disease (ASD), 4 patients (2.9%) who developed intraoperative cerebrospinal fluid (CSF) fistulas, no readmissions, and just 1 reoperation (postoperative day 7). These rates were compared to other literature for lumbar laminectomies vs. fusions (particularly MI TLIF) addressing pathology comparable to that listed above. Results: Some studies in the literature revealed an average 4.8% complication rate for laminectomy alone vs. 8.3% for decompressions/fusion; at 5 postoperative years, reoperation rates were 10.6% vs. 18.4%, respectively. Specifically, complication rates in the MI TLIF literature ranged from 7.7% to 23.0% and included up to an 8.3% incidence of wound infections, 6.1% durotomies, 9.7% permanent neurological deficits, and a 20.2% incidence of new sensory deficits. Reoperation rates (1.6-6%) for MI TLIF addressed instrumentation failure (2.3%), cage migration (1.26-2.4%), cage extrusions (0.8%), and misplaced screws (1.6%). The learning curve (i.e., the number of cases required for a surgeon to become proficient) for MI TLIF was the first 33-44 cases. Furthermore, hospital costs for lumbar fusions were 2.6-fold greater than those for laminectomy alone, with overall neurosurgeon reimbursement quoted in one study as high as $142,075 per year. Conclusions: The spinal literature revealed lower complication and reoperation rates for lumbar laminectomy alone vs. higher rates for instrumented fusion, including MI TLIF, for degenerative lumbar disease with/without DS. abstract_id: PUBMED:26328211 A case of micro-percutaneous nephrolithotomy with macro complication. Percutaneous nephrolithotomy is accepted as the standard management approach for kidney stones that are either refractory to extracorporeal shock wave lithotripsy or are >2 cm in diameter. The recently developed micro-percutaneous nephrolithotomy (microperc) technique provides intrarenal access under full vision using an optic instrument of smaller caliber. A lesser amount of bleeding has been reported with the use of this method. Here we present a case of a bleeding complication on postoperative day 15 after a microperc procedure used to treat a left kidney stone.
The complication led to retention of bloody urine in the bladder and required transfusion of 5 units of whole blood. abstract_id: PUBMED:33793021 Micro-morphological identification study on Cordyceps sinensis (Berk.) Sacc. and its adulterants based on stereo microscope and desktop scanning electron microscope. The Chinese Materia Medica Cordyceps sinensis (called "Dongchongxiacao" in Chinese) has been used as a tonic for nearly 600 years in Traditional Chinese Medicine and is recorded in the Chinese Pharmacopoeia. This drug is rare and precious, which has in turn led to the emergence of adulterants derived from the same genus, Cordyceps. The adulterants commonly found in the market are Cordyceps gunnii (called "Gunichongcao" in Chinese), Cordyceps liangshanensis (called "Liangshanchongcao" in Chinese), and Cordyceps gracilis (called "Xinjiangchongcao" in Chinese). This study combined a desktop scanning electron microscope and a stereo microscope to distinguish C. sinensis from the above three adulterants, focusing especially on the differing characters of their caterpillar parts. Referring to the professional entomological literature, the micro-morphological features, including the cuticle of the abdomen and the planta of the abdominal prolegs, were observed, photographed, and expressed based on the description of macroscopic characters. The identification method studied in this article is more convenient, quick, and environmentally friendly. abstract_id: PUBMED:24578204 Scanning electron microscope image signal-to-noise ratio monitoring for micro-nanomanipulation. As an imaging system, the scanning electron microscope (SEM) plays an important role in autonomous micro-nanomanipulation applications. When it comes to the sub-micrometer range and at high scanning speeds, the images produced by the SEM are noisy and need to be evaluated or corrected beforehand. In this article, the quality of images produced by a tungsten gun SEM has been evaluated by quantifying the level of image signal-to-noise ratio (SNR). In order to determine the SNR, an efficient and online monitoring method is developed based on nonlinear filtering using a single image. Using this method, the quality of images produced by a tungsten gun SEM is monitored under different experimental conditions. The derived results demonstrate the developed method's efficiency in SNR quantification and illustrate the imaging quality evolution in SEM.
There was no difference in the number of glomeruli under light microscopy using 18G relative to 16G needles (24 ± 11 vs. 25 ± 11, p = 0.265), whereas more glomeruli were found in the 16G group than in the 18G group using immunofluorescence microscopy (3 ± 2 vs. 5 ± 3, p < 0.05). There was no significant difference in the adequate sample rates between the 18G group and the 16G group (90.28% vs. 93.94%, p = 0.298). Minor complications, including the incidence of lumbar or abdominal pain (4.17% vs. 7.07%, p = 0.57), gross hematuria (4.17% vs. 3.54%, p = 0.729), and perinephric hematoma without symptoms (4.17% vs. 1.52%, p = 0.195), were not significantly different between the 18G and 16G groups. In the 16G group, 2 cases of serious complications occurred: severe gross hematuria requiring blood transfusion and retroperitoneal hematoma requiring surgery. No serious complications were observed in the 18G group, although there was no significant difference in serious complication rates between the 18G and 16G groups (0% vs. 1.02%, p = 1). Conclusion: There was no significant difference in the number of glomeruli, adequate sample rates, or complication rates when using 18G or 16G needles to perform renal biopsy, and the use of an 18G needle with a smaller diameter did not affect the pathological diagnosis or classification of IgA nephropathy and lupus nephritis. abstract_id: PUBMED:26103045 Atomic force microscope caliper for critical dimension measurements of micro and nanostructures through sidewall scanning. A novel atomic force microscope (AFM) dual-probe caliper for critical dimension (CD) metrology has been developed. The caliper is equipped with two facing tilted optical fiber probes (OFPs), each of which can be used independently to scan either sidewall of micro and nanostructures. The OFP tip, with a length up to 500 μm (aspect ratio 10:1, apex diameter ⩾10 nm), has the unique capability of scanning deep trenches and imaging the sidewalls of relatively high steps, with exclusive profiling possibilities. The caliper arms (OFPs) can be accurately aligned with a well-calibrated opening distance. Line width, line edge roughness, line width roughness, groove width and CD angles can be measured through serial scans of adjacent or opposite sidewalls with each probe. The capabilities of the presented AFM caliper have been validated through experimental CD measurements of comb microstructures and the AFM calibration grating TGZ3. abstract_id: PUBMED:26924646 Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope. We present a novel, high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. The image data required for image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, the shape deformation features of the disparity distribution are discussed, and the method of disparity distortion correction is proposed. A polynomial fitting method is applied to correct disparity distortion.
Third, a microscopic vision model is derived, which consists of two parts: the initial vision model and the residual compensation model. We derive the initial vision model by analyzing the direct mapping relationship between object and image points. The residual compensation model is derived from a residual analysis of the initial vision model. The results show that, with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on stereo light microscope (SLM) vision. abstract_id: PUBMED:29871281 Treatment of hemangioma of the larynx by injecting bleomycin with a modified laryngeal micro-injector under the suspension laryngoscope and operating microscope. Objective: To investigate the efficacy and safety of treating laryngeal hemangioma by injecting bleomycin with a self-made laryngeal micro-injector under the suspension laryngoscope and operating microscope. Method: Thirteen patients with hemangioma of the larynx (tumor diameters ranging from 1.2 to 3.8 cm) were admitted and treated by injecting bleomycin with a modified laryngeal micro-injector under the suspension laryngoscope and operating microscope from January 2012 to September 2016. According to the tumor size, 4.5-7.5 mg of bleomycin was injected, and the patients received treatment every four weeks until there were no evident tumor residues. Result: Four of the 13 patients were cured after the first injection, 6 were cured after the second injection, and 3 after the third injection. All cases were followed up without recurrence. Conclusion: Bleomycin injection with a modified laryngeal micro-injector under the suspension laryngoscope and operating microscope proved to be an effective measure, with a precise, minimally invasive, and efficient therapeutic effect in the treatment of laryngeal hemangioma. Answer: The study reported in abstract PUBMED:27866034 investigated whether the use of an intraoperative microscope during single-level discectomies reduces complication rates compared to macrodiscectomy, where a microscope is not used. The retrospective cohort analysis utilized a multicenter surgical registry and included patients with degenerative spinal diagnoses undergoing elective single-level discectomies from 2010 to 2014. The results showed that the use of the microscope significantly increased the odds of longer operative time. However, the study found no significant association between microdiscectomy and rates of postoperative complications, surgical site infection, dural tears, reoperation, and readmission. Therefore, the study concluded that the use of the microscope does not influence rates of postoperative complications and that the choice to use a microscope should be at the surgeon's discretion. Both macro and micro approaches to discectomy are considered acceptable standards of care based on the evidence from this study (PUBMED:27866034).
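The two-stage modeling idea in PUBMED:26924646 above — fit a direct mapping from image measurements to object coordinates, then fit a second model to the residuals of that mapping — can be sketched numerically. The Python sketch below is a hypothetical illustration on synthetic data; the feature set, the polynomial terms, and the mapping itself are assumptions for demonstration, not the authors' actual model.

```python
# Minimal sketch of a direct-mapping fit plus residual compensation.
# Synthetic data only; not the implementation from PUBMED:26924646.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: image features (u, v, disparity d)
# paired with known world coordinates from a calibration sample.
n = 200
features = rng.uniform(-1.0, 1.0, size=(n, 3))  # columns: u, v, d

def true_map(f):
    """Assumed ground-truth mapping, mildly nonlinear in Z."""
    return np.column_stack([
        2.0 * f[:, 0] + 0.3 * f[:, 2],                  # X
        1.8 * f[:, 1] - 0.2 * f[:, 2],                  # Y
        0.05 + 1.5 * f[:, 2] + 0.1 * f[:, 0] * f[:, 1], # Z
    ])

world = true_map(features) + rng.normal(0, 0.002, size=(n, 3))

def design(f):
    """Linear design matrix [1, u, v, d] -- the 'initial vision model'."""
    return np.column_stack([np.ones(len(f)), f])

def design_res(f):
    """Quadratic terms used by the 'residual compensation model'."""
    u, v, d = f[:, 0], f[:, 1], f[:, 2]
    return np.column_stack([np.ones(len(f)), u*v, u*d, v*d, u**2, v**2, d**2])

# Stage 1: least-squares fit of the direct mapping.
A = design(features)
coef1, *_ = np.linalg.lstsq(A, world, rcond=None)
residuals = world - A @ coef1

# Stage 2: fit the residuals with higher-order terms.
B = design_res(features)
coef2, *_ = np.linalg.lstsq(B, residuals, rcond=None)

pred = A @ coef1 + B @ coef2
print("stage-1 RMS error:", np.sqrt((residuals**2).mean(axis=0)))
print("stage-2 RMS error:", np.sqrt(((world - pred)**2).mean(axis=0)))
```

The point of the second stage is that whatever systematic error the simple direct mapping leaves behind (here, the u·v cross-term in Z) is captured by a small correction model, rather than by making the first model more complex.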
Instruction: Is there still a role for measuring serum urea in an age of eGFR? Abstracts: abstract_id: PUBMED:33633199 Epidermal growth factor alleviates the negative impact of urea on frozen-thawed bovine sperm, but the subsequent developmental competence is compromised. Upon insemination, sperm cells are exposed to components of the female reproductive tract (FRT) fluids, such as urea and epidermal growth factor (EGF). It has been shown that both urea and EGF use EGF receptor signaling and produce reactive oxygen species (ROS), which are required at certain levels for sperm capacitation and the acrosome reaction. We therefore hypothesized that during bovine sperm capacitation, high levels of urea and EGF could interfere with sperm function through overproduction of ROS. High-level urea (40 mg/dl urea, equal to 18.8 mg/dl of blood urea nitrogen) significantly increased ROS production and the percentage of TUNEL-positive sperm (sperm DNA fragmentation, sDF), but decreased the HOS test score, progressive motility, acrosome reaction and capacitation. EGF reversed the negative effects of urea on all sperm parameters, with the exception of ROS production and DNA fragmentation, which were higher in urea-EGF-incubated sperm than in control sperm. The developmental competence of oocytes inseminated with urea-EGF-incubated sperm was significantly reduced compared to the control. A close association of ROS production or sDF with 0-pronuclear and sperm non-capacitation rates was found in the network analysis. In conclusion, EGF enhanced urea-reduced sperm motility; however, it failed to reduce urea-increased sperm ROS or sDF levels and to enhance subsequent oocyte competence. The data suggest that any study aiming to improve sperm quality should be followed by a follow-up assessment of the fertilization outcome. abstract_id: PUBMED:15561974 EphA2: expression in the renal medulla and regulation by hypertonicity and urea stress in vitro and in vivo. EphA2, a member of the large family of Eph receptor tyrosine kinases, is highly expressed in epithelial tissue and has been implicated in cell-cell and cell-matrix interactions, as well as cell growth and survival. Expression of EphA2 mRNA and protein was markedly upregulated by both hypertonic stress and elevated urea concentrations in cells derived from the murine inner medullary collecting duct. This upregulation likely required transactivation of the epidermal growth factor (EGF) receptor tyrosine kinase and metalloproteinase-dependent ectodomain cleavage of an EGF receptor ligand, based on pharmacological inhibitor studies. A human EphA2 promoter fragment spanning nucleotides -4030 to +21 relative to the putative EphA2 transcriptional start site was responsive to tonicity but insensitive to urea. A promoter fragment spanning -1890 to +128 recapitulated both tonicity- and urea-dependent upregulation of expression, consistent with transcriptional activation. Neither the bona fide p53 response element at approximately -1.5 kb nor a pair of putative TonE elements at approximately -3 kb conferred the tonicity responsiveness. EphA2 mRNA and protein were expressed at low levels in rat renal cortex but at high levels in the collecting ducts of the renal medulla and papilla. Water deprivation in rats increased EphA2 expression in the renal papilla, whereas dietary supplementation with 20% urea increased EphA2 expression in the outer medulla.
These data indicate that transcription and expression of the EphA2 receptor tyrosine kinase are regulated by tonicity and urea in vitro and suggest that this phenomenon is also operative in vivo. Renal medullary EphA2 expression may represent an adaptive response to medullary hypertonicity or urea exposure. abstract_id: PUBMED:27869742 Design, Synthesis and Structure-Activity Relationships of Novel Diaryl Urea Derivatives as Potential EGFR Inhibitors. Two novel series of diaryl urea derivatives, 5a-i and 13a-l, were synthesized and evaluated for their cytotoxicity against H-460, HT-29, A549, and MDA-MB-231 cancer cell lines in vitro. The 4-aminoquinazolinyl-diaryl urea derivatives 5a-i demonstrated significant activity, and seven of them were more active than sorafenib, with IC50 values ranging from 0.089 to 5.46 μM. In particular, compound 5a exhibited the most potent activity in both cellular (IC50 = 0.15, 0.089, 0.36, and 0.75 μM, respectively) and enzymatic assays (IC50 = 56 nM against EGFR), representing a promising lead for further optimization. abstract_id: PUBMED:38231064 Downregulation of Serum miR-133b and miR-206 Associate with Clinical Outcomes of Progression as Monitoring Biomarkers for Metastasis Colorectal Cancer Patients. Background: Colorectal cancer (CRC) is the third most common cancer in the world. MicroRNA (miRNA; miR) biomarkers can play a role in cancer carcinogenesis and progression. Specific KRAS and EGFR mutations are associated with CRC development, playing a role in controlling cellular processes as epigenetic events. Circulating serum miRs can serve as biomarkers for early diagnosis, monitoring, and prognosis of CRC, but their clinical utility remains unclear. Objective: To determine the potential of circulating serum miR-133b and miR-206 as biomarkers in CRC patients. Methods: Bioinformatic prediction of microRNAs was screened with TargetScanHuman 7.2, miRTar2GO, miRDB, MiRanda, and DIANA-microT-CDS. Forty-four CRC serum samples (19 locally advanced, 23 distant advanced CRC) and 12 normal serum samples were subsequently extracted for RNA isolation, cDNA synthesis, and miR validation. The candidate circulating serum miR-133b and miR-206 were validated, yielding relative expression via quantitative RT-PCR. Relative expression was normalized to the spiked-in internal control and compared to normal samples (set to 1) using the 2-ΔΔCt method. Results: The bioinformatic prediction tools identified 9 KRAS-related miRs (miR-206, miR-155-5p, miR-143-3p, miR-193a-3p, miR-30a-5p, miR-30d-5p, miR-30e-5p, miR-543, and miR-877-5p) and 9 EGFR-related miRs (miR-133b, miR-302a-3p, miR-302b-3p, miR-302d-3p, miR-302e, miR-520a-3p, miR-520b, miR-520c-3p and miR-7-5p). Our results showed significantly decreased expression levels of circulating serum miR-133b and miR-206 in CRC patients (local and advanced metastasis) compared with normal samples (P < 0.05). Conclusion: Circulating serum miR-133b and miR-206 can serve as significant biomarkers for monitoring the clinical outcome of progression in metastatic CRC patients. The link between drug responsiveness in CRC patients and the underlying molecular interventions should be further explored clinically. abstract_id: PUBMED:17689975 Dynamic alteration of soluble serum biomarkers in healthy aging. Dysbalanced production of inflammatory cytokines is involved in immunosenescence in aging.
The age-related changes in the levels of circulating inflammatory mediators and their clinical importance were not investigated until recently. Still, little is known about the influence of aging on circulating levels of many cytokines, chemokines, growth factors, and angiogenic factors. In the present study, we evaluated the effect of aging on 30 different serum biomarkers involved in pro- and anti-inflammatory responses using multianalyte LabMAP Luminex technology. The simultaneous measurement of serological markers was performed in 397 healthy subjects between 40 and 80 years old. We demonstrated an increase in serum interferon-gamma-inducible chemokines (MIG and IP-10), eotaxin (a chemoattractant for eosinophils), and soluble TNFR-II with advancing age. Serum levels of EGFR and EGF, important regulators of cell growth and differentiation, decreased with age in healthy donors. These data suggest novel pathways that may be involved in age-associated immunosenescence. abstract_id: PUBMED:27688180 Design and discovery of 4-anilinoquinazoline-urea derivatives as dual TK inhibitors of EGFR and VEGFR-2. EGFR and VEGFR-2 are involved in pathological disorders and the progression of different kinds of tumors; thus, the combined blockade of the EGFR and VEGFR signaling pathways appears to be an attractive approach to cancer therapy. In this work, a series of 4-anilinoquinazoline derivatives containing substituted diaryl urea or glycine methyl ester moieties were designed and identified as EGFR and VEGFR-2 dual inhibitors. Compounds 19i, 19j and 19l exhibited the most potent inhibitory activities against EGFR (IC50 = 1 nM, 78 nM and 51 nM, respectively) and VEGFR-2 (IC50 = 79 nM, 14 nM and 14 nM, respectively), and they showed good antiproliferative activities as well. Molecular docking established the interaction of 19i with the DFG-out conformation of VEGFR-2, suggesting that these compounds might be type II kinase inhibitors. abstract_id: PUBMED:25259789 Clinical significance of serum epidermal growth factor receptor (EGFR) levels in patients with breast cancer. Epidermal growth factor receptor (EGFR) plays an important role in the pathogenesis of multiple malignancies, and its expression also strongly affects the outcomes of cancer patients. The objective of this study was to determine the clinical significance of serum EGFR levels in breast cancer (BC) patients. A total of 96 patients with a pathologically confirmed diagnosis of BC were enrolled in this study. Serum EGFR levels were determined by the solid-phase sandwich ELISA method. Thirty age- and sex-matched healthy controls were included in the analysis. Median age at diagnosis was 48 years (range: 29-80). Thirty-seven patients (39%) had metastatic disease. The baseline serum EGFR levels were significantly higher than in the healthy control group (p < 0.001). Serum EGFR concentrations were also significantly higher only in patients with ER-negative and triple-negative tumors (p = 0.05 and p = 0.04, respectively). The other known clinical variables, including grade of histology, stage of disease, serum CA 15.3 levels, and response to chemotherapy, were not found to be correlated with serum EGFR concentrations (p > 0.05). Likewise, serum EGFR levels were found to play no prognostic role for survival (p = 0.35). In conclusion, while serum EGFR levels were elevated in BC patients, EGFR level has no predictive or prognostic value in these patients.
abstract_id: PUBMED:30808730 Urea Cycle Sustains Cellular Energetics upon EGFR Inhibition in EGFR-Mutant NSCLC. Mutations in oncogenes and tumor suppressor genes engender unique metabolic phenotypes crucial to the survival of tumor cells. EGFR signaling has been linked to the rewiring of tumor metabolism in non-small cell lung cancer (NSCLC). We have integrated the use of a functional genomics screen and metabolomics to identify metabolic vulnerabilities induced by EGFR inhibition. These studies reveal that following EGFR inhibition, EGFR-driven NSCLC cells become dependent on the urea cycle and, in particular, the urea cycle enzyme CPS1. Combining knockdown of CPS1 with EGFR inhibition further reduces cell proliferation and impedes cell-cycle progression. Profiling of the metabolome demonstrates that suppression of CPS1 potentiates the effects of EGFR inhibition on central carbon metabolism, pyrimidine biosynthesis, and arginine metabolism, coinciding with reduced glycolysis and mitochondrial respiration. We show that EGFR inhibition and CPS1 knockdown lead to a decrease in arginine levels and pyrimidine derivatives, and the addition of exogenous pyrimidines partially rescues the impairment in cell growth. Finally, we show that high expression of CPS1 in lung adenocarcinomas correlated with worse patient prognosis in publicly available databases. These data collectively reveal that NSCLC cells have a greater dependency on the urea cycle to sustain central carbon metabolism, pyrimidine biosynthesis, and arginine metabolism to meet cellular energetics upon inhibition of EGFR. IMPLICATIONS: Our results reveal that the urea cycle may be a novel metabolic vulnerability in the context of EGFR inhibition, providing an opportunity to develop rational combination therapies with EGFR inhibitors for the treatment of EGFR-driven NSCLC. abstract_id: PUBMED:29866023 Cytotoxic and Apoptotic Effects of Novel Pyrrolo[2,3-d]Pyrimidine Derivatives Containing Urea Moieties on Cancer Cell Lines. Background: Pyrrolo[2,3-d]pyrimidines have recently been reported to have anticancer activities through inhibition of different targets such as the Epidermal Growth Factor Receptor (EGFR) tyrosine kinase, Janus Kinase (JAK), mitotic checkpoint protein kinase (Mps1), carbonic anhydrase, and MDM-2. On the other hand, aryl urea moieties, which are found in some tyrosine kinase inhibitors such as sorafenib and linifanib, have attracted recent attention as contributors to anticancer activity. The aims of this paper are to synthesize pyrrolo[2,3-d]pyrimidine derivatives containing urea moieties and to evaluate their anticancer activity against a human lung cancer cell line (A549), a prostate cancer cell line (PC3), a human colon cancer cell line (SW480) and a human breast cancer cell line (MCF-7). Methods: A series of new pyrrolo[2,3-d]pyrimidines containing urea moieties were synthesized as shown in Scheme 1. The in vitro cytotoxicity of the target compounds was evaluated against SW480, PC3, A549 and MCF-7 human cancer cell lines using an MTT assay. In order to evaluate the mechanism of cytotoxic activity of compounds 9e, 10a and 10b, which had the best cytotoxic activity, an Annexin V binding assay, cell cycle analysis and western blot analysis were performed. Results: Among the target compounds, 10a (IC50 = 0.19 µM) was found to be the most potent derivative against PC3 cells. Compounds 10b and 9e showed strong cytotoxic activity against MCF-7 and A549 cells, with IC50 values of 1.66 µM and 4.55 µM, respectively.
Flow cytometry data suggest that the cytotoxic activity of the compounds on cancer cells might be mediated by apoptosis, revealing a significant increase in the percentage of late apoptotic cells and causing cell cycle arrest at different stages. Western blot analysis of apoptosis markers demonstrated that these compounds induce apoptosis through the intrinsic pathway. Conclusion: Compound 9e displayed the strongest cytotoxicity against the A549 cancer cell line and induced late apoptosis in A549, as confirmed by cell cycle arrest in the G0/G1 phase. In addition, compound 9e reduced expression of the anti-apoptotic protein Bcl-2 and enhanced expression of the pro-apoptotic protein Bax, and it increased caspase-9 and caspase-3 levels as well as cleavage of PARP. These results suggest that compound 9e exerts its cytotoxic effect in A549 cells through activation of the mitochondrial apoptotic pathway. Further studies will be undertaken in our laboratory to improve the cytotoxic activity of compound 9e and to identify the biological targets of 9e responsible for its anticancer activity. abstract_id: PUBMED:12466022 Urea signalling to immediate-early gene transcription in renal medullary cells requires transactivation of the epidermal growth factor receptor. Signalling by physiological levels of urea (e.g. 200 mM) in cells of the mammalian renal medulla is reminiscent of activation of a receptor tyrosine kinase. The epidermal growth factor (EGF) receptor may be transactivated by a variety of G-protein-coupled receptors, primarily through metalloproteinase-dependent cleavage of a membrane-anchored EGF precursor. In the murine inner medullary collecting duct (mIMCD3) cell line, urea (200 mM) induced prompt (1-5 min) tyrosine phosphorylation of the EGF receptor. Pharmacological inhibition of EGF receptor kinase activity with AG1478 or PD153035 blocked urea-inducible transcription and expression of the immediate-early gene Egr-1. AG1478 blocked, either fully or partially, other hallmarks of urea signalling, including Elk-1 activation and extracellular signal-regulated kinase phosphorylation. EGF receptor kinase inhibition also blocked the cytoprotective effect of urea observed in the context of hypertonicity-inducible apoptosis. EGF receptor transactivation was likely attributable to metalloproteinase-dependent ectodomain shedding of an EGF receptor agonist, because both specific and non-specific inhibitors of metalloproteinases blocked the urea effect. Heparin-binding EGF (HB-EGF), in particular, was implicated because the diphtheria toxin analogue and highly specific antagonist of HB-EGF, CRM197, also blocked urea-inducible transcription. In aggregate, these data indicate that signalling in response to urea in renal medullary cells requires EGF receptor transactivation, probably through autocrine action of HB-EGF. Answer: The provided abstracts do not directly address the clinical utility of measuring serum urea in the context of estimated glomerular filtration rate (eGFR) assessments. However, they do provide insights into the biological interactions between urea, epidermal growth factor (EGF), and the EGF receptor (EGFR) in various cellular and physiological contexts, which may have implications for the relevance of serum urea measurements.
For instance, urea has been shown to induce reactive oxygen species (ROS) production and negatively impact sperm parameters, effects that can be partially reversed by EGF, although the developmental competence of oocytes is still compromised (PUBMED:33633199). EphA2 expression, which is regulated by hypertonicity and urea, suggests that urea can influence cellular responses in the renal medulla (PUBMED:15561974). Additionally, urea cycle enzymes, such as CPS1, have been implicated in sustaining cellular energetics upon EGFR inhibition in EGFR-mutant non-small cell lung cancer (NSCLC), indicating a metabolic role for urea in certain cancer contexts (PUBMED:30808730). While these abstracts highlight the complex interplay between urea, EGF, and EGFR in various biological systems, they do not provide direct evidence for or against the clinical measurement of serum urea in the presence of eGFR calculations. eGFR is a widely used clinical tool to assess kidney function and is calculated based on serum creatinine levels, age, sex, and race. Serum urea (or blood urea nitrogen, BUN) is another marker of kidney function that can be affected by factors such as hydration status, protein intake, and catabolic state. In clinical practice, both serum urea and eGFR can provide valuable information about kidney function. Serum urea may still be useful in certain clinical scenarios, such as evaluating the hydration status of a patient, assessing for urea cycle disorders, or in conjunction with eGFR to provide a more comprehensive picture of renal function. However, the decision to measure serum urea should be based on the specific clinical context and the information that the clinician seeks to obtain.
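To make the answer's closing points concrete, the sketch below shows the standard urea-to-BUN conversion (BUN is the nitrogen fraction of urea, 28/60 by molecular weight, which reproduces, to rounding, the 40 mg/dl urea ≈ 18.8 mg/dl BUN figure quoted in PUBMED:33633199) alongside an illustrative eGFR calculation. The CKD-EPI 2009 creatinine equation is used here as one common example of how eGFR is derived from creatinine, age, sex, and race; treat this as a sketch, not clinical software.

```python
# Illustrative helpers only; formulas are standard but simplified here.
def urea_to_bun_mg_dl(urea_mg_dl: float) -> float:
    """Convert serum urea to blood urea nitrogen (BUN).

    BUN counts only the nitrogen mass of urea: two N atoms (28 g/mol)
    out of urea's 60 g/mol, i.e. BUN = urea * 28/60.
    """
    return urea_mg_dl * 28.0 / 60.0

def egfr_ckd_epi_2009(scr_mg_dl: float, age: float,
                      female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2), CKD-EPI 2009 creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Matches the conversion quoted in PUBMED:33633199 (~18.8 mg/dL BUN):
print(round(urea_to_bun_mg_dl(40.0), 1))   # -> 18.7
print(round(egfr_ckd_epi_2009(1.1, 60, female=False, black=False), 1))
```

Note that later eGFR refits (e.g., the 2021 race-free CKD-EPI equation) change the coefficients but not the structure of the calculation, which is one reason serum urea remains a complementary rather than redundant measurement.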
Instruction: Endovascular treatment of ruptured abdominal aortic aneurysms: is now EVAR the first choice of treatment? Abstracts: abstract_id: PUBMED:21991869 Considerations about the endovascular treatment of infrarenal aortic aneurysms. The introduction of endovascular repair of aortic aneurysms (EVAR) into clinical practice by Parodi was a milestone in the development of aortic surgery. This minimally invasive procedure promised significantly lower postoperative mortality and complication rates. Despite numerous controversies, the endovascular technique was widely accepted and rapidly implemented in clinical practice. However, the first prospective randomized trials showed that the benefit to patients was limited and that this innovative approach leads to particular postoperative problems. Owing to continuous improvement of endovascular devices and increasing experience with endovascular treatment, many of the early problems have been resolved. The new generation of endografts makes the endovascular approach possible for aneurysms with shorter or angulated proximal necks. However, anatomical suitability is still the most important limiting factor for endovascular treatment. Accurate analysis of suitability, proper choice of the device and correct implantation allow good postoperative results. The significance of specific complications must be recognized in order to make the right surgical decision. abstract_id: PUBMED:34980332 Clinical Analysis of the Treatment of Iliac Limb Occlusion Following Endovascular Abdominal Aortic Aneurysm Repair. Objective: To explore the causes and treatment strategies of iliac limb occlusion after endovascular abdominal aortic aneurysm repair (EVAR). Methods: Patients receiving EVAR at PUMC Hospital from January 2015 to December 2020 were retrospectively analyzed. Sixteen (2.7%) cases of iliac limb occlusion were identified, among which 6, 9, and 1 cases underwent surgical bypass, endovascular or hybrid procedures, and conservative treatment, respectively. Results: Fifteen cases were successfully treated. During the 10.6-month follow-up, 2 cases receiving hybrid treatment underwent femoral-femoral bypass due to re-occlusion of the iliac limb. Conclusions: Iliac limb occlusion mostly occurs in the acute phase after EVAR, and endovascular or hybrid treatment can be the first choice for iliac limb occlusion. Attention to risk factors is suggested for prevention. abstract_id: PUBMED:24232039 Endovascular treatment of ruptured abdominal aortic aneurysms: is now EVAR the first choice of treatment? Objective: This study was designed to evaluate the effectiveness of endovascular treatment (EVAR) for ruptured abdominal aortic aneurysms (rAAAs). Methods: Between September 2005 and December 2012, 44 patients with rAAA suitable for endovascular repair underwent emergency EVAR. We did not consider hemodynamic instability to be a contraindication for EVAR. Results: Successful stent-graft deployment was achieved in 42 patients, whereas 2 required open surgical conversion. The overall 30-day mortality was 10 of 44 patients (5/34 in stable patients, 5/10 in unstable patients). Postoperative complications were observed in 7 of 44 patients (16%): 5 patients developed abdominal compartment syndrome requiring decompressive laparotomy; 1 patient developed bowel ischemia; 1 patient had limb ischemia, and 1 had hemodynamic shock. Mean length of intensive care unit stay was 2.9 (range 2–8) days, and mean length of hospital stay was 8.6 (range 0–18) days.
At a mean follow-up of 22.2 (range 1–84) months, the overall incidence of endoleak was 23.5%: 1 type I and 7 type II endoleaks. Conclusions: Our study demonstrates that EVAR of rAAA is associated with acceptable mortality and morbidity rates in dedicated centers. abstract_id: PUBMED:35737000 Abdominal aortic aneurysms-open vs. endovascular treatment: Decision-making from the perspective of the vascular surgeon. Clinical/methodical Issue: In the last 20 years, the treatment of abdominal aortic aneurysms has essentially evolved from surgical to minimally invasive endovascular treatment. Achievements: There are still a number of clinical situations that make surgical intervention useful or even necessary. This underlines the importance of interdisciplinary vascular centers for the treatment of complex aortic pathologies and their sequelae. Practical Recommendations: In the following article, the arguments for the choice of procedure for the treatment of infrarenal aortic aneurysms are discussed and the recommendations of various guidelines are compared. abstract_id: PUBMED:30321148 State of the art of the problem concerning endovascular treatment of abdominal aortic aneurysms of infrarenal localization. The diagnosis and treatment of abdominal aortic aneurysms (AAA) remain important problems because of the high proportion of this pathology in population morbidity and mortality, the tendency of these indices to increase, and the high lethality when complications develop. Endovascular treatment of aortic aneurysms is one of the most rapidly developing methods of treatment in vascular surgery. Over the last two decades this type of treatment has played an important part in the armamentarium of the vascular surgeon and is often considered primary treatment for patients with AAA of infrarenal localization. Nevertheless, the long-term efficacy and reliability of this method have been disputed. These disputes are based on findings from various studies showing that the advantages of endovascular treatment over open surgical treatment are completely leveled out after 6-8 years. The main disadvantage of endovascular treatment is the necessity of repeat interventions during long-term follow-up. However, in a series of studies, repeat interventions in both the surgical and endovascular treatment groups were either not taken into account or not specifically studied. It should also be taken into consideration that the first European studies were carried out with first-generation grafts, some of which are no longer used. Therefore, further studies remain necessary. Perhaps new generations of devices will be able to decrease the frequency of repeat interventions and thereby improve the overall results of endovascular treatment. The possibilities of endovascular treatment of AAAs will continue to expand, owing to the development of X-ray equipment and software as well as various auxiliary technologies. abstract_id: PUBMED:28115748 Acute aorta, overview of acute CT findings and endovascular treatment options. Acute aortic pathologies include acute aortic syndrome (aortic dissection, intramural hematoma, penetrating aortic ulcer), impending rupture, aortic aneurysm rupture and aortic trauma. Acute aortic syndrome, aortic aneurysm rupture and aortic trauma are life-threatening conditions requiring prompt diagnosis and treatment.
The basic imaging modality for the "acute aorta" is CT angiography, with typical findings for each of these aortic pathologies. Based on the CT, it is possible to classify aortic diseases, and anatomical classifications are essential for the planning of treatment. Currently, endovascular treatment is the method of choice for acute diseases of the descending thoracic aorta and is increasingly indicated for patients with ruptured abdominal aortic aneurysms. abstract_id: PUBMED:21420829 Endovascular treatment of abdominal aortic aneurysm after previous left pneumonectomy: a sound choice. Surgical treatment of abdominal aortic aneurysm after previous pneumonectomy is a challenge because of impaired respiratory function and increased surgical risks. Endovascular aneurysm repair in anatomically suitable high-surgical-risk patients offers excellent short-term results and provides good protection from aneurysm-related death. In this article, we report a successful endovascular repair of an infrarenal aortic aneurysm in a patient with a previous left pneumonectomy. abstract_id: PUBMED:28835057 Comparative analysis of open surgical repair and endovascular aortic repair in the treatment of abdominal aortic aneurysms implicating the visceral arteries. Objective: To evaluate the value of open surgical repair and endovascular aortic repair in the treatment of abdominal aortic aneurysms that implicate the visceral arteries. Methods: From January 2012 to October 2016, 26 patients were reviewed. According to the treatment, they were divided into an open surgery group (n=7) and an endovascular repair group (n=19), and the characteristics and follow-up data of the two groups were analyzed. Results: In the open surgery group, all 7 patients were men, with a median age of 58 (41-62) years; 1 patient was older than 75 years. In the endovascular repair group, there were 19 patients (14 men) with a median age of 72 (66-76) years; 8 patients were older than 75 years. Patients in the endovascular repair group were significantly older than those in the open surgery group (Z=-13.06, P<0.05). Complication rates in the open surgery and endovascular repair groups were 57.14% and 15.79%, respectively; the difference was not statistically significant (χ²=3.13, P>0.05). During follow-up, the diameters of the stented aneurysms were smaller than before. Conclusions: In the treatment of abdominal aortic aneurysms implicating the visceral arteries, open surgery is mainly suitable for low-risk patients, but its complication rate is high. By contrast, endovascular aortic repair, with its low trauma and high safety, is becoming the first choice. abstract_id: PUBMED:28273223 Endovascular therapeutic options for the treatment of aortoiliac aneurysms. About 20% of patients with abdominal aortic aneurysms have associated iliac aneurysms. Distal sealing during the endovascular treatment of aortoiliac aneurysms is a challenge that has led to the emergence of several technical options over the years. Internal iliac artery embolization is associated with the risk of ischemic complications, such as gluteal necrosis, lower limb neurological deficit, colonic ischemia, impotence and gluteal claudication. This article summarizes the technical options for endovascular treatment of aortoiliac aneurysms with different approaches to preserving the patency of the internal iliac arteries. abstract_id: PUBMED:34093683 Immediate outcome of endovascular treatment of ruptured juxtarenal aneurysm with parallel stents.
Rupture of an abdominal aortic aneurysm is an event with a high mortality rate, and its treatment is a medical emergency. Endovascular treatment of these aneurysms has become established as a minimally invasive alternative to classical open surgery and is now the first-choice option. However, 20 to 50% of patients with abdominal aortic aneurysms do not have anatomy favorable for endovascular treatment because of a short aneurysm neck or because visceral branches are involved by the aneurysm. We report the case of a 70-year-old patient who underwent endovascular repair of a ruptured juxtarenal aneurysm with deployment of parallel stents in the renal arteries (chimney technique). Clinical data and details of the procedure are reported. Technical success was achieved and there were no postoperative complications. Answer: Endovascular treatment of ruptured abdominal aortic aneurysms (rAAAs) has become increasingly favored in clinical practice. According to a study evaluating the effectiveness of endovascular treatment (EVAR) for rAAAs, EVAR is associated with acceptable mortality and morbidity rates in dedicated centers, suggesting that it could be considered the first choice of treatment for rAAAs (PUBMED:24232039). Furthermore, endovascular treatment has been established as a minimally invasive alternative to classical open surgery and is now the first-choice option for the treatment of ruptured juxtarenal aneurysms, as reported in a case of successful endovascular repair with parallel stents (PUBMED:34093683). The shift towards endovascular treatment is supported by the fact that it is the method of choice for acute diseases of the descending thoracic aorta and is increasingly indicated for patients with ruptured abdominal aortic aneurysms (PUBMED:28115748). Additionally, endovascular aneurysm repair has been shown to be a sound choice for patients with high surgical risk, such as those with a history of pneumonectomy, offering excellent short-term results and protection from aneurysm-related death (PUBMED:21420829). Comparative analyses have also indicated that endovascular aortic repair is highly suitable for the treatment of abdominal aortic aneurysms implicating the visceral arteries, especially in older patients, due to its low trauma and high safety profile, making it a preferred option over open surgery (PUBMED:28835057). In conclusion, the evidence suggests that EVAR has become the first choice of treatment for ruptured abdominal aortic aneurysms, particularly in centers with the necessary expertise and in patients with suitable anatomy for the procedure. This preference is due to the minimally invasive nature of the procedure, lower morbidity and mortality rates, and the increasing experience and improvement in endovascular devices and techniques (PUBMED:21991869, PUBMED:24232039, PUBMED:34093683, PUBMED:28115748, PUBMED:21420829, PUBMED:28835057).
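As a small worked check on the arithmetic behind this answer, the snippet below recomputes the 30-day mortality proportions reported in PUBMED:24232039 (10/44 overall, 5/34 in stable patients, 5/10 in unstable patients) and attaches Wilson 95% confidence intervals. The choice of statsmodels and of the Wilson method is an assumption for illustration; the original paper does not report intervals this way.

```python
# Recompute the reported 30-day mortality proportions with 95% CIs.
from statsmodels.stats.proportion import proportion_confint

groups = {"overall": (10, 44), "stable": (5, 34), "unstable": (5, 10)}
for name, (deaths, n) in groups.items():
    p = deaths / n
    lo, hi = proportion_confint(deaths, n, alpha=0.05, method="wilson")
    print(f"{name}: {deaths}/{n} = {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The wide interval for the unstable group (n = 10) illustrates why single-center series like this one support, but cannot settle, the question of EVAR as first-line therapy in hemodynamically unstable patients.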
Instruction: Long-term outcomes of pancreas after kidney transplantation in small centers: is it justified? Abstracts: abstract_id: PUBMED:25131071 Long-term outcomes of pancreas after kidney transplantation in small centers: is it justified? Background: Currently, the long-term advantages of pancreas transplantation (PT) are debated, particularly in patients receiving pancreas after kidney (PAK) allografts. The United Network for Organ Sharing (UNOS) requires that a transplant center perform a minimum number of PT per year to remain an active PT center. The long-term outcomes and challenges of PAK in small pancreas transplant centers are not well studied. Methods: In this retrospective analysis, we report short- and long-term outcomes in a small center performing 2-9 PT annually. Results: Forty-eight PT (25 simultaneous pancreas and kidney transplantations [SPK], 23 PAK) were performed in our center. Donor and recipient demographics were similar in both groups. All suitable local donors were used for SPK. All organs for PAK transplantation were imported from other UNOS regions. Mean follow-up was 61 ± 46 and 74 ± 46 months for SPK and PAK, respectively. Patient and graft survival rates were similar in the SPK and PAK groups and better than the reported national average. Four patients (11%) died (1 due to trauma, 1 to brain lymphoma, 1 to a ruptured aneurysm, and 1 to an unknown cause). Two patients (4%; 1 SPK, 1 PAK) lost their grafts because of thrombosis on postoperative days 3 and 5 in 2002. No graft thrombosis has occurred since 2002. Seven patients (15%) required reoperation (4 for bleeding, 2 for anastomotic leaks, 1 for small bowel perforation). Two patients (4%) developed post-transplantation lymphoproliferative disease. Five patients (11%) experienced cytomegalovirus antigenemia, which responded well to antiviral therapy. Conclusions: Compared with outcomes for diabetic patients on dialysis, current SPK and PAK short- and long-term results are favorable even in a small PT center. Therefore, unless there is a contraindication, PT should be offered to all type 1 diabetic patients with end-stage renal disease at the time of kidney transplantation or afterward. abstract_id: PUBMED:32493667 Long-term outcomes of adult-size and size-matched kidney transplants in small pediatric recipients. Introduction: Adult-size kidneys are usually used for kidney transplantation in small pediatric recipients, but the influence of graft size on transplant outcome remains controversial. Our aim is to compare the long-term outcomes of using adult-size and size-matched kidneys in small pediatric recipients. Materials And Methods: Since 1999, 61 of 226 kidney transplants were performed in recipients weighing <20 kg with 5 years of follow-up. Patients were analyzed according to the graft size received: (group-A) adult-size (n = 32), (group-B) size-matched (n = 29). Kidney size (KS), glomerular filtration rate (GFR), proteinuria and rejection were compared between groups at transplant time (T0), at one (T1), two (T2), and five years (T5), and at the end of follow-up (TF) (median follow-up 8.47 (0-17) years). Results: Mean KS was significantly different between groups at T0 (A: 11.3 ± 1.1 cm, B: 8.8 ± 0.9 cm; pT0<0.01); group-B evidenced graft growth, reaching sizes similar to group-A at T5 (A: 11.7 ± 1 cm, B: 11.2 ± 1 cm; pT5 = 0.13) and TF (A: 12.2 ± 1.1 cm, B: 12.4 ± 1.2 cm; pTF = 0.63), and group-A showed slight graft growth at TF (pT0-TF<0.01).
Mean Schwartz-GFR at T0 was greater in group-A (138 ± 33 mL/min/1.73 m2) than in group-B (109 ± 34 mL/min/1.73 m2) (pT0 = 0.01); during follow-up, it declined in group-A (T5: 90 ± 27, TF: 71 ± 24 mL/min/1.73 m2; pT0-T5<0.01; pT0-TF<0.01), whereas in group-B it was stable until T5 (104 ± 33 mL/min/1.73 m2; pT0-T5 = 0.54), declining at TF (76 ± 31 mL/min/1.73 m2; pT0-TF<0.01); there were no significant differences at T1, T2, T5, and TF between groups. Similar results were observed for mean Filler-GFR in both groups (Figure). Proteinuria and episodes of rejection were not significantly different between groups during follow-up (p > 0.01; p = 0.23). Graft and patient survival at 5 and 10 years did not show significant differences (p = 0.45; p = 0.10). Discussion: Despite the initial kidney size difference between groups, we have demonstrated that they tended toward the same size during follow-up. Adult-size kidneys presented a slight size increase in the long term, suggesting that they have some growth potential in small recipients, in contrast to previous literature. Mean GFR showed no significant differences between groups in the long term, suggesting that optimal graft perfusion and function can be achieved regardless of graft size. We have demonstrated that there were no significant differences in long-term graft and patient survival; these results are similar to the most recent literature on this topic and differ from the literature of the 1990s and 2000s. Conclusions: Adult-size kidneys may be transplanted to small recipients (<20 kg) with outcomes comparable to size-matched kidneys, with no significant differences in long-term KS, GFR, proteinuria, rejection, or graft or patient survival. abstract_id: PUBMED:11486534 Pancreas and kidney transplantation: long-term metabolic results. Study Aim: Pancreas and kidney transplantation (PKTx) is indicated in uremic patients with insulin-dependent diabetes mellitus (IDDM). The aim of this study was to determine its long-term effect on metabolic control in order to establish the real efficacy of this treatment in diabetic patients. Patients And Method: Among a total experience of 191 pancreas and kidney transplantations, metabolic assessment was performed in 80 patients who underwent PKTx in our center, with both grafts functioning for more than one year. Immunological markers of diabetes mellitus (ICA and GADab) were also evaluated in 50 patients. Results: Basal glycemia and glycosylated hemoglobin (HbA1c) levels throughout follow-up were within the normal range. Hyperinsulinemia was present throughout follow-up until the fourth year. The oral glucose tolerance test (OGTT) was normal in 82.5% of the patients beyond one year after the graft. Over time, no differences were detected in basal glucose and insulin levels or in the areas under the curve (AUC) of glycemia and insulinemia. During follow-up, no differences were found in the fasting insulin resistance index (FIRI), in spite of increasing body weight. ICA were positive in 2 patients before the graft and in 7 after the graft (14%). GADab were positive in 10 patients before the graft and in 11 after the graft (22%). Conclusion: Pancreas and kidney transplantation provides, without any insulin treatment or diet, long-term normalization of glycemic control, as assessed by HbA1c and OGTT, despite the existence of sustained hyperinsulinemia.
Our results strongly suggest that, from a metabolic point of view, pancreas and kidney transplantation is the most efficient treatment for uremic patients with insulin-dependent diabetes mellitus. abstract_id: PUBMED:27203593 Long-term Outcomes for Living Pancreas Donors in the Modern Era. Background: Living donor segmental pancreas transplants (LDSPTx) have been performed selectively to offer a preemptive transplant option for simultaneous pancreas-kidney recipients and to allow a single operation, decreasing the cost of pancreas after kidney transplantation. For solitary pancreas transplants, this option historically provided a better immunologic match. Although short-term donor outcomes have been documented, there are no long-term studies. Methods: We studied postdonation outcomes in 46 segmental pancreas living donors. Surgical complications, risk factors (RF) for development of diabetes mellitus (DM), and quality of life were studied. A risk stratification model (RSM) for DM was created using predonation and postdonation RFs. Recipient outcomes were analyzed. Results: Between January 1, 1994 and May 1, 2013, 46 LDSPTx were performed. Intraoperatively, 5 donors (11%) received a transfusion. Overall, 9 donors (20%) underwent splenectomy. Postoperative complications included 6 (13%) peripancreatic fluid collections and 2 (4%) episodes of pancreatitis. Postdonation, DM requiring oral hypoglycemics was diagnosed in 7 donors (15%) and insulin-dependent DM in 5 donors (11%). An RSM with three predonation RFs (oral glucose tolerance test, basal insulin, fasting plasma glucose) and 1 postdonation RF, a greater than 15% increase in body mass index from preoperative (Δ body mass index >15), predicted all 12 (100%) donors who developed postdonation DM. Quality of life was not significantly affected by donation. Mean graft survival was 9.5 (±4.4) years from donors without and 9.6 (±5.4) years from donors with postdonation DM. Conclusions: LDSPTx can be performed with good recipient outcomes. Donation is associated with donor morbidity, including impaired glucose control. Donor morbidity can be minimized by using the RSM and predonation counseling on lifestyle modifications after donation. abstract_id: PUBMED:21150619 Long-term outcomes after simultaneous pancreas-kidney transplant. Purpose Of Review: Simultaneous pancreas-kidney (SPK) transplantation represents the only proven long-term therapeutic approach for type 1 diabetic, dialysis-dependent patients. This procedure potentially liberates these patients from dialysis and from the need for exogenous insulin replacement. For the first time, data on the long-term natural history of patients receiving SPK have recently been analyzed. In this review, we discuss the outcomes and complications for patients receiving SPK in the context of the current literature. Recent Findings: In our analysis of 1000 SPKs performed at our center, we demonstrated that SPK increases patient survival compared with live-donor kidney-alone or deceased-donor kidney-alone transplantation. The 5-year, 10-year, and 20-year patient survival for SPK recipients was 89%, 80%, and 58%, respectively. Enteric drainage improves quality of life, but not allograft survival, when compared with bladder drainage. After transplantation, approximately 50% of bladder-drained transplants undergo enteric conversion, and late conversion after transplantation is associated with a higher complication rate. Surgical complications are higher in enteric-drained compared with bladder-drained pancreas transplants.
Summary: Selecting the appropriate therapy for a type 1 diabetic recipient with renal failure continues to be a critical decision for programs offering pancreas transplantation. The principles and guidelines at our center are driven by the requirement that the potential benefit of the SPK transplant outweigh the increased morbidity of the surgical procedure and the use of lifelong immunosuppression. Results from long-term studies demonstrating improved patient survival suggest that the treatment of choice for an appropriate type 1 diabetic recipient is an SPK transplant. abstract_id: PUBMED:33194009 Long-term outcomes of laparoscopic versus open donor nephrectomy for kidney transplantation: a meta-analysis. Laparoscopic surgery is widely used for living donor nephrectomy and has demonstrated superiority over open surgery by improving several outcomes, such as length of hospital stay and morphine requirements. The purpose of the present study was to compare the long-term outcomes of open donor nephrectomy (ODN) versus laparoscopic donor nephrectomy (LDN) using meta-analytical techniques. The Web of Science, PubMed and Cochrane Library databases were searched for relevant articles published between 1980 and January 20, 2020. Lists of reference articles retrieved in primary searches were manually screened for potentially eligible studies. Outcome parameters were explored using Review Manager version 5.3. The evaluated outcomes included donor serum creatinine levels, incidence of hypertension or proteinuria at 1 year postoperatively, donor health-related quality of life, donation attitude, and graft survival. Thirteen of the 111 articles fulfilled the inclusion criteria. The LDN group demonstrated 1-year outcomes similar to ODN with respect to serum creatinine levels (weighted mean difference [WMD] -0.02 mg/dL [95% confidence interval (CI) -0.18 to 0.13]; P=0.77), hypertension (odds ratio [OR] 1.21 [95% CI 0.48-3.08]; P=0.68), proteinuria (OR 0.28 [95% CI 0.02-3.11]; P=0.30), and donation attitude (OR 4.26 [95% CI 0.06-298.27]; P=0.50). Donor health-related quality of life and recipient graft survival were also not significantly different between the groups analyzed. Thus, the long-term outcomes of LDN and ODN for living donor kidney transplantation are similar. abstract_id: PUBMED:31628870 Outcomes after simultaneous kidney-pancreas versus pancreas after kidney transplantation in the current era. Simultaneous pancreas and kidney (SPK) and pancreas after kidney (PAK) transplants are both potential options for diabetic ESRD patients. Historically, PAK pancreas graft outcomes were felt to be inferior to SPK pancreas graft outcomes. Little is known about outcomes in the modern era of transplantation. We analyzed our SPK and PAK recipients transplanted between 01/2000 and 12/2016. There were a total of 635 pancreas and kidney transplant recipients during the study period: 611 SPK and 24 PAK. Twelve of the PAK patients received a living donor kidney. There were no significant differences between the two groups in kidney or pancreas graft rejection at 1 year. Similarly, 1-year graft survival for both organs was not different. At last follow-up, uncensored and death-censored graft survival was not statistically different for kidney or pancreas grafts. In addition, in Cox regression analysis, SPK and PAK were associated with similar graft survival. Although the majority of pancreas transplants are in the form of SPK, PAK is an acceptable alternative.
Simultaneous pancreas and kidney transplantation avoids the donor risks associated with live donation, and so may be preferable in regions with short wait times, but PAK with a living donor kidney may be the best alternative in regions with long SPK wait times. abstract_id: PUBMED:18660712 Long-term benefits of pancreas transplantation. Purpose Of Review: Pancreas transplantation has emerged as an effective treatment for patients with diabetes mellitus, especially those with established end-stage renal disease. Surgical and immunosuppressive advances have significantly improved allograft survival. With more recipients enjoying normoglycemia for longer periods of time, the opportunity to study the effects of pancreas transplantation more closely has arisen. This review will focus on these long-term benefits. Recent Findings: The field of pancreas transplantation has historically been limited by a lack of randomized, controlled trials and relatively poor graft survival rates; however, we can still glean many important points from the existing literature. The procedure reduces mortality compared with that of diabetic kidney transplant recipients and waitlisted patients. Improvements in diabetic nephropathy and retinopathy have also been demonstrated. Pancreas transplantation can improve cardiovascular risk profiles, improve cardiac function and decrease cardiovascular events. Lastly, improvements in diabetic neuropathy and quality of life can result from pancreas transplantation. Summary: Pancreas transplantation remains the most effective method to establish durable normoglycemia for patients with diabetes mellitus. Well-designed clinical studies to assess outcomes and adverse events will be of paramount importance in providing optimal care to patients with diabetes mellitus. abstract_id: PUBMED:22186094 Long-term outcome after pancreas transplantation. Purpose Of Review: Pancreas transplantation provides the only proven method to restore long-term normoglycemia in patients with insulin-dependent diabetes mellitus. Although many studies describe the most important risk factors for short-term survival of a pancreas transplant, little information is available about factors that distinguish short-term from long-term graft function. Recent Findings: The analysis of 18,159 pancreas transplants from the International Pancreas Transplant Registry, performed from 25 July 1978 to 31 December 2005, showed an improvement not only in short-term but also in long-term graft function. The most recent 5-year, 10-year and 20-year graft function rates for transplants with the appropriate follow-up time were 80%, 68% and 45%, respectively, for simultaneous pancreas/kidney transplants; 62%, 46% and 16%, respectively, for pancreas after kidney; and 59%, 39% and 12%, respectively, for pancreas transplants alone. Important factors influencing long-term function were those describing the quality of the deceased donor. Pancreas transplants in younger or African-American recipients showed a higher risk of graft failure. Anti-T-cell induction therapy had a significant impact on long-term survival in solitary transplants. Summary: With careful donor selection, not only short-term but also long-term pancreas graft function and, therefore, good metabolic control can be achieved for the diabetic patient. abstract_id: PUBMED:27258580 Long-term outcome after pancreas transplantation: a registry analysis.
Purpose Of Review: Pancreas transplantation provides the only proven method to restore long-term normoglycemia in patients with insulin-dependent diabetes mellitus. Although many studies describe the most important risk factors for short-term survival of a pancreas transplant, more information is needed about factors that distinguish short-term from long-term graft function. Recent Findings: Analysis of 21,328 pancreas transplants from the International Pancreas Transplant Registry, performed from 1984 to 2009 (minimum 5-year follow-up), shows a significant improvement in long-term patient survival and pancreas graft function. Total 5- and 10-year pancreas graft function rates are 73% and 56%, respectively, for simultaneous pancreas-kidney transplants; 64% and 38%, respectively, for pancreas after kidney; and 53% and 36%, respectively, for pancreas transplants alone. The most influential period is the first year posttransplant: recipients who reach this time point with a functioning graft have a much higher probability of excellent long-term graft function. Important factors influencing long-term function were features describing the quality of the deceased donor. Pancreas transplants in younger, high panel-reactive antibody, or African-American recipients also showed an increased risk of early graft failure. Anti-T-cell induction therapy had a significant impact on long-term survival in solitary transplants. Summary: With careful recipient and donor selection and close follow-up in the first year posttransplant, not only good short-term but also long-term pancreas graft function and, therefore, durable metabolic control can be achieved for the diabetic patient. Answer: Performing pancreas after kidney (PAK) transplantation in small centers appears to be justified based on the evidence provided. A retrospective analysis reported favorable short- and long-term results for both simultaneous pancreas and kidney transplantation (SPK) and PAK in a small center performing 2-9 pancreas transplants annually. The patient and graft survival rates were similar in the SPK and PAK groups and were better than the reported national average. The mean follow-up was 61 ± 46 months for SPK and 74 ± 46 months for PAK. The study concluded that unless there is a contraindication, pancreas transplantation should be offered to all type 1 diabetic patients with end-stage renal disease at the time of kidney transplantation or afterward (PUBMED:25131071). Furthermore, long-term metabolic results from another study showed that pancreas and kidney transplantation (PKTx) provides long-term normalization of glycemic control without insulin treatment or diet, as assessed by HbA1c and the oral glucose tolerance test (OGTT), despite sustained hyperinsulinemia. This suggests that PKTx is an efficient treatment for uremic patients with insulin-dependent diabetes mellitus from a metabolic point of view (PUBMED:11486534). Additionally, a review of the long-term benefits of pancreas transplantation indicated that the procedure reduces mortality compared with diabetic kidney transplant recipients and waitlisted patients, improves diabetic nephropathy and retinopathy, improves cardiovascular risk profiles and cardiac function, decreases cardiovascular events, and can result in improvements in diabetic neuropathy and quality of life (PUBMED:18660712).
In summary, the evidence suggests that PAK transplantation in small centers is justified, as it can provide favorable long-term outcomes and significant benefits for type 1 diabetic patients with end-stage renal disease.
Instruction: Nurse-initiated defibrillation: are nurses confident enough? Abstracts: abstract_id: PUBMED:35854651 Haemodialysis patient's adherence to treatment: Relationships among nurse-patient-initiated participation and nurse's attitude towards patient participation. Aims And Objectives: To evaluate the relationship between nurse-patient-initiated participation, nurses' attitudes towards patient participation, and patients' adherence to treatment. Specifically, to (1) explore nurse-patient participation during haemodialysis and quantify the information into measurable indices; (2) determine the haemodialysis patient's adherence to treatment; (3) describe nurses' attitudes towards patient participation; and (4) establish the relationships between nurse-patient-initiated participation, nurses' attitudes towards patient participation and patients' adherence to treatment. Background: To improve haemodialysis patients' health, it is crucial to identify nurses' and patients' factors facilitating adherence to treatment. Design: An exploratory-sequential mixed-methods (quantitative and qualitative) design. Methods: All nurses working at a dialysis ward (n = 30) and their randomly selected patients (n = 102) participated. Qualitative data on nurse-patient-initiated participation were derived from transcribed nurse-patient conversations and quantified for further analyses. Nurses' attitudes towards patient participation were collected via questionnaire, and adherence to treatment via observed reduction in prescribed haemodialysis time. [CONSORT-SPI guidelines]. Results: Content analysis of the conversations indicated that nurse-initiated participation focused on the patient's medical condition, treatment plan and education, while patients initiated more small talk. Non-adherence to treatment was significant (Mean = 0.19 h; SD = 0.33). Regression analyses indicated that nurses' attitude towards participation was negatively linked to patient adherence, while patient-nurse-initiated participation was unrelated. Nurses' attitudes towards patient participation moderated the relationship between nurse-patient-initiated participation and patient adherence: the more positive the attitude towards inclusion, the more negative the link between patient or nurse-initiated participation and patient adherence. Conclusions: The findings provided paradoxical insights: Nurses' positive attitudes towards participation lead them to accept the patient's position for shortening haemodialysis treatment, so that adherence to care decreases. Relevance To Clinical Practice: Nurses require education on negotiating methods to help achieve patient adherence while respecting the patient's opinion. Patients should be educated on how to approach nurses, seeking the information they need. abstract_id: PUBMED:33727062 The effectiveness of nurse-initiated interventions in the Emergency Department: A systematic review. Background: Nurse-initiated interventions potentially provide an opportunity for earlier response for time-sensitive presentations to the Emergency Department, and may improve time-to-treatment, symptomatic relief and patient flow through the department. Objective: To determine the effectiveness of nurse-initiated interventions on patient outcomes in the Emergency Department. Method: The review followed the JBI methodology for reviews of quantitative evidence. Each study was assessed by two independent reviewers and data were extracted from included papers using standardized data extraction tools.
Outcomes of interest included time-to-treatment, relief of acute symptoms, waiting times and admission rates. Results: Twenty-six studies were included in the final review, with a total of 9144 participants. Nine were randomized controlled trials, 17 had a quasi-experimental design. Twelve of the studies involved pediatric patients only and 14 included adult patients only. Interventions, protocols and outcomes were heterogeneous across studies. Overall, nurse-initiated interventions were effective in reducing time-to-analgesia and time-to-treatment for acute respiratory distress, as well as in improving pain relief and decreasing admission rates. Conclusion: To achieve early intervention and timely relief of acute symptoms, nurses should seek to consistently implement nurse-initiated interventions into their care of patients in the Emergency Department. Several findings are made to inform practice; however, future high-quality research with locally specific strategies is required to improve certainty and quality of findings. abstract_id: PUBMED:28462830 A systematic review of the impact of nurse-initiated medications in the emergency department. Background: Nurse-initiated medications are one of the most important strategies used to facilitate timely care for people who present to Emergency Departments (EDs). The purpose of this paper was to systematically review the evidence of nurse-initiated medications to guide future practice and research. Methods: A systematic review of the literature was conducted to locate published studies and Grey literature. All studies were assessed independently by two independent reviewers for relevance using titles and abstracts, eligibility dictated by the inclusion criteria, and methodological quality. Results: Five experimental studies were included in this review: one randomised controlled trial and four quasi-experimental studies conducted in paediatric and adult EDs. The nurse-initiated medications were salbutamol for respiratory conditions and analgesia for painful conditions, which enabled patients to receive the medications quicker by half-an-hour compared to those who did not have nurse-initiated medications. The intervention had no effect on adverse events, doctor wait time and length of stay. Nurse-initiated analgesia was associated with increased likelihood of receiving analgesia, achieving clinically-relevant pain reduction, and better patient satisfaction. Conclusion: Nurse-initiated medications are safe and beneficial for ED patients. However, randomised controlled studies are required to strengthen the validity of results. abstract_id: PUBMED:28363627 Evaluation of a Nurse-Initiated Acute Gastroenteritis Pathway in the Pediatric Emergency Department. Problem: Acute gastroenteritis (AGE) is a common illness treated in the emergency department. Delays in initiating rehydration for children with mild or moderate dehydration from AGE can lead to prolonged ED visits and increased resource utilization that do not provide prognostic value or support family-centered care. The purpose of this quality improvement project was to promote early oral rehydration therapy (ORT) for persons with AGE in an attempt to reduce unnecessary resource utilization and length of stay (LOS). Methods: This prospective quality improvement project used a nurse-initiated waiting room ORT pathway for patients 6 months to 21 years of age who presented to the emergency department with diarrhea with or without vomiting.
Outcomes related to nurse-initiated ORT, intravenous fluid use, laboratory studies or diagnostic imaging, and LOS were measured before and after implementation. Results: Of 643 patients for whom the pathway was initiated, 392 received nurse-initiated care. The proportion of intravenous fluid use was 10.2% lower (odds ratio [OR], 0.43; 95% confidence interval [CI], 0.27-0.68) and laboratory test ordering was 7.4% lower (OR, 0.64; 95% CI, 0.43-0.94) in patients receiving nurse-initiated care. Time to discharge after provider examination was 46 minutes faster in the nurse-initiated care group (P < .001), resulting in an overall LOS reduction by 40 minutes (P < .001). Implications For Practice: Nurse autonomy in using an AGE pathway facilitates evidence-based practice, improves ED efficiency, and decreases resource utilization and LOS. Future research should focus on family satisfaction and ED revisits within 72 hours of discharge. abstract_id: PUBMED:21183524 Nurse-initiated defibrillation: are nurses confident enough? Objectives: To determine the capability of nurses to identify ventricular fibrillation (VF) and ventricular tachycardia (VT) rhythms on an ECG and carry out subsequent defibrillation on their own as soon as they identify and confirm cardiac arrest. Methods: This was a prospective cohort study to determine the capability of emergency department (ED) nurses to recognise VF or pulseless VT correctly and their willingness to perform defibrillation immediately in an ED of a teaching hospital in Hong Kong. A questionnaire was completed before and after a teaching session focusing on the identification of rhythms in cardiac arrest and defibrillation skills. Correct answers for both ECG interpretation and defibrillation decisions scored one point for each question. The differences in mean scores between the pre-teaching and post-teaching questionnaires of all nurses were calculated. Results: 51 pre-teaching and 43 post-teaching questionnaires were collected. There were no statistically significant changes in ECG scores after teaching. For defibrillation scores, there was an overall improvement in the defibrillation decision (absolute mean difference 0.42, p=0.014). Performance was also improved by the teaching (absolute mean difference 0.465, p=0.046), reflected by the combination of both scores. Two-thirds (67%) of nurses became more confident in managing patients with shockable rhythms. Conclusion: Nurses improve in defibrillation decision-making skills and confidence after appropriate brief, focused in-house training. abstract_id: PUBMED:33824736 Ten years of nurse-initiated antiretroviral treatment in South Africa: A narrative review of enablers and barriers. Background: The rollout of nurse-initiated and managed antiretroviral treatment (NIMART) was implemented in 2010 by the National Department of Health (NDoH) in South Africa in response to the large numbers of persons living with HIV who needed treatment. Enabling access to treatment required shifting the task from doctors to nurses, which had its own challenges, barriers and enablers. Objectives: The aim of this narrative is to review content on the implementation of NIMART in South Africa over the period 2010-2020, with a focus on enablers and barriers to the implementation. Method: A comprehensive search of databases, namely, PubMed, Google Scholar and Cumulative Index to Nursing and Allied Health Literature (CINAHL), yielded qualitative, quantitative and mixed-method studies that addressed various topics on NIMART.
Inclusion and exclusion criteria were set and 38 publications met the inclusion criteria for the review. Results: Training, mentorship, tailored tuberculosis (TB) and HIV guidelines, integration of services and monitoring and support have enabled the implementation of NIMART. This resulted in increased knowledge and confidence of nurses to initiate patients on antiretroviral treatment (ART) and decreased time to initiation and loads on referral facilities. Barriers such as non-standardised training, inadequate mentoring, human resource constraints, health system challenges, lack of support and empowerment, and challenges with legislation, policy and guidelines still hinder NIMART implementation. Conclusion: Identifying barriers and enablers will assist policymakers in implementing a structured programme for NIMART in South Africa and improve access, as well as the training and mentoring of professional nurses, which will enhance their competence and confidence. abstract_id: PUBMED:32944762 The implementation process of the Confident Birth method in Swedish antenatal education: opportunities, obstacles and recommendations. Antenatal clinics in western Sweden have recently invested in a birth method called Confident Birth. In this study, we investigate midwives' and first line managers' perceptions regarding the method, and identify opportunities and obstacles in its implementation. Semi-structured individual interviews were conducted with ten midwives and five first line managers working in 19 antenatal clinics in western Sweden. The Consolidated Framework for Implementation Research was used in a directed content analysis approach. Intervention characteristics: perceptions about the Confident Birth method were found to have equipped the midwives with coping strategies that were useful for expecting parents during birth. Outer setting: the method was implemented to harmonize the antenatal education, and provided a means for a birth companionship of choice. Inner setting: time-consuming preparations and insufficient information at all levels affected the implementation. Characteristics of individuals: knowledge of and belief in the method shaped its reception, where trust in the method was seen as an opportunity, while long experience of teaching other birth preparatory methods affected how the Confident Birth method was perceived. Process: having no strategy for ensuring that the core of the method remained intact and no plans for guiding its implementation were major obstacles to successful implementation. The findings speak to the importance of adequate planning, time, information and communication throughout the process to have a successful implementation. Based on lessons learned from this study, we have developed recommendations for successful implementation of interventions, such as the Confident Birth, in antenatal care settings. abstract_id: PUBMED:15572019 Attitudes and perceptions of nurses and doctors to nurse-led and nurse-initiated thrombolysis--an Irish perspective. Unlabelled: Nurse-led and nurse-initiated thrombolysis are strategies utilised within the United Kingdom to reduce delays for patients with acute myocardial infarction (AMI) requiring thrombolytic therapy. Both strategies have been found to reduce delays significantly. A reduction in the delays experienced by patients can increase an individual's long-term survival rate. To date, there appears to be no documented research pertaining to nurse-led and nurse-initiated thrombolysis within the Irish arena.
Aim: To investigate if the attitudes and perceptions of nurses and doctors are positive to nurse-led and nurse-initiated thrombolysis. Methods: A quantitative approach employing a comparative descriptive survey design was utilised. A convenience sample of 75 nurses and 28 doctors was obtained. Findings: The results highlighted that nurse-led and nurse-initiated thrombolysis are potential roles for coronary care nurses. There was a significant difference of opinion between the two professional groups regarding this initiative, with nurses having higher levels of agreement. Nurses were more willing to undertake nurse-led thrombolysis (91%) as compared to nurse-initiated (74%), with years of experience and education appearing to influence this decision. Conclusion: It is suggested that nurse-led thrombolysis is the more favourable role to Irish nurses and doctors. abstract_id: PUBMED:27741385 Improving nurse initiated X-ray practice through action research. Introduction: Due to increasing demands on hospital Emergency Departments (EDs), the role of registered nurses, with additional training, has been extended to include requesting X-ray examinations. The aim of this study was to evaluate nurse practice guidelines for requesting X-rays in the ED setting and to utilise inter-professional learning and change management theory to promote practice improvements. Methods: Three hundred and one nurse-initiated X-ray (NIX) requests were randomly selected between January and March 2012, and reviewed for observance of local department guidelines and quality of clinical history. The results of this preliminary review were used to inform the investigating team in order to improve and support practice. A collaborative educational intervention utilising inter-professional learning and change management theory was implemented with an aim of improving the clinical history provided in NIX requests and development of a new policy to support clinical practice. A second review was repeated from February to April 2014 to evaluate the success of the educational intervention. Results: Observance of NIX guidelines improved from pre-intervention to post-intervention (48% vs. 90%, P < 0.001). Quality of clinical history also significantly improved in all four essential variable criteria: (1) mechanism of injury; (2) injury location; (3) side of injury; and (4) clinical question. Conclusion: This study demonstrates that utilising inter-professional learning and change management theory can contribute to significant improvements in and support clinical practice of NIX in the emergency setting. abstract_id: PUBMED:17983361 Nurse-initiated defibrillation? Reality or rhetoric. Evidence indicates that hospital nurse-initiated defibrillation improves survival following cardiac arrest. Accordingly, hospitals are changing their policies to permit nurses to initiate defibrillation. However, if nurse-initiated defibrillation is to be successfully implemented, nurses' beliefs about this practice need to be understood. Therefore, the aim of this study was to explore the attitudes of rural nurses towards defibrillation to assist in the development of nurse-initiated defibrillation programmes. This cross-sectional study examined the defibrillation beliefs of registered nurses in rural areas. A proportionally stratified sample of registered nurses (n = 436) were drawn from 51 rural acute care hospitals in Australia. Most (n = 224; 52%) of the participants were not permitted to initiate defibrillation.
A one-way between-groups multivariate analysis of variance showed that nurses who were permitted to initiate defibrillation held stronger positive beliefs towards defibrillation than nurses not permitted, F(10,410) = 13.88, p < 0.001. Nurses not permitted were more concerned about the challenge of learning rhythms, incurring litigation and harming the patient or themselves. If it is accepted that defibrillation will become an essential part of all nurses' roles, these beliefs should be explored and integrated into educational programmes as inappropriate beliefs about defibrillation may impede implementation and skills development. Answer: The study "Nurse-initiated defibrillation: are nurses confident enough?" (PUBMED:21183524) aimed to determine the capability of nurses to identify ventricular fibrillation (VF) and ventricular tachycardia (VT) rhythms on an ECG and carry out subsequent defibrillation on their own as soon as they identify and confirm cardiac arrest. The study was conducted in the emergency department (ED) of a teaching hospital in Hong Kong and involved a prospective cohort study design. Nurses completed a questionnaire before and after a teaching session focusing on the identification of rhythms in cardiac arrest and defibrillation skills. The results showed that there were no statistically significant changes in ECG scores after teaching, but there was an overall improvement in the defibrillation decision (absolute mean difference 0.42, p=0.014). Performance was also improved by the teaching (absolute mean difference 0.465, p=0.046), and two-thirds (67%) of nurses became more confident in managing patients with shockable rhythms. The study concluded that nurses improve in defibrillation decision-making skills and confidence after appropriate brief, focused in-house training. This study suggests that with targeted training, nurses can become more confident and capable in initiating defibrillation, which is a critical intervention for patients experiencing VF or pulseless VT in the ED. The improvement in nurses' confidence and decision-making skills regarding defibrillation indicates that educational interventions can effectively prepare nurses for this responsibility. However, the study also implies that ongoing training and support are essential for maintaining and enhancing these skills in clinical practice.
Instruction: Workload projections for surgical oncology: will we need more surgeons? Abstracts: abstract_id: PUBMED:14597452 Workload projections for surgical oncology: will we need more surgeons? Background: Over the next two decades, the US population will experience dramatic growth in the number and relative proportion of older individuals. The aim of this study was to quantify the effect of these changes on the demand for oncological procedures. Methods: The 2000 Nationwide Inpatient Sample and the 1996 National Survey of Ambulatory Surgery were used to compute age-specific incidence rates for oncological procedures of the breast, colon, rectum, stomach, pancreas, and esophagus. Procedure rates were combined with census projections for 2010 and 2020 to estimate the future utilization of each procedure. Results: By 2020, the number of patients undergoing oncological procedures is projected to increase by 24% to 51%. The bulk of growth in procedures is derived from outpatient procedures, but significant growth will also be seen in inpatient procedures. Conclusions: The aging of the population will generate an enormous growth in demand for oncological procedures. If a shortage of surgeons performing these procedures does occur, the result will inevitably be decreased access to care. To prevent this from happening, the ability of surgeons to cope with an increased burden of work needs to be critically evaluated and improved. abstract_id: PUBMED:31916090 Workload Differentiates Breast Surgical Procedures: NSM Associated with Higher Workload Demand than SSM. Background: Breast surgery has evolved with more focus on improving cosmetic outcomes, which requires increased operative time and technical complexity. Implications of these technical advances in surgery for the surgeon are unclear, but they may increase intraoperative demands, both mentally and physically. We prospectively evaluated mental and physical demand across breast surgery procedures, and compared surgeon ergonomic risk between nipple-sparing (NSM) and skin-sparing mastectomy (SSM) using subjective and objective measures. Methods: From May 2017 to July 2017, breast surgeons completed modified NASA-Task Load Index (TLX) workload surveys after cases. From January 2018 to July 2018, surgeons completed workload surveys and wore inertial measurement units to evaluate their postures during NSM and SSM cases. Mean angles of surgical postures, ergonomic risk, survey items, and patient factors were analyzed. Results: Procedural duration was moderately related to surgeon frustration, mental and physical demand, and fatigue (p < 0.001). NSMs were rated 23% more physically demanding (M = 13.3, SD = 4.3) and demanded 28% more effort (M = 14.4, SD = 4.6) than SSMs (M = 10.8, SD = 4.7; M = 11.8, SD = 5.0). Incision type was a contributing factor in workload and procedural difficulty. Left arm mean angle was significantly greater for NSM (M = 30.1 degrees, SD = 6.6) than SSMs (M = 18.2 degrees, SD = 4.3). A higher musculoskeletal disorder risk score for the trunk was significantly associated with higher surgeon physical workload (p = 0.02). Conclusion: Nipple-sparing mastectomy required the highest surgeon-reported workload of all breast procedures, including physical demand and effort. Objective measures identified the surgeons' left upper arm as being at the greatest risk for a work-related musculoskeletal disorder, specifically from performing NSMs.
abstract_id: PUBMED:31342382 The 2018 Compensation Survey of the American Society of Breast Surgeons. Background: There is limited compensation data for breast surgery benchmarking. In 2018, the American Society of Breast Surgeons conducted its second membership survey to obtain updated compensation data as well as information on practice type and setting. Methods: In October 2018, a survey was emailed to 2676 active members. Detailed information on compensation was collected, as well as data on gender, training, years in and type of practice, percent devoted to breast surgery, workload, and location. Descriptive statistics and multivariate analyses were performed to analyze the impact of various factors on compensation. Results: The response rate was 38.2% (n = 1022, of which 73% were female). Among the respondents, 61% practiced breast surgery exclusively and 54% were fellowship trained. The majority of fellowship-trained surgeons within 5 years of completion of training (n = 126) were female (91%). Overall, mean annual compensation was $370,555. On univariate analysis, gender, years of practice, practice type, academic position, ownership, percent breast practice, and clinical productivity were associated with compensation, whereas fellowship training, region, and practice setting were not. On multivariate analysis, higher compensation was significantly associated with male gender, years in practice, number of cancers treated per year, and wRVUs. Compensation was lower among surgeons who practiced 100% breast compared with those who did a combination of breast and other surgery. Conclusions: Differences in compensation among breast surgeons were identified by practice type, academic position, ownership, years of practice, percent breast practice, workload, and gender. Overall, mean annual compensation increased by $40,000 since 2014. abstract_id: PUBMED:34697863 A system for equitable workload distribution in clinical medical physics. Background: Clinical medical physics duties include routine tasks, special procedures, and development projects. It can be challenging to distribute the effort equitably across all team members, especially in large clinics or systems where physicists cover multiple sites. The purpose of this work is to study an equitable workload distribution system in radiotherapy physics that addresses the complex and dynamic nature of effort assignment. Methods: We formed a working group that defined all relevant clinical tasks and estimated the total time spent per task. Estimates used data from the oncology information system, a survey of physicists, and group consensus. We introduced a quantitative workload unit, "equivalent workday" (eWD), as a common unit for effort. The sum of all eWD values adjusted for each physicist's clinical full-time equivalent yields a "normalized total effort" (nTE) metric for each physicist, that is, the fraction of the total effort assigned to that physicist. We implemented this system in clinical operation. During a trial period of 9 months, we made adjustments to include tasks previously unaccounted for and refined the system. The workload distribution of eight physicists over 12 months was compared before and after implementation of the nTE system. Results: Prior to implementation, differences in workload of up to 50% existed between individual physicists (nTE range of 10.0%-15.0%). During the trial period, additional categories were added to account for leave and clinical projects that had previously been assigned informally. 
In the 1-year period after implementation, the individual workload differences were within 5% (nTE range of 12.3%-12.8%). Conclusion: We developed a system to equitably distribute workload and demonstrated improvements in the equity of workload. A quantitative approach to workload distribution improves both transparency and accountability. While the system was motivated by the complexities within an academic medical center, it may be generally applicable for other clinics. abstract_id: PUBMED:20479543 Workload modeling for teletherapy unit. Aims: This study aims to derive a radiotherapy workload model using a prospectively collected dataset of patient and treatment information from a teletherapy treatment unit. Materials And Methods: Information about all individual radiotherapy treatments was collected for two weeks from the Phoenix unit in our department. This information included diagnosis, treatment site, treatment time, fields per fraction, technique, use of blocks and wedges. Data were collected for two weeks (10 working days) in January 2008. During this time, 45 patients were treated with 450 fractions of external beam radiotherapy in the Phoenix unit. Results: The mean fraction duration, irradiation time and setup time were 9.55 minutes, 1.84 minutes and 7.66 minutes, respectively. A mathematical workload model was derived using the average fraction duration time, total irradiation time and setup time of different types of treatment. A simple software program (Workload Calculation Chart) was also constructed in Microsoft Excel using the derived algorithm. The model-based software program was tested and applied for one year and was found to describe the workload of the teletherapy unit effectively. Conclusion: The proposed methodology for workload modeling of a teletherapy unit, together with the workload calculation software, is effective for quantitatively planning and calculating an optimal workload that satisfies both the patient care administrator and the radiation therapy technologists. abstract_id: PUBMED:11277144 Colorectal surgery in rural Australia: scars; a surgeon-based audit of workload and standards. Background: The collection and measurement of colorectal surgical workload, case management and clinical indicators have been mainly based on metropolitan specialist institutions. The aim of the present study was to examine the workload and standards of colorectal surgery in rural Australia. Methods: Sixty-nine rural general surgeons in Victoria, Albury and South Australia were invited to complete a questionnaire for each transabdominal colorectal operation performed over a 12-month period from 1 May 1996. Data were collected on comorbidity, operation detail, pathology, complications and intention to use adjuvant cancer therapy. Results: Sixty-two surgeons contributed 877 data forms. The average patient age was 65 years, with 60% having pre-existing disease. One-third of operations were emergency presentations, of which bowel obstruction was the most common. An anastomosis was performed in 675 patients, of whom 22 (3.3%) had a clinical anastomotic leak. For low rectal anastomosis the leak rate was 8.9%. Two-thirds of patients had colorectal cancer and 42% of these cancer patients had advanced (Australian clinicopathological stage C or D) disease. The perioperative mortality rate was 4.6% but in the presence of more than two comorbidities it was 16.4%. Mortality was higher with emergency presentations (8.3%), particularly in patients older than 80 years (15.2%).
Conclusions: The study sampled a very high percentage of rural colorectal surgery performed during the audit period. Colorectal surgery clinical indicators were comparable to other Australian studies. Anti-thrombotic and adjuvant therapy were identified as two areas requiring further education. Major surgery is being performed regularly in south-eastern rural Australia at a consistently high standard by surgeons who live and work in their rural community. abstract_id: PUBMED:20236155 Low abdominoperineal excision rates are associated with high-workload surgeons and lower tumour height. Is further specialization needed? Aim: Wide variation, independent of disease extent and case mix, has been observed in the rate of use of abdominoperineal excision (APE) for rectal cancer. Previous analyses have, however, been confounded by failure to adjust for the location of the tumour within the rectum. This population-based study sought to examine whether variations in tumour height explained differences in APE use. Method: Information was obtained on all individuals who underwent a major resection for a rectal tumour diagnosed between 1998 and 2005 across the Northern and Yorkshire regions of the UK. Median distances from the dentate line were calculated for all tumours excised by APE and compared with rates of use of APE between specialists and nonspecialist surgeons and across hospital trusts. Results: The completeness of pathological reporting of height of tumour within the rectum was variable. A low rate of APE use was associated with a lower median distance of tumours from the dentate line. Specialist colorectal cancer surgeons performed fewer APEs on patients with a tumour located lower in the rectum than nonspecialist surgeons. Conclusion: Variations in the height of tumour did not explain the variation in APE use. Specialist high-volume surgeons undertook fewer APEs and those they performed were closer to the dentate line than low-volume nonspecialist surgeons. abstract_id: PUBMED:10325683 Gastrointestinal surgical workload in the DGH and the upper gastrointestinal surgeon. Workload implications of upper gastrointestinal (UGI) subspecialisation within the district general hospital (DGH) have been assessed by prospective data collection over a 12-month period in a DGH with six general surgeons serving a population of 320,000. The single UGI surgeon (UGIS) performed all ten oesophageal resections, ten of 11 gastric resections for malignancy and all eight pancreatic operations. He also performed 91 of the 182 cholecystectomies, 164 of the 250 endoscopic retrograde cholangiopancreatograms (ERCP) and all endoscopic procedures for the palliation of unresected oesophageal tumours. The UGIS was responsible for the management of all patients with severe pancreatitis, yet he also performed 51 colorectal resections over the 12-month period. Successful management of severely ill patients with upper GI disease requires consultant supervision on a day-to-day basis. If such UGI disease is to be managed in the DGH, two surgeons with UGI experience will be required if high quality care and reasonable working conditions are to be achieved. Such UGIS will continue to perform some colorectal surgery. abstract_id: PUBMED:17711498 Stress and burnout among colorectal surgeons and colorectal nurse specialists working in the National Health Service. 
Background: It has been suggested that changes to the organization of the National Health Service (NHS) and clinical practices in dealing with cancer are associated with increased stress and burnout in healthcare professionals. The aim of this study, therefore, was to evaluate stress and burnout in colorectal surgeons (surgeons) and colorectal clinical nurse specialists (nurses) working in the NHS. Method: A list of all consultant surgeons and nurses was obtained from The Association of Coloproctology of Great Britain and Ireland. Participants were sent a questionnaire booklet consisting of standardized measures [General Health Questionnaire (GHQ), Maslach Burnout Inventory (MBI), Coping Questionnaire] and various ad hoc questions to obtain information about demographics, cancer workload and job satisfaction. Independent predictors of clinically significant distress and burnout were identified using logistic regression. Results: Four hundred and fifty-five surgeons and 326 nurses were sent booklets. The response rate was 55.6% in surgeons and 54.3% in nurses. The mean age of the nurses was lower than that of surgeons (42.8 vs 47.7, P < 0.001). Psychiatric morbidity was similar in the surgeons and nurses as assessed using the GHQ (30.2% and 30.3% respectively). On the MBI, compared with nurses, surgeons had significantly higher levels of depersonalization (17.4% vs 7.4%, P = 0.003) and lower personal accomplishment (26.6% vs 14.2%, P = 0.002). Seventy-seven per cent of surgeons and 63.4% of nurses stated their intention to retire before the statutory retirement age. Coping strategies, especially those in which respondents isolated themselves from friends and family, were associated with higher psychiatric morbidity and burnout. Dissatisfaction with work, intention to retire early, intention to retire as soon as affordable and poor training in communication and management skills were also significantly associated with high GHQ scores and burnout in both groups. Discussion: We found high levels of psychiatric morbidity and burnout in this national cohort of surgeons and nurses working in the NHS. However, psychiatric morbidity and burnout were unrelated to cancer workload. Nurses have lower burnout levels than surgeons and this may be related to their different working practices, responsibilities and management structure. abstract_id: PUBMED:27355277 Departmental Workload and Physician Errors in Radiation Oncology. Purpose: The purpose of this work was to evaluate measures of increased departmental workload in relation to the occurrence of physician-related errors and incidents reaching the patient in radiation oncology. Materials And Methods: All data were collected for the year 2013. Errors were defined as forms received by our departmental process improvement team; of these forms, only those relating to physicians were included in the study. Incidents were defined as serious errors reaching the patient requiring appropriate action; these were reported through a separate system. Workload measures included patient volumes and physician schedules and were obtained through departmental records for daily and monthly data. Errors and incidents were analyzed for relation with measures of workload using logistic regression modeling. Results: Ten incidents occurred in the year. The number of patients treated per day was a significant factor relating to incidents (P < 0.003).
However, the fraction of department physicians off-duty and the ratio of patients to physicians were not found to be significant factors relating to incidents. Ninety-one physician-related errors were identified, and the ratio of patients to physicians (rolling average) was a significant factor relating to errors (P < .03). The number of patients and the fraction of physicians off-duty were not significant factors relating to errors. A rapid increase in patient treatment visits may be another factor leading to errors and incidents. All incidents and 58% of errors occurred in months where there was an increase in the average number of fields treated per day from the previous month; 6 of the 10 incidents occurred in August, which had the highest average increase at 26%. Conclusions: Increases in departmental workload, especially rapid changes, may lead to higher occurrence of errors and incidents in radiation oncology. When the department is busy, physician errors may be perpetuated owing to an overwhelmed departmental checks system, leading to incidents reaching the patient. Insights into workload and workflow will allow for the development of targeted approaches to preventing errors and incidents. Answer: Based on the provided abstracts, it appears that there will be a significant increase in the demand for surgical oncology procedures due to the aging population, which will likely necessitate an increase in the number of surgeons. The study in abstract PUBMED:14597452 projects that by 2020, the number of patients undergoing oncological procedures is expected to increase by 24% to 51%, with the bulk of growth derived from outpatient procedures. This growth in demand, coupled with the potential shortage of surgeons, could result in decreased access to care unless the ability of surgeons to cope with an increased burden of work is critically evaluated and improved. Furthermore, the evolution of breast surgery towards improving cosmetic outcomes, which requires increased operative time and technical complexity, has increased intraoperative demands on surgeons both mentally and physically, as indicated in abstract PUBMED:31916090. This suggests that as surgical procedures become more complex, the workload on individual surgeons may also increase, potentially requiring more surgeons to manage the workload effectively. Additionally, abstract PUBMED:31342382 highlights that compensation among breast surgeons varies by practice type, academic position, ownership, years of practice, percent breast practice, workload, and gender. This suggests that workload is a significant factor in the professional lives of surgeons and may influence compensation and career choices, which could impact the supply of surgeons. The abstracts also discuss workload distribution systems in clinical medical physics (PUBMED:34697863) and teletherapy units (PUBMED:20479543), as well as workload and standards in colorectal surgery in rural Australia (PUBMED:11277144) and the association between surgeon workload and surgical procedures (PUBMED:20236155). These studies further emphasize the importance of managing workload in healthcare settings. In conclusion, the projected increase in demand for oncological procedures, the evolving complexity of surgeries, and the current considerations of workload in surgical practice all suggest that there will likely be a need for more surgeons to meet future demands and maintain quality patient care (PUBMED:14597452).
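To make the projection method in PUBMED:14597452 concrete, the sketch below applies age-specific procedure rates to projected population counts and sums over age bands, which is the arithmetic that abstract describes. It is a minimal illustration only: the age bands, rates, and population figures are hypothetical placeholders, not the study's data.

# Minimal sketch of the projection arithmetic described in PUBMED:14597452:
# age-specific procedure rates are applied to projected population counts.
# All age bands, rates, and population figures below are hypothetical
# placeholders chosen for illustration, not the study's data.

rates_per_1000 = {"<50": 0.4, "50-69": 2.1, "70+": 4.8}  # procedures per 1,000 persons per year

population = {  # hypothetical census projections, in persons
    2010: {"<50": 210_000_000, "50-69": 75_000_000, "70+": 27_000_000},
    2020: {"<50": 215_000_000, "50-69": 88_000_000, "70+": 36_000_000},
}

def projected_procedures(year: int) -> float:
    """Sum rate x population over age bands for one projection year."""
    return sum(rates_per_1000[band] / 1000 * population[year][band] for band in rates_per_1000)

for year in (2010, 2020):
    print(year, f"{projected_procedures(year):,.0f} projected procedures")

Under these placeholder inputs, the demographic shift alone raises projected volume by roughly 20%; the same mechanism, applied to the study's actual rates and census projections, underlies the reported 24% to 51% growth.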
Instruction: Does Burch colposuspension cure coital incontinence? Abstracts: abstract_id: PUBMED:15118636 Does Burch colposuspension cure coital incontinence? Objective: The purpose of this study was to evaluate the effect of Burch colposuspension for stress urinary incontinence on concomitant coital incontinence. Study Design: The urogynecology database was searched for sexually active women, who experienced coital incontinence on vaginal penetration, orgasm, or both and who had subsequently undergone Burch colposuspension for urodynamic stress incontinence. The women were interviewed or sent a questionnaire on postoperative bladder and sexual function after a minimum follow-up time of 6 months. Results: Thirty of 43 women answered the questionnaire. Preoperatively, 22 women (73%) experienced urinary leakage during penetration, 3 (10%) during orgasm and 5 (17%) at both. Stress incontinence symptoms were successfully treated in 23 (77%). Coital incontinence was cured in 21 of 30 (70%) and improved in 2. Conclusion: The results of this small series suggest that coital incontinence is likely to be cured or improved when stress incontinence has been successfully treated by Burch colposuspension. abstract_id: PUBMED:34475912 The practice of Burch Colposuspension versus Mid Urethral Slings for the treatment of Stress Urinary Incontinence in developing country. Objectives: To compare the effectiveness and complications of Burch colposuspension and Mid Urethral Slings (MUS) for the treatment of Stress Urinary Incontinence (SUI). Methods: We conducted a cross-sectional study of 162 patients who underwent surgery for SUI with Burch colposuspension (n=40), tension-free vaginal tape (TVT) (n=59) or transobturator tape (TOT) (n=63), from 2006 to 2014 at the Aga Khan University Hospital, Karachi. All three groups were assessed in terms of demographics, cure rates, intraoperative and postoperative complications at one and five years using the incontinence impact questionnaire-short form-7 (IIQ-7) and the urogenital distress inventory-short form-6 (UDI-6). Results: Mean age of the participants in the Burch, TVT and TOT groups was 44.1 ± 7.4, 48.3 ± 8.9 and 53.0 ± 9.4, respectively. The majority of patients in the TVT group were premenopausal (59.3%), while the majority in the TOT group were postmenopausal (53.9%). Most abdominal hysterectomies were done in the Burch group (40), while vaginal hysterectomies and anterior and posterior colporrhaphy were done in the TOT group (55). All the procedures had both subjective and objective cure rates of more than 82% at one year, with TVT having the highest success rate of 96.61%. The objective cure rate in the Burch, TVT and TOT groups at five years was 74.19%, 90.30% and 81.25%, respectively. Intraoperative complications included hemorrhage in one patient during the Burch procedure and bladder perforation in two cases of TVT, with no significant difference in short- or long-term complications with either procedure. Conclusions: All three procedures have comparable efficacy and complication rates. Even though TVT is the new gold standard, in view of the current debate regarding mesh-related complications there is a need to readdress Burch colposuspension for the treatment of SUI. abstract_id: PUBMED:15512298 Burch colposuspension for the treatment of coital urinary leakage secondary to genuine stress incontinence. The efficacy of Burch colposuspension in treating the symptom of coital urinary leakage in women with genuine stress incontinence has to date never been reported.
Women who presented to our clinic with regular coital urinary leakage and urodynamically proven genuine stress incontinence between 1993 and 1997, and who proceeded to a Burch colposuspension procedure, were reviewed to determine the outcome of surgery. Fifty-five women were identified (mean age 46.1 years) with a mean follow-up interval after surgery of 18 months (range 3-42 months). All 55 women had symptoms of stress, urge and coital incontinence preoperatively. Following colposuspension, the subjective cure rates for stress and urge incontinence were 84% and 85%, respectively. Of 52 women who were sexually active after surgery, 81% described no further coital incontinence. The success or failure of surgery was not influenced by whether leakage occurred with penetration or orgasm preoperatively. abstract_id: PUBMED:15928514 The contemporary role of Burch colposuspension. Purpose Of Review: The purpose of this review was to define the current role of Burch colposuspension for treatment of female stress urinary incontinence. Publications from 2004 were reviewed. Recent Findings: The open Burch colposuspension is reviewed with highlights on its efficacy, mechanism of continence, recent reports on intraoperative ultrasound, postoperative catheterization, coital incontinence, and the effect of concomitant procedures. Long-term efficacy has remained at approximately 70%. Less invasive Burch approaches are evaluated including the laparoscopic techniques and the mini-incisional Burch colposuspension. The laparoscopic Burch approach is satisfactory if sutures rather than mesh are used. Three well-designed, prospective, randomized trials comparing the Burch (one open and two laparoscopic) colposuspension with tension-free vaginal tape are discussed. Summary: The open Burch procedure with its long-term success rates remains a gold standard for surgical treatment of genuine stress urinary incontinence. Less invasive Burch procedures require longer-term follow-up studies with comparison to the open approach and tension-free vaginal tape before their role can be settled. At this point, tension-free vaginal tape appears to be at least equivalent to the Burch colposuspension and, with longer follow-up studies, may challenge its role as a gold standard surgical treatment for female stress incontinence. abstract_id: PUBMED:32339752 Laparoscopic TOT-like Burch Colposuspension: Back to the Future? Objective: To demonstrate a modification of the classic Burch procedure, called "laparoscopic transobturator tape (TOT)-like Burch colposuspension." The technique does not involve any type of prosthesis placement, and it is an alternative for patients with stress urinary incontinence in a future without meshes. Describing and standardizing the procedure in different steps makes the surgery reproducible for gynecologists and safe for the patients. Design: Step-by-step educational video, underlining and focusing on the main anatomical landmarks. Setting: A university tertiary care hospital. Interventions: The patient is set under general anesthesia and in lithotomy position. The distinct steps of the procedure are performed as follows: Step 1: Installation. Two 10-mm trocars are positioned in the midline and two 5-mm trocars in the suprapubic region. The recommended intra-abdominal pressure is 6 to 8 mm Hg, and excessive Trendelenburg is not needed. Step 2: Entry in the Retzius space. The median umbilical ligament and the vesicoumbilical fascia are transected.
Step 3: Exposure of the Retzius space and the anatomical structures. The dissection is continued consecutively toward the pubic bone and the Cooper's ligament, laterally toward the external iliac vessels and the corona mortis and medially toward the bladder neck. Step 4: Vaginal dissection. The pubocervical fascia is dissected at the level of the pubourethral ligaments. Step 5: Suspension of the vagina to the Cooper's ligament. In contrast to the standard technique, with the TOT-like Burch, the sutures on the pubocervical fascia are placed at the level of the attachment of the arcus tendineus fascia pelvis and the pubourethral ligament. This way of suspension ensures a lateral traction on the bladder neck, resembling the effect of the TOT, which leads to a lower incidence of dysuric symptoms. Step 6: Peritoneal closure. Conclusion: The classic colposuspension was created in 1961 for the treatment of stress urinary incontinence and prolapse [1]. In the following years, vaginal meshes gained popularity as a treatment option for prolapse and for incontinence owing to their ease of use and satisfying results, which led to a decreased use of the Burch procedure [2,3]. In 2019, the Food and Drug Administration forbade the production of transvaginal meshes for prolapse [4], an interdiction that could influence the use of synthetic meshes for incontinence in the future [5]. Owing to these recent events, searching for an effective way of managing patients with stress urinary incontinence without any synthetic prostheses, gynecologists have turned back to the 60-year-old Burch colposuspension. One of the drawbacks of the original technique is the high incidence of voiding difficulties, up to 22% [6]. Owing to the knowledge of the exact course of traction with the TOT, in our modified technique the lateral direction of the suspension provides tension-free support of the urethra and the bladder neck. The laparoscopic TOT-like Burch colposuspension is a safe and effective treatment for patients with stress urinary incontinence with low rates of dysuric symptoms and represents a valuable alternative for gynecologists in a future without meshes. abstract_id: PUBMED:28439634 Long-term clinical outcomes with the retropubic tension-free vaginal tape (TVT) procedure compared to Burch colposuspension for correcting stress urinary incontinence (SUI). Introduction And Hypothesis: The retropubic tension-free vaginal tape (TVT) procedure replaced Burch colposuspension as the primary surgical method for stress urinary incontinence (SUI) and mixed urinary incontinence (MUI) in women in our department in 1998. In this study we compared the short-term and long-term clinical outcomes of these surgical procedures. Methods: Using a case series design, we compared the last 5 years of the Burch procedure (n = 127, 1994-1999) with the first 5 years of the retropubic TVT procedure (n = 180, 1998-2002). Information from the medical records was transferred to a case report form comprising data on perioperative and long-term complications as well as recurrence of UI, defined as bothersome UI or UI in need of repeat surgery. Other endpoints were rates of perioperative and late complications and the rates of prolapse surgery after primary surgery. The data were analyzed with the chi-squared and t tests and survival analysis using SPSS. Results: The cumulative recurrence rate of SUI in women with preoperative SUI was significantly higher after the Burch procedure, but no difference was observed in women with MUI.
There were no significant differences in rates of perioperative and late complications. At 12 years there was a significant increase in rates of repeat surgery for incontinence and prolapse in women after the Burch procedure. Conclusions: The long-term efficacy of TVT surgery was superior to that of Burch colposuspension in women with SUI. In addition, the rate of late prolapse surgery was significantly higher after the Burch procedure. abstract_id: PUBMED:29725708 Outcomes of stress urinary incontinence in women undergoing TOT versus Burch colposuspension with abdominal sacrocolpopexy. Introduction And Hypothesis: To compare postoperative rates of stress urinary incontinence (SUI) in patients with pelvic organ prolapse and SUI undergoing abdominal sacrocolpopexy (ASC) with Burch colposuspension or a transobturator tape (TOT) sling. Methods: In this retrospective cohort study, medical records of 117 patients who underwent ASC with Burch (n = 60) or TOT (n = 57) between 2008 and 2010 at NYU Winthrop Hospital were assessed. Preoperative evaluation included history, physical examination, cough stress test (CST), and multichannel urodynamic studies (MUDS). Primary outcomes were postoperative continence at follow-up up to 12 weeks. Patients considered incontinent reported symptoms of SUI and had a positive CST or MUDS. Secondary outcomes included intra- and postoperative complications. Associations were analyzed by Fisher's exact, McNemar's and Wilcoxon-Mann-Whitney tests. Results: The groups were similar regarding age, BMI, parity, Valsalva leak point pressure (VLPP), and prior abdominal surgery (p = 0.07-0.76). They differed regarding preoperative SUI diagnosed by self-reported symptoms, CST, or MUDS (TOT 89.5-94.7%, Burch 60.7-76.3%, p < 0.0001-0.007). The TOT group had lower rates of postoperative SUI (TOT 12.5%, Burch 30%, OR = 0.15, 95% CI 0.04, 0.62). Relative risk reduction (RRR) in postoperative SUI for the TOT group compared with the Burch group was 79%-86%. There were no differences concerning intra- and postoperative complications. The Burch group had a higher rate of reoperation for persistent/recurrent SUI (Burch 25%, TOT 12%, p = 0.078). Conclusions: The TOT group experienced a greater reduction in postoperative incontinence, and the Burch group underwent more repeat surgeries. The TOT sling may be superior in patients undergoing concomitant ASC. abstract_id: PUBMED:33316277 Long-term effectiveness and safety of open Burch colposuspension vs retropubic midurethral sling for stress urinary incontinence-results from a large comparative study. Background: There are few adequately powered long-term trials comparing midurethral sling and Burch colposuspension. Recent concerns about synthetic mesh with new stringent clinical and research governance support the need for evidence to facilitate shared decision making. Objective: This study aimed to compare long-term outcomes of open Burch colposuspension with the retropubic midurethral sling. Study Design: A matched cohort study of 1344 women with urodynamic stress incontinence (without intrinsic sphincter deficiency) who underwent surgery for stress urinary incontinence. Women had either open Burch colposuspension or the retropubic midurethral sling, from January 2000 to June 2018, in a tertiary center. Follow-up was by chart review and one-time phone follow-up until 2019, using a dedicated database.
Primary outcomes were the presence or absence of stress urinary incontinence on follow-up, the success of index surgery based on response to validated questionnaires of patient-reported outcomes, and retreatment rates. Secondary outcomes are described below. Matching (1:3) was done at baseline to avoid confounding. Results: The study included 1344 women who had either Burch colposuspension (336) or retropubic midurethral sling (1008). Mean follow-up was 13.1 years for Burch colposuspension and 10.1 years for retropubic midurethral sling. In the Burch colposuspension group, 83.0% of patients (279 of 336) reported no ongoing stress urinary incontinence at the time of the latest follow-up vs 85.0% (857 of 1008) in the retropubic midurethral sling group (P=.38). Success in terms of the latest reported International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form (defined as International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form score of ≤6) where these data were available was similar in both groups: 76.0% (158 of 208 where this was available) in Burch colposuspension vs 72.1% (437 of 606 where this was available) in retropubic midurethral sling (P=.32). Where this information was available, success defined by a Patient Global Impression of Improvement of "very much improved" and "much improved" was similar between Burch colposuspension and retropubic midurethral sling groups (84.1% [243 of 289] vs 82.0% [651 of 794]; P=.88). Where data were available, 88.1% of women (178 of 202) in the Burch colposuspension group said they were very likely to recommend the surgery to family or a friend vs 85.0% (580 of 682) in retropubic midurethral sling (P=.30). Overall, 3.6% needed repeat incontinence procedures (13 in Burch colposuspension group [3.8%] vs 35 in retropubic midurethral sling group [3.5%]; P=.73). The incidence of mesh exposure was 1.0%. Notably, 1 Burch colposuspension patient had a suture in the bladder during follow-up; 5 patients have reported long-standing pain across the study population. Overall, 51 women reported new-onset overactive bladder symptoms on follow-up: 10 of 336 (3.0%) had Burch colposuspension and 41 of 1008 (4.1%) had retropubic midurethral sling (P=.41). The need for future prolapse surgery per index procedure was 3.3% after Burch colposuspension vs 1.1% after retropubic midurethral sling (P=.01). Moreover, 9 of the 11 patients who needed a prolapse repair after Burch colposuspension required a posterior repair. The incidence of long-term severe voiding difficulty needing self-catheterization was similar in both groups (0.3% in Burch colposuspension and 0.5% in retropubic midurethral sling group; P=1.00). Conclusion: This study shows no difference in success, patient satisfaction, or complications between Burch colposuspension and retropubic midurethral sling, although the risk of posterior compartment prolapse operations after Burch colposuspension is increased. Reoperation rates for incontinence were similar in both groups. Chronic pain was a rare outcome. abstract_id: PUBMED:15572485 Laparoscopic Burch colposuspension versus tension-free vaginal tape: a randomized trial. Objective: To compare the laparoscopic Burch colposuspension with the tension-free vaginal tape procedure (TVT) for efficacy. Methods: Seventy-two women from 2 institutions were randomized: 36 to laparoscopic Burch colposuspension and 36 to TVT. Multichannel urodynamic tests were performed preoperatively and 1 year after surgery.
A research nurse administered the Urogenital Distress Inventory, Incontinence Impact Questionnaire, and pelvic examinations using the pelvic organ prolapse quantification system preoperatively, and at 6 months, 1 year, and 2 years after surgery. Voiding diaries were collected at 1 and 2 years. Primary outcome was objective cure, which was defined as no evidence of urinary leakage during postoperative urodynamic studies. Secondary outcomes included subjective continence, perioperative and postoperative data, and quality of life. Results: Thirty-three laparoscopic Burch colposuspension and 33 TVT patients were analyzed with a mean follow-up of 20.6 ± 8 months (range 12-43). Mean operative time was significantly greater in the laparoscopic Burch colposuspension group compared with the TVT group, 132 versus 79 minutes, respectively (P = .003). Multichannel urodynamic studies in 32 laparoscopic Burch colposuspension and 31 TVT patients showed a higher rate of urodynamic stress incontinence at 1 year in the laparoscopic Burch colposuspension group, 18.8% versus 3.2% (P = .056). There was a significant improvement in the number of incontinent episodes per week and in Urogenital Distress Inventory and Incontinence Impact Questionnaire scores in both groups at 1 and 2 years after surgery (P < .001). However, postoperative subjective symptoms of incontinence (stress, urge, and any urinary incontinence) were reported significantly more often in the laparoscopic Burch colposuspension group than in the TVT group (P < .04 for each category). Conclusion: The TVT procedure results in greater objective and subjective cure rates for urodynamic stress incontinence than does laparoscopic Burch colposuspension. abstract_id: PUBMED:8092210 Long-term results after Burch colposuspension. Objective: Our purpose was to review the long-term (5 to 10 years) clinical and urodynamic outcome in patients with stress urinary incontinence after Burch colposuspension. Study Design: A follow-up of 87 women with stress urinary incontinence who had a Burch colposuspension between 1979 and 1985 at the Department of Obstetrics and Gynecology, University of Berne, was performed by clinical and urodynamic reevaluation of the patients. Results: Stress incontinence was cured in 81.6% of patients. The cure rate was not significantly related to age, hormonal status, body weight, or previous surgical procedures for incontinence. Burch colposuspension stabilized the urethrovesical junction. Urodynamic measurement at follow-up compared with the preoperative evaluation showed in the cured group a significant increase in (1) the functional urethral length at rest and at stress, (2) maximum urethral closure pressure at stress, and (3) pressure transmission. On the contrary, in unsuccessful operations none of the recorded parameters had improved. Women with failed surgery had significantly lower preoperative maximum urethral closure pressures at rest and at stress, lower continence areas, smaller functional urethral lengths at stress, smaller length to peak pressures, and lower index values of urethral relaxation at stress. The procedure had a low operative and postoperative morbidity, with no significant disturbance of voiding function noted at 5 to 10 years' follow-up. Conclusions: Our results with the Burch colposuspension showed a high success rate at 5 to 10 years' follow-up. The high cure rate and low operative and postoperative morbidity were related to careful preoperative selection.
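To make the randomized-trial comparison above concrete, the sketch below recomputes the 2x2 contrast behind the 18.8% vs 3.2% urodynamic stress incontinence rates (PUBMED:15572485). The counts 6/32 and 1/31 are inferred from the reported percentages, and the abstract does not state which test produced P = .056, so the Fisher exact test here is purely illustrative.

```python
# Illustrative re-computation of the 2x2 comparison reported above
# (urodynamic stress incontinence at 1 year: 18.8% of 32 Burch vs 3.2% of 31 TVT).
# The counts below are inferred from the reported percentages; the abstract
# does not state which test produced P = .056.
from scipy.stats import fisher_exact

burch_fail, burch_total = 6, 32   # 6/32 = 18.8%
tvt_fail, tvt_total = 1, 31       # 1/31 = 3.2%

table = [[burch_fail, burch_total - burch_fail],
         [tvt_fail, tvt_total - tvt_fail]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided Fisher p = {p_value:.3f}")
```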
Answer: Based on the available abstracts, Burch colposuspension appears to have a positive effect on coital incontinence in women with stress urinary incontinence (SUI). In one study, coital incontinence was cured in 70% of women and improved in an additional 6.7% following Burch colposuspension, suggesting that the procedure is likely to cure or improve coital incontinence when stress incontinence is successfully treated (PUBMED:15118636). Another study reported that 81% of sexually active women described no further coital incontinence after undergoing the procedure (PUBMED:15512298). These results indicate that Burch colposuspension can be effective in treating coital incontinence associated with stress urinary incontinence.
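As a hedged illustration of how a single-arm proportion such as the 70% coital-incontinence cure rate cited above could be reported with uncertainty, the following sketch computes a Wilson score interval. The denominator is hypothetical, since the excerpt does not give it.

```python
# A minimal sketch: 95% Wilson score interval for a binomial proportion.
# The counts (21/30 = 70%) are hypothetical placeholders, not study data.
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 for 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(successes=21, n=30)  # hypothetical 21/30 = 70% cured
print(f"70% cure rate, 95% CI: {lo:.1%} - {hi:.1%}")
```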
Instruction: Identification of transcription factors associated with castration-resistance: is the serum responsive factor a potential therapeutic target? Abstracts: abstract_id: PUBMED:23359479 Identification of transcription factors associated with castration-resistance: is the serum responsive factor a potential therapeutic target? Background: Advanced prostate cancer is treated by hormone ablation therapy. However, despite an initial response, the majority of men relapse to develop castration-resistant disease for which there are no effective treatments. We have previously shown that manipulating individual proteins has only minor alterations on the resistant phenotype so we hypothesize that targeting the central transcription factors (TFs) would represent a better therapeutic approach. Methods: We have undertaken a transcriptomic analysis of gene expression differences between the androgen-dependent LNCaP parental cells and its castration-resistant Abl and Hof sublines, revealing 1,660 genes associated with castration-resistance. Using effective bioinformatic techniques, these transcriptomic data were integrated with TF binding sites resulting in a list of TFs associated with the differential gene expression observed. Results: Following validation of the gene-chip results, the serum response factor (SRF) was chosen for clinical validation and functional analysis due to its recent association with prostate cancer progression. SRF immunoreactivity in prostate tumor samples was shown for the first time to be associated with castration-resistance. SRF inhibition by siRNA and the small molecule inhibitor CCG-1423 resulted in decreased proliferation. Conclusion: SRF is a key TF by which resistant cells survive with depleted levels of androgens representing a target for therapeutic manipulation. abstract_id: PUBMED:37491856 Transcription Factor EB: A Promising Therapeutic Target for Ischemic Stroke. Transcription factor EB (TFEB) is an important endogenous defensive protein that responds to ischemic stimuli. Acute ischemic stroke is a growing concern due to its high morbidity and mortality. Most survivors suffer from disabilities such as numbness or weakness in an arm or leg, facial droop, difficulty speaking or understanding speech, confusion, impaired balance or coordination, or loss of vision. Although TFEB plays a neuroprotective role, its potential effect on ischemic stroke remains unclear. This article describes the basic structure, regulation of transcriptional activity, and biological roles of TFEB relevant to ischemic stroke. Additionally, we explore the effects of TFEB on the various pathological processes underlying ischemic stroke and current therapeutic approaches. The information compiled here may inform clinical and basic studies on TFEB, which may be an effective therapeutic drug target for ischemic stroke. abstract_id: PUBMED:35521682 Transcription Factor ASCL1 Acts as a Novel Potential Therapeutic Target for the Treatment of the Cushing's Disease. Background: The pathogenesis of Cushing's disease (CD) is still not adequately understood despite the identification of somatic driver mutations in USP8, BRAF, and USP48. In this multiomics study, we combined RNA sequencing (RNA-seq) with Sanger sequencing to depict transcriptional dysregulation under different gene mutation backgrounds. 
Furthermore, we evaluated the potential of achaete-scute complex homolog 1 (ASCL1), a pioneer transcription factor, as a novel therapeutic target for treatment of CD and its possible downstream pathway. Methods: RNA-seq was adopted to investigate the gene expression profile of CD, and Sanger sequencing was adopted to detect gene mutations. Bioinformatics analysis was used to depict transcriptional dysregulation under different gene mutation backgrounds. The functions of ASCL1 in hormone secretion, cell proliferation, and apoptosis were studied in vitro. The effectiveness of an ASCL1 inhibitor was evaluated in primary CD cells, and the clinical relevance of ASCL1 was examined in 68 patients with CD. RNA-seq in AtT-20 cells on Ascl1 knockdown combined with published chromatin immunoprecipitation sequencing data and dual luciferase assays were used to explore downstream pathways. Results: ASCL1 was exclusively overexpressed in USP8-mutant and wild-type tumors. Ascl1 promoted adrenocorticotrophin hormone overproduction and tumorigenesis and directly regulated Pomc in AtT-20 cells. An ASCL1 inhibitor presented promising efficacy in both AtT-20 and primary CD cells. ASCL1 overexpression was associated with a larger tumor volume and higher adrenocorticotrophin secretion in patients with CD. Conclusion: Our findings help to clarify the pathogenesis of CD and suggest that ASCL1 is a potential therapeutic target for the treatment of CD. Summary: The pathogenesis of Cushing's disease (CD) is still not adequately understood despite the identification of somatic driver mutations in USP8, BRAF, and USP48. Moreover, few effective medical therapies are currently available for the treatment of CD. Here, using a multiomics approach, we first report the aberrant overexpression of the transcription factor gene ASCL1 in USP8-mutant and wild-type tumors of CD. Ascl1 promoted adrenocorticotrophin hormone overproduction and tumorigenesis and directly regulated Pomc in mouse AtT-20 cells. Notably, an ASCL1 inhibitor presented promising efficacy in both AtT-20 and primary CD cells. Importantly, ASCL1 overexpression was associated with a larger tumor volume and higher adrenocorticotrophin secretion in patients with CD. Thus, our findings improve understanding of CD pathogenesis and suggest that ASCL1 is a potential therapeutic target for the treatment of CD. abstract_id: PUBMED:34318904 MEIS1 and its potential as a cancer therapeutic target (Review). Meis homeobox 1 (Meis1) was initially discovered in 1995 as a factor involved in leukemia in an animal model. Subsequently, 2 years later, MEIS1, the human homolog, was cloned in the liver and cerebellum, and was found to be highly expressed in myeloid leukemia cells. The MEIS1 gene, located on chromosome 2p14, encodes a 390-amino acid protein with six domains. The expression of homeobox protein MEIS1 is affected by cell type, age and environmental conditions, as well as the pathological state. Certain types of modifications of MEIS1 and its protein interaction with homeobox or pre-B-cell leukemia homeobox proteins have been described. As a transcription factor, MEIS1 protein is involved in cell proliferation in leukemia and some solid tumors. The present review article discusses the molecular biology, modifications, protein-protein interactions, as well as the role of MEIS1 in cell proliferation of cancer cells and MEIS1 inhibitors. The available literature suggests that MEIS1 has the potential to become a cancer therapeutic target.
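The ASCL1 study above rests on RNA-seq comparisons between mutation backgrounds. As a rough sketch of the kind of differential-expression filter such analyses use, the snippet below selects genes by fold change and adjusted p-value; all column names, thresholds, and values are assumptions for illustration, not taken from the paper.

```python
# Illustrative differential-expression filter of the kind used to
# "depict transcriptional dysregulation" from RNA-seq data.
# Column names, thresholds, and values are hypothetical.
import pandas as pd

de = pd.DataFrame({
    "gene":   ["ASCL1", "POMC", "GENE_X"],
    "log2fc": [3.1, 2.4, 0.2],      # tumor vs control, hypothetical values
    "padj":   [1e-8, 1e-4, 0.60],   # BH-adjusted p-values, hypothetical
})

# Keep genes with at least a 2-fold change and FDR < 5%
hits = de[(de["log2fc"].abs() >= 1.0) & (de["padj"] < 0.05)]
print(hits["gene"].tolist())  # ['ASCL1', 'POMC']
```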
abstract_id: PUBMED:31181727 ZBTB46, SPDEF, and ETV6: Novel Potential Biomarkers and Therapeutic Targets in Castration-Resistant Prostate Cancer. Prostate cancer (PCa) is the second most common killer among men in Western countries. Targeting androgen receptor (AR) signaling by androgen deprivation therapy (ADT) is the current therapeutic regimen for patients newly diagnosed with metastatic PCa. However, most patients relapse and become resistant to ADT, leading to metastatic castration-resistant PCa (CRPC) and eventually death. Several mechanisms have been proposed for CRPC; however, the exact mechanism through which CRPC develops is still unclear. One possible pathway is that the AR remains active in CRPC cases. Therefore, understanding AR signaling networks as primary PCa changes into metastatic CRPC is key to developing future biomarkers and therapeutic strategies for PCa and CRPC. In the current review, we focused on three novel biomarkers (ZBTB46, SPDEF, and ETV6) that were demonstrated to play critical roles in CRPC progression, epidermal growth factor receptor tyrosine kinase inhibitor (EGFR TKI) drug resistance, and the epithelial-to-mesenchymal transition (EMT) for patients treated with ADT or AR inhibition. In addition, we summarize how these potential biomarkers can be used in the clinic for diagnosis and as therapeutic targets of PCa. abstract_id: PUBMED:37553583 Identification of GRIN2D as a novel therapeutic target in pancreatic ductal adenocarcinoma. Background: Pancreatic ductal adenocarcinoma (PDAC) is a devastating disease with a dismal prognosis, and despite significant advances in our understanding of its genetic drivers, like KRAS, TP53, CDKN2A, and SMAD4, effective therapies remain limited. Here, we identified a new therapeutic target, GRIN2D, and then explored its functions and mechanisms in PDAC progression. Methods: We performed a genome-wide RNAi screen in a PDAC xenograft model and identified GRIN2D, which encodes the GluN2D subunit of N-methyl-D-aspartate receptors (NMDARs), as a potential oncogene. Western blot, immunohistochemistry, and analysis on Gene Expression Omnibus were used to detect the expression of GRIN2D in PDAC. Cellular experiments were conducted to explore the functions of GRIN2D in vitro, while subcutaneous and orthotopic injections were used in the in vivo study. To clarify the mechanism, we used RNA sequencing and cellular experiments to identify the related signaling pathway. Cellular assays, RT-qPCR, and western blot helped identify the impacts of the NMDAR antagonist memantine. Results: We demonstrated that GRIN2D was highly expressed in PDAC cells, and further promoted oncogenic functions. Mechanistically, transcriptome profiling identified GRIN2D-regulated genes in PDAC cells. We found that GRIN2D promoted PDAC progression by activating the p38 MAPK signaling pathway and transcription factor CREB, which in turn promoted the expression of HMGA2 and IL20RB. The upregulated GRIN2D could effectively promote tumor growth and liver metastasis in PDAC. We also investigated the therapeutic potential of NMDAR antagonism in PDAC and found that memantine reduced the expression of GRIN2D and inhibited PDAC progression. Conclusion: Our results suggested that NMDA receptor GRIN2D plays important oncogenic roles in PDAC and represents a novel therapeutic target. abstract_id: PUBMED:31381810 The interaction between RUNX2 and core binding factor beta as a potential therapeutic target in canine osteosarcoma.
Osteosarcoma remains the most common primary bone tumour in dogs with half of affected dogs unable to survive 1 year beyond diagnosis. New therapeutic options are needed to improve outcomes for this disease. Recent investigations into potential therapeutic targets have focused on cell surface molecules with little clear therapeutic benefit. Transcription factors and protein interactions represent underdeveloped areas of therapeutic drug development. We have utilized allosteric inhibitors of the core binding factor transcriptional complex, comprised of core binding factor beta (CBFβ) and RUNX2, in four canine osteosarcoma cell lines. Active inhibitor compounds demonstrate anti-tumour activities with concentrations demonstrated to be achievable in vivo while an inactive, structural analogue has no activity. We show that CBFβ inhibitors are capable of inducing apoptosis, inhibiting clonogenic cell growth, altering cell cycle progression and impeding migration and invasion in a cell line-dependent manner. These effects coincide with a reduced interaction between RUNX2 and CBFβ and alterations in expression of RUNX2 target genes. We also show that addition of CBFβ inhibitors to the commonly used cytotoxic chemotherapeutic drugs doxorubicin and carboplatin leads to additive and/or synergistic anti-proliferative effects in canine osteosarcoma cell lines. Taken together, we have identified the interaction between components of the core binding factor transcriptional complex, RUNX2 and CBFβ, as a potential novel therapeutic target in canine osteosarcoma and provide justification for further investigations into the anti-tumour activities we describe here. abstract_id: PUBMED:26234767 PP2A inhibition as a novel therapeutic target in castration-resistant prostate cancer. Protein phosphatase 2A (PP2A) is a well-known tumor suppressor frequently inhibited in human cancer. Alterations affecting PP2A subunits together with the deregulation of endogenous PP2A inhibitors such as CIP2A and SET have been described as contributing mechanisms to inactivate PP2A in prostate cancer. Moreover, recent findings highlight that functional inactivation of PP2A could represent a key event in the acquisition of a castration-resistant phenotype and a novel molecular target with high impact at both clinical and therapeutic levels in prostate cancer. abstract_id: PUBMED:30962287 Leukemia Inhibitory Factor Promotes Castration-resistant Prostate Cancer and Neuroendocrine Differentiation by Activated ZBTB46. Purpose: The molecular targets for castration-resistant prostate cancer (CRPC) are unknown because the disease inevitably recurs, and therapeutic approaches for patients with CRPC remain less well understood. We sought to investigate regulatory mechanisms that result in increased therapeutic resistance, which is associated with neuroendocrine differentiation of prostate cancer and linked to dysregulation of the androgen-responsive pathway. Experimental Design: The underlying intracellular mechanism that sustains the oncogenic network involved in neuroendocrine differentiation and therapeutic resistance of prostate cancer was evaluated to investigate and identify effectors. Multiple sets of samples with prostate adenocarcinomas and CRPC were assessed via IHC and other assays. Results: We demonstrated that leukemia inhibitory factor (LIF) was induced by androgen deprivation therapy (ADT) and was upregulated by ZBTB46 in prostate cancer to promote CRPC and neuroendocrine differentiation.
LIF was found to be induced in patients with prostate cancer after ADT and was associated with enriched nuclear ZBTB46 staining in high-grade prostate tumors. In prostate cancer cells, high ZBTB46 output was responsible for the activation of LIF-STAT3 signaling and neuroendocrine-like features. The abundance of LIF was mediated by ADT-induced ZBTB46 through a physical interaction with the regulatory sequence of LIF. Analysis of serum from patients showed that cases of higher tumor grade and metastatic prostate cancer exhibited higher LIF titers. Conclusions: Our findings suggest that LIF is a potent serum biomarker for diagnosing advanced prostate cancer and that targeting the ZBTB46-LIF axis may therefore inhibit CRPC development and neuroendocrine differentiation after ADT. abstract_id: PUBMED:35733308 Basic Leucine Zipper Protein Nuclear Factor Erythroid 2-related Factor 2 as a Potential Therapeutic Target in Brain Related Disorders. Nuclear factor erythroid-2-related factor 2 (Nrf2), an inducible transcription factor in phase II metabolic reactions and the xenobiotic response pathway, is referred to as a 'master regulator' of anti-oxidant, anti-inflammatory, and xenobiotic detoxification processes. The activity of Nrf2 is tightly regulated by KEAP1, which promotes its ubiquitination and subsequent degradation under homeostatic conditions but allows Nrf2 to escape ubiquitination, accumulate within the cell, and translocate into the nucleus upon exposure to stress. The Nrf2 pathway provides an intrinsic mechanism of defense against oxidative stress (OS). It has emerged as a promising therapeutic target, and there is increasing evidence for the protective role of the Nrf2-ARE pathway against exacerbated ROS generation and OS; mitochondrial dysfunction and prolonged neuroinflammation are prevalent pathophysiological processes rooted in brain-related disorders. Elevated ROS generation and OS have been linked to the pathophysiology of a diverse array of brain-related disorders, including Alzheimer's disease, Parkinson's disease, Huntington's disease, Friedreich's ataxia, multiple sclerosis, and epilepsy. Further, Nrf2 not only modulates the expression of anti-oxidant genes but has also been associated with anti-inflammatory effects and with the regulation of mitochondrial function and biogenesis. Therefore, Nrf2 can be considered a potential therapeutic target for the management of various brain-related disorders. Answer: The identification of transcription factors associated with castration resistance is a critical area of research in the context of advanced prostate cancer. Castration-resistant prostate cancer (CRPC) is a stage of the disease where cancer progresses despite the reduction of androgens, which are typically required for prostate cancer cell growth. One study identified the serum response factor (SRF) as a key transcription factor by which resistant cells survive with depleted levels of androgens, suggesting that SRF represents a target for therapeutic manipulation. The study conducted a transcriptomic analysis revealing 1,660 genes associated with castration-resistance and integrated these data with transcription factor binding sites, leading to the validation of SRF. SRF immunoreactivity was shown to be associated with castration-resistance in prostate tumor samples.
Inhibition of SRF by siRNA and the small molecule inhibitor CCG-1423 resulted in decreased proliferation, indicating that SRF could be a potential therapeutic target for CRPC (PUBMED:23359479). This finding is significant as it provides a potential new avenue for treatment in cases where current hormone ablation therapies fail and no effective treatments are available. Targeting central transcription factors like SRF, rather than individual proteins, may offer a more effective approach to combating the resistant phenotype of CRPC.
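The SRF answer above hinges on integrating differential expression with transcription factor binding sites. One standard way to rank candidate TFs in such an analysis is a hypergeometric enrichment test over each TF's target set; the sketch below shows the idea. Only the 1,660 differentially expressed genes come from the study; every other number is a hypothetical placeholder.

```python
# A minimal sketch: test whether a TF's known binding targets are
# over-represented among differentially expressed (DE) genes.
# All numbers except the 1,660 DE genes are hypothetical assumptions.
from scipy.stats import hypergeom

genome_genes = 20000   # background gene universe (assumption)
de_genes = 1660        # castration-resistance-associated genes (from the study)
tf_targets = 500       # genes with binding sites for one TF (hypothetical)
overlap = 80           # TF targets that are also DE (hypothetical)

# P(X >= overlap) under the hypergeometric null of random overlap
p_enrich = hypergeom.sf(overlap - 1, genome_genes, tf_targets, de_genes)
print(f"enrichment p-value = {p_enrich:.2e}")
```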
Instruction: Does tobacco smoke prevent atopic disorders? Abstracts: abstract_id: PUBMED:31214241 Paternal Tobacco Smoke Correlated to Offspring Asthma and Prenatal Epigenetic Programming. Rationale: Little is known about effects of paternal tobacco smoke (PTS) on the offspring's asthma and its prenatal epigenetic programming. Objective: To investigate whether PTS exposure was associated with the offspring's asthma and correlated to epigenetic CG methylation of potential tobacco-related immune genes: LMO2, GSTM1 and/or IL-10 genes. Measurements and Main Results: In a birth cohort of 1,629 newborns, we measured exposure rates of PTS (23%) and maternal tobacco smoke (MTS, 0.2%), cord blood DNA methylation, infant respiratory tract infection, childhood DNA methylation, and childhood allergic diseases. Infants with prenatal PTS exposure had a significantly higher risk of asthma by the age of 6 than those without (p = 0.026). The PTS exposure doses at 0, <20, and ≥20 cigarettes per day were significantly associated with the trend of childhood asthma and the increase of LMO2_E148 (p = 0.006) and IL10_P325 (p = 0.008) CG methylation. The combination of higher CG methylation levels of LMO2_E148, IL10_P325, and GSTM1_P266 corresponded to the highest risk of asthma by 43.48%, compared to other combinations (16.67-23.08%) in the 3-way multi-factor dimensionality reduction (MDR) analysis. The LMO2_P794 and GSTM1_P266 CG methylation levels at age 0 were significantly correlated to those at age of 6. Conclusions: Prenatal PTS exposure increases CG methylation contents of immune genes, such as LMO2 and IL-10, which were significantly retained from the newborn stage to 6 years of age and correlated with the development of childhood asthma. Modulation of the LMO2 and IL-10 CG methylation and/or their gene expression may provide a regimen for early prevention of PTS-associated childhood asthma. Descriptor number: 1.10 Asthma Mediators. Scientific Knowledge on the Subject: It is well known that maternal tobacco smoke (MTS) has an impact on the offspring's asthma via epigenetic modification. Little is known about effects of paternal tobacco smoke (PTS) on the offspring's asthma and its prenatal epigenetic programming. What This Study Adds to the Field: Prenatal tobacco smoke (PTS) can program epigenetic modifications in certain genes, such as LMO2 and IL-10, and these modifications are correlated with childhood asthma development. The higher the PTS exposure dose, the higher the observed CG methylation levels. The combination of higher CG methylation levels of LMO2_E148, IL10_P325 and GSTM1_P266 corresponded to the highest risk of asthma. Measuring the DNA methylation levels of certain genes might help to predict high-risk populations for childhood asthma and provide a potential target to prevent the development of childhood asthma. abstract_id: PUBMED:23026151 Bacterial and fungal markers in tobacco smoke. Previous research has demonstrated that cigarette smoke contains bacterial and fungal components including lipopolysaccharide (LPS) and ergosterol. In the present study we used gas chromatography-mass spectrometry to analyze tobacco as well as mainstream and second hand smoke for 3-hydroxy fatty acids (3-OH FAs) of 10 to 18 carbon chain lengths, used as LPS markers, and ergosterol, used as a marker of fungal biomass. The air concentrations of LPS were 0.0017 nmol/m(3) (N=5) and 0.0007 nmol/m(3) (N=6) in the smoking vs. non-smoking rooms (p=0.0559) of the studied private houses, and 0.0231 nmol/m(3) (N=5) vs.
0.0006 nmol/m(3) (N=5) (p=0.0173), respectively, at the worksite. The air concentrations of ergosterol were also significantly higher in rooms with ongoing smoking than in rooms without smoking. A positive correlation was found between LPS and ergosterol in rooms with smoking but not in rooms without smoking. 3-OH C14:0 was the main 3-OH FA, followed by 3-OH C12:0, both in mainstream and second hand smoke and in phenol:water smoke extracts prepared in order to purify the LPS. The Limulus activity of the phenolic phase of tobacco was 3900 endotoxin units (EU)/cigarette; the corresponding amount of the smoke, collected on filters from 8 puffs, was 4 EU/cigarette. Tobacco smoking has been associated with a range of inflammatory airway conditions including COPD, asthma, bronchitis, alveolar hypersensitivity, etc. Significant levels of LPS and ergosterol were identified in tobacco smoke and these observations support the hypothesis that microbial components of tobacco smoke contribute to inflammation and airway disease. abstract_id: PUBMED:3281602 On the health effects of environmental tobacco smoke. Possible adverse health effects of breathing environmental tobacco smoke include lung cancer, respiratory illnesses in young children, decreased pulmonary function, decreased lung growth, allergy to tobacco, and exacerbation of angina. These effects are reviewed to aid informed discussion on this health issue. Some of the constituents of tobacco smoke are found in the home, the outdoor environment, and the workplace in permissible concentrations and are considered unlikely to cause ill health. A double standard, one in the workplace and another for the public, may be evolving for acceptable health risks. abstract_id: PUBMED:2202327 Effects of mainstream and environmental tobacco smoke on the immune system in animals and humans: a review. This review evaluates the available information on the effects of mainstream and environmental tobacco smoke on the immune system in animals and humans. The primary emphasis is on mainstream smoke since little information is available on the effects of environmental smoke. The effects of mainstream tobacco smoke on the immune system in humans and animals are similar. Animals exposed to mainstream tobacco smoke for periods of a few weeks generally exhibit a slight immunostimulation. However, subchronic and chronic exposure studies indicate that immunosuppressive changes develop. Lymphocyte proliferation in response to the mitogens PHA and LPS is decreased, suggesting compromise of cell function. Antibody production can be suppressed. Smoke-exposed animals that are challenged with metastasizing tumors or viruses have been shown to exhibit a higher incidence of tumorigenic and infectious diseases, respectively. Localized immunological changes in the lung can include reduction of bronchus-associated lymphoid tissue and immunoglobulin levels. Smoking-related changes in the peripheral immune system of humans have included elevated WBC counts, increased cytotoxic/suppressor and decreased inducer/helper T-cell numbers, slightly suppressed T-lymphocyte activity, significantly decreased natural killer cell activity, lowered circulating immunoglobulin titers, except for IgE, which is elevated, and increased susceptibility to infection. The effects of environmental tobacco smoke on the immune system, in contrast to mainstream tobacco smoke, have just begun to be investigated and information available in the literature, to date, is limited.
Immunoreactive substances are known to be present in environmental tobacco smoke, but to date, environmental tobacco smoke has been more closely associated with irritation than sensitization. A few studies have indicated a potential for environmental smoke-induced hypersensitivity and suppression of immunoregulatory substances. In contrast, other investigators have failed to detect immunological or other biological changes associated with environmental smoke. Clearly, more research is needed to resolve these differences. abstract_id: PUBMED:4029102 Measurement of nicotine in building air as an indicator of tobacco smoke levels. Humans apparently differ greatly in their sensitivity and tolerance to tobacco smoke, thereby creating conflicts in the workplace. Resolution of conflicts in a large office complex at the authors' institution required an objective measure of smoke levels. A gas chromatographic technique was devised for collection and analysis of nicotine concentrations in the building air as an indicator of tobacco smoke pollution. Segregation of smokers and nonsmokers in the large office complex still resulted in substantial exposure of the nonsmoker to tobacco smoke, although a gradient of exposure was certainly observed. Passive tobacco smoke consumption in the smoking area of the office complex was calculated to be equivalent to 1.1 cigarettes per 8-hr period, and nicotine density in this area was 1.96 microgram/m(3). The restriction of smoking to a foyer area outside the office complex resulted in a slow but eventual reduction in nicotine concentrations in the office complex. Observed "background" nicotine concentration levels corresponding to 4 to 7% of those encountered in smoking areas demonstrate that central air circulation systems and people movement increase the nicotine level throughout all rooms of a building, regardless of the smoking policies of an individual office complex. Recent documentation of the relationship between passive smoking and cancer, heart disease, pulmonary dysfunction, and allergic responses argues for restriction of smoking to building exteriors. abstract_id: PUBMED:10401801 Breast-feeding and environmental tobacco smoke exposure. Background: Exposure to environmental tobacco smoke is associated with adverse effects in infants and children. Objective: To explore whether an increase in urinary cotinine fumarate level is caused by ingested nicotine and cotinine in breast-feeding infants. Methods: We studied newborns at risk for developing asthma and allergies based on a strong family history. We measured urinary cotinine levels in the infants as a measure of environmental tobacco smoke exposure and cotinine levels in the breast milk of breast-feeding mothers. Results: Of 507 infants, urinary cotinine levels during the first 2 weeks of life were significantly increased in infants whose mothers smoked. Breast-fed infants had higher cotinine levels than non-breast-fed infants, but this was statistically significant (P<.05) only if mothers smoked. Urinary cotinine levels were 5 times higher in breast-fed infants whose mothers smoked than in those whose mothers smoked but did not breast-feed. Conclusions: Mothers should be encouraged not to smoke, and parents must be advised of the potential respiratory and systemic risks of environmental tobacco smoke exposure to their child, including the potential for future addiction to smoking. abstract_id: PUBMED:11206241 Tobacco hypersensitivity and environmental tobacco smoke exposure in a pediatric population.
Background: Skin testing and RAST have verified the existence of tobacco-specific IgE. However, published studies report conflicting results concerning the clinical significance of tobacco IgE. Previous studies have not focused on the role of environmental tobacco smoke (ETS) as it relates to tobacco hypersensitivity (TH) in nonsmoking children. Objective: We used nonsmoking pediatric patients to investigate the relationship between ETS and TH. Methods: Children, ages 4 to 10 years, were prospectively enrolled. ETS exposure and smoke-triggered symptoms were recorded by questionnaire and physician history. Patients were given a skin test (ST) with a panel of aeroallergens plus tobacco extract. An ST reaction to at least one aeroallergen classified a patient as atopic; an ST reaction to tobacco classified a patient as TH. Results: We enrolled 170 patients, mean age 7.2 years. We found 58 (34%) patients reported routine exposure to ETS and 78 (46%) patients reported ETS-induced symptoms. We found 121 (71%) atopic patients and 61 (36%) TH patients. TH was more common in atopic patients (P < .0001) and those routinely exposed to ETS (P < .05). However, TH failed to predict ETS-induced symptoms in either atopic or non-atopic patients (PPV = 0.40, NPV = 0.69). Conclusions: We evaluated the clinical significance of TH in a nonsmoking patient population related to ETS exposure. We concluded that although TH is statistically related to atopy and ETS exposure, the low predictive values of skin testing for TH limit its clinical usefulness. abstract_id: PUBMED:6699306 Tobacco smoke "sensitivity"--is there an immunologic basis? This study was undertaken to determine if there is an immunologic basis for reported tobacco-smoke hypersensitivity in man. Ninety-three individuals who were recruited on the basis of their smoking history and/or claimed sensitivity to tobacco smoke were skin prick tested with tobacco smoke and leaf extracts and their sera analyzed for reaginic and precipitating antibodies to these antigens. Results demonstrated that a significant number of the individuals who were tested had positive skin test and RAST responses to tobacco leaf antigens, whereas only a small number responded to smoke antigens. RAST or skin test responses of study subjects to leaf or smoke antigens did not correlate with symptoms of tobacco-smoke "sensitivity" or smoking history but did correlate with atopic status. Precipitins were detected only to tobacco leaf C in 46 of the 93 individuals who were tested but did not correlate with smoking history or smoke "sensitivity." These results suggest that subjective tobacco-smoke sensitivity is not caused by hypersensitivity to tobacco leaf or smoke antigens. abstract_id: PUBMED:29883409 Tobacco Smoke Induces and Alters Immune Responses in the Lung Triggering Inflammation, Allergy, Asthma and Other Lung Diseases: A Mechanistic Review. Many studies have been undertaken to reveal how tobacco smoke skews immune responses contributing to the development of chronic obstructive pulmonary disease (COPD) and other lung diseases. Recently, environmental tobacco smoke (ETS) has been linked with asthma and allergic diseases in children. This review presents the most current knowledge on the exact molecular mechanisms responsible for the skewed inflammatory profile that aggravates inflammation, promotes infections, induces tissue damage, and may promote the development of allergy in individuals exposed to ETS.
We demonstrate how the imbalance between oxidants and antioxidants resulting from exposure to tobacco smoke leads to oxidative stress, increased mucosal inflammation, and increased expression of inflammatory cytokines such as interleukin (IL)-8, IL-6, and tumor necrosis factor α (TNF-α). Direct cellular effects of ETS on epithelial cells result in increased permeability, mucus overproduction, impaired mucociliary clearance, increased release of proinflammatory cytokines and chemokines, enhanced recruitment of macrophages and neutrophils, and a disturbed lymphocyte balance towards Th2. The plethora of presented phenomena fully justifies a restrictive policy aiming at limiting the domestic and public exposure to ETS. abstract_id: PUBMED:20731846 Early exposure to secondhand tobacco smoke and the development of allergic diseases in 4 year old children in Malmö, Sweden.
Additionally, tobacco smoke has been associated with a range of inflammatory airway conditions, including chronic obstructive pulmonary disease (COPD), asthma, bronchitis, and alveolar hypersensitivity (PUBMED:23026151). Furthermore, tobacco smoke has been found to induce and alter immune responses in the lung, triggering inflammation, allergy, asthma, and other lung diseases (PUBMED:29883409). It can lead to oxidative stress, increased mucosal inflammation, and an imbalance in lymphocyte populations towards a Th2 response, which is associated with allergic reactions. Paternal tobacco smoke (PTS) exposure has also been linked to an increased risk of asthma in offspring, with prenatal exposure leading to epigenetic modifications in immune genes that are retained from the newborn stage to 6 years of age (PUBMED:31214241). In summary, the evidence does not support the notion that tobacco smoke prevents atopic disorders. Instead, it indicates that tobacco smoke exposure is a risk factor for the development and exacerbation of allergic diseases and asthma.
Instruction: Noninvasive diagnosis of hepatocellular carcinoma on gadoxetic acid-enhanced MRI: can hypointensity on the hepatobiliary phase be used as an alternative to washout? Abstracts: abstract_id: PUBMED:25773941 Noninvasive diagnosis of hepatocellular carcinoma on gadoxetic acid-enhanced MRI: can hypointensity on the hepatobiliary phase be used as an alternative to washout? Objectives: To determine which dynamic phase(s) of gadoxetic acid-enhanced MRI is most appropriate to assess "washout" in the noninvasive diagnosis of hepatocellular carcinoma (HCC) based on hemodynamic pattern. Methods: In this retrospective cohort study, 288 consecutive patients with chronic liver disease presented with 387 arterially enhancing nodules (292 HCCs, 95 non-HCCs) (≥1 cm) on gadoxetic acid-enhanced MRI. All HCCs were confirmed by histopathology or by their typical enhancement pattern on dynamic liver CT. MR imaging diagnosis of HCC was made using criteria of arterial enhancement and hypointensity relative to the surrounding parenchyma (1) on the portal-venous phase (PVP), (2) on the PVP and/or transitional phase (TP), or (3) on the PVP and/or TP, and/or hepatobiliary phase (HBP). Results: For the noninvasive diagnosis of HCC, criterion 1 provided significantly higher specificity (97.9%; 95% confidence interval, 92.6 - 99.7%) than criteria 2 (86.3%; 77.7 - 92.5%), or 3 (48.4%; 38.0 - 58.9%). Conversely, higher sensitivity was obtained with criterion 3 (93.8%; 90.4 - 96.3%) than with criterion 2 (86.6%; 82.2 - 90.3%) or 1 (70.9%; 65.3 - 76.0%). Conclusions: To make a sufficiently specific diagnosis of HCC using gadoxetic acid-enhanced MRI based on typical enhancement features, washout should be determined on the PVP alone rather than combined with hypointensity on the TP or HBP. Key Points: • Gadoxetic acid-enhanced MRI enhancement features can be used to diagnose HCC. • Washout should be determined on the PVP alone for high specificity. • Hypointensity on the TP or HBP increases sensitivity but lowers specificity. abstract_id: PUBMED:30990381 Gadoxetic Acid-enhanced MRI of Hepatocellular Carcinoma: Value of Washout in Transitional and Hepatobiliary Phases. Background Current Liver Imaging Reporting and Data System guidelines define the washout appearance of gadoxetic acid-enhanced MRI only during the portal venous phase. Defining washout only during the portal venous phase may lead to lower sensitivity for diagnosis of hepatocellular carcinoma (HCC). Purpose To compare the diagnostic performances of three gadoxetic acid-enhanced MRI criteria for HCC according to the phases during which washout appearance was determined. Materials and Methods In this retrospective study, patients with a hepatic nodule detected at US surveillance for HCC from January to December 2012 underwent gadoxetic acid-enhanced MRI. Three diagnostic MRI criteria for HCC were defined according to the phases during which washout appearance was observed, with the presence of arterial phase hyperenhancement and hypointensity noted (a) only during the portal venous phase, with washout confined to the portal venous phase; (b) during the portal venous phase or transitional phase, with washout extended to the transitional phase; or (c) during the portal venous, transitional, or hepatobiliary phase, with washout extended to the hepatobiliary phase. If a nodule showed marked T2 hyperintensity or a targetoid appearance, it was precluded from the diagnosis of HCC. 
The sensitivity and specificity were compared by using a generalized estimating equation. Results A total of 178 patients were included (mean age ± standard deviation, 55.3 years ± 9.1) with 203 surgically confirmed hepatic nodules (186 HCCs and 17 non-HCCs) measuring 3.0 cm or smaller. The sensitivity with washout extended to the hepatobiliary phase (95.2% [177 of 186]) was better than that with washout extended to the transitional phase (90.9% [169 of 186]; P = .01) and washout confined to the portal venous phase (75.3% [140 of 186]; P < .01). The specificity with extensions of washout to the transitional phase and hepatobiliary phase (82% [14 of 17] for both) was similar to that obtained with washout confined to the portal venous phase (94.1% [16 of 17]) (P = .47). Conclusion After exclusion of typical hemangiomas and nodules with a targetoid appearance, extending washout appearance to the transitional or hepatobiliary phase (instead of restricting it to the portal venous phase) allowed higher sensitivity without a reduction in specificity. © RSNA, 2019 See also the editorial by Fowler and Sirlin in this issue. abstract_id: PUBMED:27726242 Washout appearance in Gd-EOB-DTPA-enhanced MR imaging: A differentiating feature between hepatocellular carcinoma with paradoxical uptake on the hepatobiliary phase and focal nodular hyperplasia-like nodules. Purpose: To identify the most reliable imaging features for differentiating hepatocellular carcinoma with paradoxical uptake on the hepatobiliary phase (HCCpara) from focal nodular hyperplasia (FNH)-like nodules using Gd-EOB-DTPA-enhanced MRI. Materials And Methods: This was a retrospective study. Twenty patients with HCCpara and 21 patients with FNH-like nodules were included. The following MRI features were evaluated using a 3.0 Tesla unit by two radiologists: signal intensity (SI) on T1-, T2-, and diffusion-weighted imaging (DWI), arterial enhancement pattern, washout appearance on the portal venous phase (PVP) and/or transitional phase (TP), uptake pattern on the hepatobiliary phase (HBP), "T2 scar," "EOB scar," and chemical shift on in- and out-of-phase images. Multivariate logistic regression analysis was performed to assess MRI features for prediction of HCCpara. Results: Compared with FNH-like nodules, HCCpara had significantly more frequent heterogeneous T1 SI (P < 0.0001), T2 hyperintensity (P = 0.032), heterogeneous arterial enhancement (P < 0.0001), washout appearance on the PVP and/or TP (P < 0.0001), heterogeneous uptake on the HBP (P < 0.0001), absence of "EOB scar" (P < 0.0001), and hyperintensity on DWI (P = 0.004). Multivariate logistic regression analysis revealed washout appearance as the only independent imaging feature associated with HCCpara (odds ratio, 7.019; P = 0.042). Washout appearance also showed the best diagnostic performance with a sensitivity of 90% and a specificity of 100%. Conclusion: Washout appearance on the PVP and/or TP is the most reliable imaging feature for differentiating HCCpara from FNH-like nodules. Level Of Evidence: 3. J. MAGN. RESON. IMAGING 2017;45:1599-1608. abstract_id: PUBMED:30255250 Retrospective validation of a new diagnostic criterion for hepatocellular carcinoma on gadoxetic acid-enhanced MRI: can hypointensity on the hepatobiliary phase be used as an alternative to washout with the aid of ancillary features?
Objectives: To validate new diagnostic criteria for hepatocellular carcinoma (HCC) on gadoxetic acid-enhanced MR imaging (Gd-EOB-MRI) using hypointensity on the hepatobiliary phase (HBP) as an alternative to washout in combination with ancillary features. Methods: This retrospective study included 288 patients at high risk for HCC with 387 nodules (HCCs, n=292; non-HCCs, n=95) showing arterial phase hyper-enhancement (APHE) ≥1 cm on Gd-EOB-MRI. Imaging diagnoses of HCCs were made using different criteria: APHE plus hypointensity on the portal venous phase (PVP) (criterion 1), APHE plus hypointensity on the PVP and/or transitional phase (TP) (criterion 2), APHE plus hypointensity on the PVP and/or TP and/or HBP (criterion 3), and criterion 3 plus non-LR-1/2/M according to the Liver Imaging Reporting and Data System (LI-RADS) v2017 considering ancillary features (criterion 4). Sensitivities and specificities of those criteria were compared using McNemar's test. Results: Among diagnostic criteria for HCCs, criteria 3 and 4 showed significantly higher sensitivities (93.8% and 92.5%, respectively) than criteria 1 and 2 (70.9% and 86.6%, respectively) (p values <0.001). The specificity of criterion 4 (87.4%) was shown to be significantly higher than that of criterion 3 (48.4%, p<0.001), albeit comparable to criterion 2 (86.3%, p>0.999) and significantly lower than criterion 1 (97.9%, p=0.002). Conclusions: In the non-invasive diagnosis of HCCs on Gd-EOB-MRI, HBP hypointensity may be used as an alternative to washout, enabling a highly sensitive diagnosis with little loss in specificity if it is used after excluding nodules considered to be benignities or non-HCC malignancies based on characteristic imaging features. Key Points: • Gd-EOB-MRI enhancement and ancillary features can be used to diagnose HCC. • Exclusion of LR-1/2/M improves specificity when HBP hypointensity is used. abstract_id: PUBMED:20413759 Added value of gadoxetic acid-enhanced hepatobiliary phase MR imaging in the diagnosis of hepatocellular carcinoma. Purpose: To determine the added value of hepatobiliary phase images in gadoxetic acid-enhanced magnetic resonance (MR) imaging in the evaluation of hepatocellular carcinoma (HCC). Materials And Methods: The institutional review board approved this retrospective study and waived informed consent. Fifty-nine patients with 84 HCCs underwent gadoxetic acid-enhanced MR examinations that included 20-minute delayed hepatobiliary phase imaging. MR imaging was performed with a 1.5-T system in 19 patients and a 3.0-T system in 40 patients. A total of 113 hepatic nodules were documented for analysis. Three radiologists independently reviewed two sets of MR images: set 1, unenhanced (T1- and T2-weighted) and gadoxetic acid-enhanced dynamic images; set 2, hepatobiliary phase images and unenhanced and gadoxetic acid-enhanced dynamic images. For each observer, the diagnostic accuracy was compared by using the area under the alternative free-response receiver operating characteristic curve (A(z)). Sensitivity and specificity were also calculated and compared between the two sets. Results: For all observers, A(z) values were higher with the addition of the hepatobiliary phase. The observer who had the least experience in abdominal imaging (2 years) demonstrated significant improvement in A(z), from 0.895 in set 1 to 0.951 in set 2 (P = .049). Sensitivity increased with the addition of hepatobiliary phase images but did not reach statistical significance.
Nine HCCs (10.7%) in six patients (10.1%) were seen only on hepatobiliary phase images. Conclusion: Hepatobiliary phase images obtained after gadoxetic acid-enhanced dynamic MR imaging may improve diagnosis of HCC and assist in surgical planning. abstract_id: PUBMED:37404221 A case of focal nodular hyperplasia-like lesion presenting unusual signal intensity on the hepatobiliary phase of gadoxetic acid-enhanced magnetic resonance image. Focal nodular hyperplasia (FNH) or FNH-like lesions of the liver are benign lesions that can be mostly diagnosed by hepatobiliary phase gadoxetic acid-enhanced magnetic resonance imaging (MRI). Accurate imaging diagnosis is based on the fact that most FNHs or FNH-like lesions show characteristic hyper- or isointensity on hepatobiliary phase images. We report a case of an FNH-like lesion in a 73-year-old woman that mimicked a malignant tumor. Dynamic contrast-enhanced computed tomography (CT) and MRI using gadoxetic acid revealed an ill-defined nodule showing early enhancement in the arterial phase and gradual and prolonged enhancement in the portal and equilibrium/transitional phases. Hepatobiliary phase imaging revealed inhomogeneous hypointensity, accompanied by a slightly isointense area compared to the background liver. Angiography-assisted CT showed a portal perfusion defect of the nodule, inhomogeneous arterial blood supply in the early phase, and less internal enhancement in the late phase, accompanied by irregularly shaped peritumoral enhancement. No central stellate scar was identified in any of the images. Imaging findings could not exclude the possibility of hepatocellular carcinoma, but the nodule was pathologically diagnosed as an FNH-like lesion by partial hepatectomy. In the present case, an unusual inhomogeneous hypointensity on hepatobiliary phase imaging made it difficult to diagnose the FNH-like lesion. abstract_id: PUBMED:31808004 Diagnosis of Pre-HCC Disease by Hepatobiliary-Specific Contrast-Enhanced Magnetic Resonance Imaging: A Review. We first proposed a new concept, pre-hepatocellular carcinoma (HCC) disease, to describe the precancerous condition of HCC, which has received scant attention from clinicians. Pre-HCC disease is defined as chronic liver injury concurrent with hepatic low- or high-grade dysplastic nodular lesions. Precise diagnosis of pre-HCC disease may prevent or arrest HCC and contribute to relieving the HCC burden worldwide, although noninvasive diagnosis is difficult and biopsy is generally required. Fortunately, recent advances and extensive applications of hepatobiliary-specific contrast-enhanced magnetic resonance imaging will facilitate the noninvasive identification and characterization of pre-HCC disease. This review briefly discusses the new concept of pre-HCC disease and offers an overview of the role of hepatobiliary-specific contrast-enhanced magnetic resonance imaging for the diagnosis of pre-HCC disease. abstract_id: PUBMED:28742376 Hypervascular Transformation of Hypovascular Hypointense Nodules in the Hepatobiliary Phase of Gadoxetic Acid-Enhanced MRI: A Systematic Review and Meta-Analysis. Objective: The purpose of this study is to evaluate the outcomes of hypovascular hypointense nodules in the hepatobiliary phase of gadoxetic acid-enhanced MRI and the risk factors for the hypervascular transformation of the nodules through a systematic review and meta-analysis.
Materials And Methods: We searched the Ovid-MEDLINE and EMBASE databases for published studies of hypovascular hypointense nodules in patients with chronic liver disease. The pooled proportions of the overall and cumulative incidence rates at 1, 2, and 3 years for the transformation of hypovascular hypointense nodules into hypervascular hepatocellular carcinomas (HCCs) were assessed by using random-effects modeling. Metaregression analysis was performed. Results: Sixteen eligible studies with 944 patients and 1819 hypovascular hypointense nodules in total were included. The pooled overall rate of hypervascular transformation was 28.2% (95% CI, 22.7-33.6%; I2 = 87.46%). The pooled 1-, 2-, and 3-year cumulative incidence rates were 18.3% (95% CI, 9.2-27.4%), 25.2% (95% CI, 12.2-38.2%), and 30.3% (95% CI, 18.8-41.9%), respectively. The metaregression analysis revealed that the mean initial nodule size (cutoff value, 9 mm) was a significant factor affecting the heterogeneity of malignant transformation. Conclusion: Hypovascular hypointense nodules detected in the hepatobiliary phase of gadoxetic acid-enhanced MRI carry a significant potential of transforming into hypervascular HCCs. The size of nodules is a significant risk factor for hypervascular transformation. abstract_id: PUBMED:34298844 Characteristics and Lenvatinib Treatment Response of Unresectable Hepatocellular Carcinoma with Iso-High Intensity in the Hepatobiliary Phase of EOB-MRI. In hepatocellular carcinoma (HCC), CTNNB-1 mutations, which cause resistance to immune checkpoint inhibitors, are associated with HCC with iso-high intensity in the hepatobiliary phase of gadoxetic acid-enhanced magnetic resonance imaging (EOB-MRI) in resectable HCC; however, analyses on unresectable HCC are lacking. This study analyzed the prevalence, characteristics, response to lenvatinib, and CTNNB-1 mutation frequency in unresectable HCC with iso-high intensity in the hepatobiliary phase of EOB-MRI. In 52 patients with unresectable HCC treated with lenvatinib, the prevalence of iso-high intensity in the hepatobiliary phase of EOB-MRI was 13%. All patients had multiple HCCs, and 3 patients had multiple HCCs with iso-high intensity in the hepatobiliary phase of EOB-MRI. Lenvatinib response, progression-free survival, and overall survival were similar between patients with or without iso-high intensity in the hepatobiliary phase of EOB-MRI. Seven patients (three and four patients who had unresectable HCC with or without iso-high intensity in the hepatobiliary phase of EOB-MRI, respectively) underwent genetic analyses. Among these, two (67%, 2/3) who had HCC with iso-high intensity in the hepatobiliary phase of EOB-MRI carried a CTNNB-1 mutation, while all four patients who had HCC without iso-high intensity in the hepatobiliary phase of EOB-MRI did not carry the CTNNB-1 mutation. This study's findings have clinical implications for the detection and treatment of HCC with iso-high intensity in the hepatobiliary phase of EOB-MRI. abstract_id: PUBMED:33660458 Intraindividual Comparison of Hepatocellular Carcinoma Washout between MRIs with Hepatobiliary and Extracellular Contrast Agents. Objective: To intraindividually compare hepatocellular carcinoma (HCC) washout between MRIs using hepatobiliary agent (HBA) and extracellular agent (ECA).
Materials And Methods: This study included 114 prospectively enrolled patients with chronic liver disease (mean age, 55 ± 9 years; 94 men) who underwent both HBA-MRI and ECA-MRI before surgical resection for HCC between November 2016 and May 2019. For 114 HCCs, the lesion-to-liver visual signal intensity ratio (SIR) using a 5-point scale (-2 to +2) was evaluated in each phase. Washout was defined as negative visual SIR with temporal reduction of visual SIR from the arterial phase. Illusional washout (IW) was defined as a visual SIR of 0 with an enhancing capsule. The frequency of washout and MRI sensitivity for HCC using LR-5 or its modifications were compared between HBA-MRI and ECA-MRI. Subgroup analysis was performed according to lesion size (< 20 mm or ≥ 20 mm). Results: The frequency of portal venous phase (PP) washout with HBA-MRI was comparable to that of delayed phase (DP) washout with ECA-MRI (77.2% [88/114] vs 68.4% [78/114]; p = 0.134). The frequencies were also comparable when IW was allowed (79.8% [91/114] for HBA-MRI vs 81.6% [93/114] for ECA-MRI; p = 0.845). The sensitivities for HCC of LR-5 (using PP or DP washout) were comparable between HBA-MRI and ECA-MRI (78.1% [89/114] vs 73.7% [84/114]; p = 0.458). In HCCs < 20 mm, the sensitivity of LR-5 was higher on HBA-MRI than on ECA-MRI (70.8% [34/48] vs 50.0% [24/48]; p = 0.034). The sensitivities were similar when IW was added to LR-5 (72.9% [35/48] for HBA-MRI vs 70.8% [34/48] for ECA-MRI; p > .999). Conclusion: Extracellular phase washout for HCC diagnosis was comparable between MRIs with both contrast agents, except for tumors < 20 mm. Adding IW could improve the sensitivity for HCC on ECA-MRI in tumors < 20 mm. Answer: The noninvasive diagnosis of hepatocellular carcinoma (HCC) on gadoxetic acid-enhanced MRI has been a subject of research to determine the most reliable imaging features. The use of hypointensity on the hepatobiliary phase (HBP) as an alternative to the traditional washout feature has been explored in several studies. One study found that using hypointensity on the HBP alone resulted in a significantly lower specificity for HCC diagnosis compared to using washout determined on the portal venous phase (PVP) alone. However, including hypointensity on the HBP increased sensitivity. The study concluded that for a specific diagnosis of HCC, washout should be assessed on the PVP alone rather than combined with hypointensity on the transitional phase (TP) or HBP (PUBMED:25773941). Another study suggested that extending the washout appearance to the transitional phase or HBP, instead of restricting it to the PVP, allowed higher sensitivity without a reduction in specificity after excluding typical hemangiomas and nodules with a targetoid appearance (PUBMED:30990381). A different study indicated that washout appearance on the PVP and/or TP is the most reliable imaging feature for differentiating HCC with paradoxical uptake on the HBP from focal nodular hyperplasia (FNH)-like nodules, showing the best diagnostic performance with high sensitivity and specificity (PUBMED:27726242). Further research validated new diagnostic criteria for HCC on gadoxetic acid-enhanced MRI using hypointensity on the HBP as an alternative to washout, in combination with ancillary features. This approach showed high sensitivity with little loss in specificity when excluding nodules considered benign or non-HCC malignancies based on characteristic imaging features (PUBMED:30255250).
The addition of hepatobiliary phase images after gadoxetic acid-enhanced dynamic MR imaging may improve the diagnosis of HCC and assist in surgical planning, as some HCCs were seen only on hepatobiliary phase images (PUBMED:20413759). In summary, while hypointensity on the HBP can be used as an alternative to washout in the noninvasive diagnosis of HCC on gadoxetic acid-enhanced MRI, it is most effective when combined with other imaging features and when nodules with certain benign characteristics are excluded. The choice between using hypointensity on the HBP or traditional washout features depends on the clinical context and the need to balance sensitivity and specificity. For instance, in cases where high specificity is crucial, washout should be determined on the PVP alone. However, when higher sensitivity is desired, extending the assessment of washout to include the TP and HBP may be beneficial, especially if ancillary features are also considered to exclude benign lesions or non-HCC malignancies (PUBMED:30255250). In the context of lesions with unusual imaging characteristics, such as FNH-like lesions that present atypical signal intensity on the hepatobiliary phase, these lesions can mimic malignant tumors and make diagnosis challenging, further complicating the use of hypointensity on the HBP as a standalone feature (PUBMED:37404221). Additionally, hepatobiliary-specific contrast-enhanced MRI has been suggested to play a role in the diagnosis of pre-HCC disease, which is a precancerous condition of HCC. This noninvasive imaging technique may facilitate the identification and characterization of pre-HCC disease, potentially preventing or arresting the development of HCC (PUBMED:31808004). Lastly, a systematic review and meta-analysis on hypovascular hypointense nodules in the hepatobiliary phase indicated that these nodules carry a significant potential of transforming into hypervascular HCCs, with nodule size being a significant risk factor for hypervascular transformation (PUBMED:28742376). In conclusion, while hypointensity on the hepatobiliary phase can be considered as an alternative to washout in certain scenarios, it should be used judiciously and in conjunction with other imaging features to improve the accuracy of HCC diagnosis on gadoxetic acid-enhanced MRI.
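To make the sensitivity-specificity trade-off described in this answer concrete, the following minimal Python sketch computes both metrics for two diagnostic criteria applied to the same set of lesions. All counts are hypothetical, invented purely for illustration; they are not taken from any of the cited studies.

```python
# Minimal sketch: sensitivity/specificity of two HCC criteria (hypothetical counts).

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true HCCs the criterion calls positive."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of non-HCC lesions the criterion calls negative."""
    return tn / (tn + fp)

# Hypothetical 2x2 counts for each criterion on the same 200 lesions.
criteria = {
    "washout on PVP alone": {"tp": 70, "fn": 30, "tn": 95, "fp": 5},
    "washout extended to TP/HBP hypointensity": {"tp": 88, "fn": 12, "tn": 80, "fp": 20},
}

for name, c in criteria.items():
    print(f"{name}: sensitivity={sensitivity(c['tp'], c['fn']):.2f}, "
          f"specificity={specificity(c['tn'], c['fp']):.2f}")
```

With these invented numbers, extending washout to the TP/HBP raises sensitivity (0.70 to 0.88) at the cost of specificity (0.95 to 0.80), which is the qualitative pattern the answer describes.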
Instruction: Do socioeconomic inequalities in mortality vary between different Spanish cities? Abstracts: abstract_id: PUBMED:32033162 Effect of the Financial Crisis on Socioeconomic Inequalities in Mortality in Small Areas in Seven Spanish Cities. Background: The aim of this study was to analyze the trend in socioeconomic inequalities in mortality in small areas due to several specific causes before (2001-2004, 2005-2008) and during (2009-2012) the economic crisis in seven Spanish cities. Methods: This ecological study of trends, with census tracts as the areas of analysis, was based on three periods. Several causes of death were studied. A socioeconomic deprivation index was calculated for each census tract. For each small area, we estimated standardized mortality ratios, and controlled for their variability using Bayesian models (sSMR). We also estimated the relative risk of mortality according to deprivation in the different cities, periods, and sexes. Results: In general, a similar geographical pattern was found for the socioeconomic deprivation index and sSMR. For men, there was an association in all cities between the deprivation index and all-cause mortality that remained stable over the three periods. For women, there was an association in Barcelona, Granada, and Sevilla between the deprivation index and all-cause mortality in the third period. Patterns by causes of death were more heterogeneous. Conclusions: After the start of the financial crisis, socioeconomic inequalities in total mortality in small areas of Spanish cities remained stable in most cities, although several causes of death showed a different pattern. abstract_id: PUBMED:27473140 Trends in socioeconomic inequalities in mortality in small areas of 33 Spanish cities. Background: In Spain, several ecological studies have analyzed trends in socioeconomic inequalities in mortality from all causes in urban areas over time. However, the results of these studies are quite heterogeneous, finding, in general, that inequalities decreased or remained stable. Therefore, the objectives of this study are: (1) to identify trends in geographical inequalities in all-cause mortality in the census tracts of 33 Spanish cities between the two periods 1996-1998 and 2005-2007; (2) to analyse trends in the relationship between these geographical inequalities and socioeconomic deprivation; and (3) to obtain an overall measure which summarises the relationship found in each one of the cities and to analyse its variation over time. Methods: Ecological study of trends with 2 cross-sectional cuts, corresponding to two periods of analysis: 1996-1998 and 2005-2007. Units of analysis were census tracts of the 33 Spanish cities. A deprivation index calculated for each census tract in all cities was included as a covariate. A Bayesian hierarchical model was used to estimate smoothed Standardized Mortality Ratios (sSMR) for each census tract and period. The geographical distribution of these sSMR was represented using maps of septiles. In addition, two different Bayesian hierarchical models were used to measure the association between all-cause mortality and the deprivation index in each city and period, and by sex: (1) including the association as a fixed effect for each city; (2) including the association as random effects. In both models, the spatial structure of the data can be controlled for within each city. The association in each city was measured using relative risks (RR) and their 95 % credible intervals (95 % CI).
Results: For most cities and in both sexes, mortality rates decline over time. For women, the mortality and deprivation patterns are similar in the first period, while in the second they are different for most cities. For men, RRs remain stable over time in 29 cities, diminish in 3, and increase in 1. For women, in 30 cities, a non-significant change over time in RR is observed. However, in 4 cities RR diminishes. In overall terms, inequalities decrease (with a probability of 0.9) in both men (RR = 1.13, 95 % CI = 1.12-1.15 in the 1st period; RR = 1.11, 95 % CI = 1.09-1.13 in the 2nd period) and women (RR = 1.07, 95 % CI = 1.05-1.08 in the 1st period; RR = 1.04, 95 % CI = 1.02-1.06 in the 2nd period). Conclusions: In the future, it is important to conduct further trend studies, making it possible to monitor trends in socioeconomic inequalities in mortality and to identify (among other things) temporal factors that may influence these inequalities. abstract_id: PUBMED:24567425 Socioeconomic inequalities in mortality in 16 European cities. Aims: To explore inequalities in total mortality between small areas of 16 European cities for men and women, as well as to analyse the relationship between these geographical inequalities and their socioeconomic indicators. Methods: A cross-sectional ecological design was used to analyse small areas in 16 European cities (26,229,104 inhabitants). Most cities had mortality data for a period between 2000 and 2008 and population size data for the same period. Socioeconomic indicators included an index of socioeconomic deprivation, unemployment, and educational level. We estimated standardised mortality ratios and controlled for their variability using Bayesian models. We estimated relative risk of mortality and excess number of deaths according to socioeconomic indicators. Results: We observed a consistent pattern of inequality in mortality in almost all cities, with mortality increasing in parallel with socioeconomic deprivation. Socioeconomic inequalities in mortality were more pronounced for men than women, and relative inequalities were greater in Eastern and Northern European cities, and lower in some Western (men) and Southern (women) European cities. The pattern of excess number of deaths was slightly different, with greater inequality in some Western and Northern European cities and also in Budapest, and lower among women in Madrid and Barcelona. Conclusions: In this study, we report a consistent pattern of socioeconomic inequalities in mortality in 16 European cities. Future studies should further explore specific causes of death, in order to determine whether the general pattern observed is consistent for each cause of death. abstract_id: PUBMED:25631857 Socioeconomic inequalities in cause-specific mortality in 15 European cities. Background: Socioeconomic inequalities are increasingly recognised as an important public health issue, although their role in the leading causes of mortality in urban areas in Europe has not been fully evaluated. In this study, we used data from the INEQ-CITIES study to analyse inequalities in cause-specific mortality in 15 European cities at the beginning of the 21st century. Methods: A cross-sectional ecological study was carried out to analyse 9 of the leading specific causes of death in small areas from 15 European cities.
Using a hierarchical Bayesian spatial model, we estimated smoothed Standardized Mortality Ratios, relative risks and 95% credible intervals for cause-specific mortality in relation to a socioeconomic deprivation index, separately for men and women. Results: We detected spatial socioeconomic inequalities for most causes of mortality studied, although these inequalities differed markedly between cities, being more pronounced in Northern and Central-Eastern Europe. In the majority of cities, most of these causes of death were positively associated with deprivation among men, with the exception of prostatic cancer. Among women, diabetes, ischaemic heart disease, chronic liver diseases and respiratory diseases were also positively associated with deprivation in most cities. Lung cancer mortality was positively associated with deprivation in Northern European cities and in Kosice, but this association was non-existent or even negative in Southern European cities. Finally, breast cancer risk was inversely associated with deprivation in three Southern European cities. Conclusions: The results confirm the existence of socioeconomic inequalities in many of the main causes of mortality, and reveal variations in their magnitude between different European cities. abstract_id: PUBMED:25879739 Trends in socioeconomic inequalities in preventable mortality in urban areas of 33 Spanish cities, 1996-2007 (MEDEA project). Background: Preventable mortality is a good indicator of possible problems to be investigated in the primary prevention chain, making it also a useful tool with which to evaluate health policies, particularly public health policies. This study describes inequalities in preventable avoidable mortality in relation to socioeconomic status in small urban areas of thirty-three Spanish cities, and analyses their evolution over the course of the periods 1996-2001 and 2002-2007. Methods: We analysed census tracts and all deaths occurring in the population residing in these cities from 1996 to 2007 were taken into account. The causes included in the study were lung cancer, cirrhosis, AIDS/HIV, motor vehicle traffic accident injuries, suicide and homicide. The census tracts were classified into three groups, according to their socioeconomic level. To analyse inequalities in mortality risks between the highest and lowest socioeconomic levels and over different periods, for each city and separating by sex, Poisson regression models were used. Results: Preventable avoidable mortality made a significant contribution to general mortality (around 7.5%, higher among men), having decreased over time in men (12.7% in 1996-2001 and 10.9% in 2002-2007), though not so clearly among women (3.3% in 1996-2001 and 2.9% in 2002-2007). It has been observed in men that the risks of death are higher in areas of greater deprivation, and that these excesses have not changed over time. The result in women is different and differences in mortality risks by socioeconomic level could not be established in many cities. Conclusions: Preventable mortality decreased between the 1996-2001 and 2002-2007 periods, more markedly in men than in women. There were socioeconomic inequalities in mortality in most cities analysed, associating a higher risk of death with higher levels of deprivation. Inequalities have remained over the two periods analysed. This study makes it possible to identify those areas where excess preventable mortality was associated with more deprived zones.
It is in these deprived zones where actions to reduce and monitor health inequalities should be put into place. Primary healthcare may play an important role in this process. abstract_id: PUBMED:24690471 Trends in socioeconomic inequalities in amenable mortality in urban areas of Spanish cities, 1996-2007. Background: While research continues into indicators such as preventable and amenable mortality in order to evaluate quality, access, and equity in healthcare, it is also necessary to continue identifying the areas of greatest risk owing to these causes of death in urban areas of large cities, where a large part of the population is concentrated, in order to carry out specific actions and reduce inequalities in mortality. This study describes inequalities in amenable mortality in relation to socioeconomic status in small urban areas, and analyses their evolution over the course of the periods 1996-99, 2000-2003 and 2004-2007 in three major cities on the Spanish Mediterranean coast (Alicante, Castellón, and Valencia). Methods: All deaths attributed to amenable causes were analysed among non-institutionalised residents in the three cities studied over the course of the study periods. Census tracts for the cities were grouped into 3 socioeconomic status levels, from higher to lower levels of deprivation, using 5 indicators obtained from the 2001 Spanish Population Census. For each city, the relative risks of death were estimated between socioeconomic status levels using Poisson regression models, adjusted for age and study period, and distinguishing between genders. Results: Amenable mortality contributes significantly to general mortality (around 10%, higher among men), having decreased over time in the three cities studied for men and women. In the three cities studied, with a high degree of consistency, it has been seen that the risks of mortality are greater in areas of higher deprivation, and that these excesses have not changed significantly over time. Conclusions: Although amenable mortality decreases over the time period studied, the socioeconomic inequalities observed are maintained in the three cities. Areas have been identified that display excesses in amenable mortality, potentially attributable to differences in the healthcare system, associated with areas of greater deprivation. Action must be taken in these areas of greater inequality in order to reduce the health inequalities detected. The causes behind socioeconomic inequalities in amenable mortality must be studied in depth. abstract_id: PUBMED:23679869 Do socioeconomic inequalities in mortality vary between different Spanish cities? a pooled cross-sectional analysis. Background: The relationship between deprivation and mortality in urban settings is well established. This relationship has been found for several causes of death in Spanish cities in independent analyses (the MEDEA project). However, no joint analysis which pools the strength of this relationship across several cities has ever been undertaken. Such an analysis would determine, if appropriate, a joint relationship by linking the associations found. Methods: A pooled cross-sectional analysis of the data from the MEDEA project has been carried out for each of the causes of death studied. Specifically, a meta-analysis has been carried out to pool the relative risks in eleven Spanish cities. Different deprivation-mortality relationships across the cities are considered in the analysis (fixed and random effects models).
The size of the cities is also considered as a possible factor explaining differences between cities. Results: Twenty studies have been carried out for different combinations of sex and causes of death. For nine of them (men: prostate cancer, diabetes, mental illnesses, Alzheimer's disease, cerebrovascular disease; women: diabetes, mental illnesses, respiratory diseases, cirrhosis) no differences were found between cities in the effect of deprivation on mortality; in four cases (men: respiratory diseases, all causes of mortality; women: breast cancer, Alzheimer's disease) differences not associated with the size of the city have been determined; in two cases (men: cirrhosis; women: lung cancer) differences strictly linked to the size of the city have been determined, and in five cases (men: lung cancer, ischaemic heart disease; women: ischaemic heart disease, cerebrovascular diseases, all causes of mortality) both kinds of differences have been found. Except for lung cancer in women, every significant relationship between deprivation and mortality goes in the same direction: deprivation increases mortality. Variability in the relative risks across cities was found for general mortality for both sexes. Conclusions: This study provides a general overview of the relationship between deprivation and mortality for a sample of large Spanish cities combined. This joint study allows the exploration of and, if appropriate, the quantification of the variability in that relationship for the set of cities considered. abstract_id: PUBMED:24112963 Socioeconomic inequalities in injury mortality in small areas of 15 European cities. This study analysed socioeconomic inequalities in mortality due to injuries in small areas of 15 European cities, by sex, at the beginning of this century. A cross-sectional ecological study with units of analysis being small areas within 15 European cities was conducted. Relative risks of injury mortality associated with the socioeconomic deprivation index were estimated using a hierarchical Bayesian model. The number of small areas varies from 17 in Bratislava to 2666 in Turin. The median population per small area varies by city (e.g. Turin had 274 inhabitants per area while Budapest had 76,970). Socioeconomic inequalities in all injury mortality are observed in the majority of cities and are more pronounced in men. In the cities of northern and western Europe, socioeconomic inequalities in injury mortality are found for most types of injuries. These inequalities are not significant in the majority of cities in southern Europe among women and in the majority of central eastern European cities for both sexes. The results confirm the existence of socioeconomic inequalities in injury-related mortality and reveal variations in their magnitude between different European cities. abstract_id: PUBMED:32899994 Changes in Socioeconomic Inequalities in Amenable Mortality after the Economic Crisis in Cities of the Spanish Mediterranean Coast. Several studies have described a decreasing trend in amenable mortality, as well as the existence of socioeconomic inequalities that affect it. However, their evolution, particularly in small urban areas, has largely been overlooked. The aim of this study is to analyse the socioeconomic inequalities in amenable mortality in three cities of the Valencian Community, namely, Alicante, Castellon, and Valencia, as well as their evolution before and after the start of the economic crisis (2000-2007 and 2008-2015).
The units of analysis were the census tracts, and a deprivation index was calculated to classify them according to their level of socioeconomic deprivation. Deaths and population were also grouped by sex, age group, period, and five levels of deprivation. The specific rates by sex, age group, deprivation level, and period were calculated for the total number of deaths due to all causes and amenable mortality, and Poisson regression models were fitted in order to estimate the relative risks. This study confirms that the inequalities between areas of greater and lesser deprivation in both all-cause mortality and amenable mortality persisted across the two study periods in the three cities. It also shows that these inequalities appear with greater risk of death in the areas of greatest deprivation, although not uniformly. In general, the risks of death from all causes and amenable mortality have decreased significantly from one period to the other, although not in all the groups studied. The evolution of death risks from before the onset of the crisis to the period after showed, overall, a general pro-cyclical trend. However, there are population subgroups for which the trend was counter-cyclical. The use of the deprivation index has made it possible to identify specific geographical areas with vulnerable populations in all three cities and, at the same time, to identify the change in the level of deprivation (ascending or descending) of the geographical areas throughout the two periods. It is precisely these areas where more attention is needed in order to reduce inequalities. abstract_id: PUBMED:32408791 Socioeconomic Inequalities in Small-for-Gestational-Age Births Before and During the Economic Crisis: An Ecologic Study of Trends in 13 Spanish Cities. Economic recessions can increase socioeconomic inequalities in health. The objective of this study was to analyze socioeconomic inequalities in small-for-gestational-age (SGA) births before and during the Spanish economic crisis. We conducted an ecological study of trends based on 2 periods before the crisis (1999-2003 and 2004-2008) and another during the crisis (2009-2013). The study population was Spanish women resident in 13 cities who had given birth during 1999-2013. The prevalence of SGA was calculated for each census tract. A hierarchical Bayesian model was used to obtain the prevalence ratio (PR) and 95% credible intervals (CI). We analyzed the association between SGA and socioeconomic deprivation in each period for each city and for 3 age groups. The PR was above 1 and statistically significant for all 3 time periods in most of the 13 cities. The differences in PR between periods were only statistically significant for Madrid (PR = 1.56, 95% CI 1.48-1.65 for 1999-2003; PR = 1.28, 95% CI 1.19-1.38 for 2004-2008) and Barcelona (PR = 0.99, 95% CI 0.87-1.12 for 2004-2008; PR = 1.20, 95% CI 1.05-1.36 for 2009-2013). Socioeconomic inequalities in SGA births in small areas (census tracts) of most Spanish cities studied remained stable before and during the economic crisis. Answer: Yes, socioeconomic inequalities in mortality do vary between different Spanish cities. A pooled cross-sectional analysis of data from the MEDEA project found that the relationship between deprivation and mortality varied across different Spanish cities. The analysis considered different deprivation-mortality relationships across the cities and took into account the size of the cities as a possible factor explaining differences between cities.
The study found variability in the relative risks across cities for general mortality in both sexes, and, except for lung cancer in women, every significant relationship between deprivation and mortality indicated that deprivation increases mortality (PUBMED:23679869). Moreover, studies have shown that socioeconomic inequalities in mortality were more pronounced for men than for women, and relative inequalities were greater in Eastern and Northern European cities, and lower in some Western (men) and Southern (women) European cities (PUBMED:24567425). In the context of the Spanish cities, the trends in socioeconomic inequalities in mortality in small areas of 33 Spanish cities were heterogeneous, with some studies finding that inequalities decreased or remained stable (PUBMED:27473140). However, after the start of the financial crisis, socioeconomic inequalities in total mortality in small areas of Spanish cities remained stable in most cities, although several causes of death showed a different pattern (PUBMED:32033162). In summary, socioeconomic inequalities in mortality do indeed vary between different Spanish cities, with some cities showing stable inequalities while others exhibit changes, and the magnitude of these inequalities can differ based on gender and geographic location within Europe.
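The small-area analyses cited in this answer all rest on standardized mortality ratios (SMRs) that are then smoothed with Bayesian hierarchical models. The sketch below shows the underlying arithmetic with hypothetical reference rates and census-tract data; the simple shrinkage step is only a stand-in for the hierarchical Bayesian smoothing (sSMR) actually used in the MEDEA-style studies.

```python
# Minimal sketch of an indirectly standardized mortality ratio (SMR) per census
# tract, with crude shrinkage toward 1.0 as a stand-in for Bayesian smoothing.
# Reference rates and tract data are hypothetical.

reference_rates = {"45-64": 0.004, "65-74": 0.015, "75+": 0.060}  # deaths per person-year

tracts = [
    {"name": "tract A", "observed": 30,
     "person_years": {"45-64": 2000, "65-74": 800, "75+": 300}},
    {"name": "tract B", "observed": 12,
     "person_years": {"45-64": 1500, "65-74": 500, "75+": 150}},
]

PRIOR_STRENGTH = 10.0  # arbitrary; larger values shrink small areas harder

for tract in tracts:
    # Expected deaths if the tract experienced the reference rates.
    expected = sum(reference_rates[age] * py
                   for age, py in tract["person_years"].items())
    smr = tract["observed"] / expected
    # Shrink unstable SMRs (those with small expected counts) toward the null value 1.0.
    weight = expected / (expected + PRIOR_STRENGTH)
    smoothed = weight * smr + (1 - weight) * 1.0
    print(f"{tract['name']}: expected={expected:.1f}, SMR={smr:.2f}, smoothed={smoothed:.2f}")
```

In the published analyses, the smoothed SMRs are mapped by census tract and regressed on a deprivation index (for example, with Poisson or hierarchical Bayesian models) to obtain the relative risks quoted above.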
Instruction: Does time pressure create barriers for people to receive preventive health services? Abstracts: abstract_id: PUBMED:25773470 Does time pressure create barriers for people to receive preventive health services? Objective: Regular use of recommended preventive health services can promote good health and prevent disease. However, individuals may forgo obtaining preventive care when they are busy with competing activities and commitments. This study examined whether time pressure related to work obligations creates barriers to obtaining needed preventive health services. Methods: Data from the 2002-2010 Medical Expenditure Panel Survey (MEPS) were used to measure the work hours of 61,034 employees (including 27,910 females) and their use of five preventive health services (flu vaccinations, routine check-ups, dental check-ups, mammograms and Pap smears). Multivariable logistic regression analyses were performed to test the association between working hours and use of each of those five services. Results: Individuals working long hours (>60 per week) were significantly less likely to obtain dental check-ups (OR=0.81, 95% CI: 0.72-0.91) and mammograms (OR=0.47, 95% CI: 0.31-0.73). Working 51-60 h weekly was associated with a lower likelihood of receiving a Pap smear (OR=0.67, 95% CI: 0.46-0.96). No association was found for flu vaccination. Conclusions: Time pressure from work might create barriers for people to receive particular preventive health services, such as breast cancer screening, cervical cancer screening and dental check-ups. Health practitioners should be aware of this particular source of barriers to care. abstract_id: PUBMED:8610196 Barriers to the use of preventive health care services for children. This article describes findings from interviews of parents targeted for outreach efforts that encouraged them to use Medicaid's Early and Periodic Screening, Diagnosis and Treatment (EPSDT) Program. Begun in the 1970s, the EPSDT program held out the promise of ensuring that needy children would receive comprehensive preventive care. With only one-third of eligible children in the United States receiving EPSDT checkups, the program has yet to fulfill its promise. This study sought to understand parents' perceptions of barriers to using EPSDT by interviewing (a) 110 parents who did not schedule EPSDT checkups for their children after being exposed to outreach efforts and (b) 30 parents who did. Although the EPSDT Program is designed to provide health care at no charge and to provide assistance with appointment scheduling and transportation, these low-income parents identified significant barriers to care. Reasons for not using EPSDT services included (a) competing family or personal issues and priorities; (b) perceived or actual barriers in the health care system; and (c) issues related directly to problems with the outreach efforts. Parents who successfully negotiated these barriers and received EPSDT services encountered additional barriers, for example, scheduling and transportation difficulties, long waiting room times, or care that they perceived to be either unresponsive to their medical needs or interpersonally disrespectful. The implications for future outreach efforts and improving access to preventive health care services are discussed. abstract_id: PUBMED:16787479 Barriers and strategies affecting the utilisation of primary preventive services for people with physical disabilities: a qualitative inquiry.
Individuals with physical disabilities are less likely to utilise primary preventive healthcare services than the general population. At the same time, they are at greater risk for secondary conditions and as likely as the general population to engage in health risk behaviours. This qualitative exploratory study had two principal objectives: (1) to investigate access barriers to obtaining preventive healthcare services for adults with physical disabilities and (2) to identify strategies to increase access to these services. We conducted five focus group interviews with adults (median age: 46) with various physically disabling conditions. Most participants were male Caucasians residing in Virginia, USA. Study participants reported a variety of barriers that prevented them from receiving the primary preventive services commonly recommended by the US Preventive Services Task Force. We used a health services framework to distinguish structural-environmental barriers (including inaccessible facilities and examination equipment) from process barriers (including a lack of disability-related provider knowledge, respect, and skilled assistance during office visits). Participants suggested a range of strategies to address these barriers, including disability-specific continuing education for providers, the development of accessible prevention-focused information portals for people with physical disabilities, and consumer self-education and assertiveness in requesting recommended services. Study findings point to the need for a more responsive healthcare system to effectively meet the primary prevention needs of people with physical disabilities. The authors propose the development of a consumer- and provider-focused resource and information kit that reflects the strategies that were suggested by study participants. abstract_id: PUBMED:8331981 Making "time" for preventive services. Although the implementation of clinical preventive services is a high priority on the national agenda and physicians acknowledge the importance of these services, implementation rates remain far below the target years after the recommendations have been released. Physicians repeatedly report that the reason for not providing preventive services is that they do not have "time." In this article, we identify attributes of the health-services system that create this phenomenon. We present evidence that formal delivery systems for preventive services must be developed if the "time" problem is to be solved, and we review why preventive-services systems need to be integrated into the current health-services system. Finally, we list the attributes that we believe a preventive-services system must have if it is to be successful. The success of clinical trials of such systems indicates that our goals of preventive services can be achieved if all persons who have an investment in clinical preventive services commit themselves to developing and supporting these systems.
Methods: Consecutive patient illness visits to 138 community family physicians were directly observed. Visits by patients who received at least one preventive service recommended by the US Preventive Services Task Force were compared with visits by patients not receiving any recommended preventive services, controlling for potentially confounding patient characteristics. Results: Among 3547 illness visits, preventive services were delivered during 39% of visits for chronic illness and 30% of visits for acute illness. Opportunistic health habits counseling occurred more frequently than screening or immunization. Visit satisfaction reported by 2454 patients using the Medical Outcomes Survey 9-item Visit Rating Scale was not different during illness visits with or without the delivery of preventive services. The duration of illness visits that included preventive services was an average of 2.1 minutes longer than illness visits without such interventions (95% confidence interval, 1.7-2.4). Conclusions: The delivery of preventive services during illness visits is common in community practice and is well accepted by patients. The expansion of an opportunistic approach to providing preventive services will require attention to time-efficient approaches. abstract_id: PUBMED:10162852 Barriers to preventive health services for minority households in the rural south. Health values, behaviors, and status are shaped by place of residence, region, race, and socio-economic status, among other social factors. Consequently, this article examines barriers to preventive health services for lower-income blacks in five rural counties in Georgia. Qualitative and quantitative data were collected through 281 household, 51 community leader, and six focus group interviews. Female respondents who had been pregnant were most likely to have received pregnancy-related services and all respondents least likely to have received vision and dental screenings. Six of the seven types of services inquired about were most likely to have been received in a private practice setting. Primary barriers to preventive service utilization included ability to pay, perception of need, service availability, accessibility of services, and the perception of racism. The relationship between structural and nonstructural barriers, their impact on preventive service utilization, and research recommendations also were developed and presented. abstract_id: PUBMED:17335358 Perceived barriers to and facilitators of the implementation of priority clinical preventive services guidelines. Objective: To obtain feedback from contracted health plan (HP) clinicians responsible for implementing preventive services regarding an established set of priority guidelines identified by a coalition of medical directors and to identify barriers to and facilitators of the implementation of these priority guidelines in clinician practice. Study Design: Qualitative design using a focus group approach. Participants And Methods: Three focus group meetings among contracted HP clinicians were conducted in New Jersey in 3 geographic regions (northern, central, and southern New Jersey). Clinicians directly involved in delivering preventive services to pediatric, adult, and geriatric patients participated. Results: Barriers to guideline implementation were identified by the clinicians regarding payment and cost, time, legal issues, inconsistency among HP tools, tracking, a lack of internalization, and the patient-clinician relationship. 
In addition, facilitators of guideline implementation, including HP support, patient materials, clinician awareness, and tool consistency, were identified. Conclusions: Clinicians' perceived barriers to guideline implementation are in themselves a barrier to the delivery of preventive care services. If clinicians perceive barriers to implementing priority recommendations, they may be unlikely to make the conscious effort to deliver preventive care. There needs to be better dialogue between HPs and contracted clinicians to minimize the perceptions of barriers and to increase clinician awareness of and sensitivity to preventive care for priority implementation. To improve the delivery of preventive services in clinician practice, competing HPs must communicate in a single voice with contracted clinicians in the area of preventive care. abstract_id: PUBMED:37908168 Knowledge, familiarity, and impact of the COVID-19 pandemic on barriers to seeking mental health services among older people: a cross-sectional study. Aim: The COVID-19 pandemic caused drastic changes in older people's daily activities with a negative impact on their mental health, yet older people are less likely to seek mental health services. This study aims to explore the relationship between knowledge of and familiarity with mental health services, along with the impact of the COVID-19 pandemic, and barriers to seeking mental health services among older people. Methods: A descriptive cross-sectional study was conducted with a convenience sample of 352 older people, recruited among community-dwelling adults who attended randomly selected postal offices and pension outlets. Three tools were used: a structured interview schedule for sociodemographic and clinical characteristics of older people, the revised version of the Knowledge and Familiarity of Mental Health Services Scale (KFFMHS-R), and the Barriers to Mental Health Services Scale Revised (BMHSS-R). Results: All participants reported experiencing mental health distress during the COVID-19 pandemic. Intrinsic barriers had a higher mean score than extrinsic barriers, and 27.4% of the variance of overall barriers to seeking mental health services could be explained through regression analysis by familiarity, knowledge of mental health services, and age. Overall barriers explained 24.4% of the variance of older people's perceived distress as an impact of the COVID-19 pandemic (F = 22.160, P < 0.001). Conclusions: Knowledge of mental health services was the most significant predictor of barriers to seeking mental health services during the COVID-19 pandemic. Higher barriers predicted higher distress as an impact of the COVID-19 pandemic. The results of the study suggest the need for a multidisciplinary mental health team for older people. abstract_id: PUBMED:29784113 Identifying Barriers to Access and Utilization of Preventive Health-Care Services by Young Adults in Vermont. Purpose: The objective of this study was to examine barriers to accessing and utilizing routine preventive health-care checkups for Vermont young adults. Methods: A population-based analysis was conducted using aggregated data from the 2011-2014 Behavioral Risk Factor Surveillance System (BRFSS) surveys of Vermont young adults aged 18-25 years (N = 1,329). Predictors analyzed as barriers were classified county of residence, health-care coverage, and annual household income level, as well as covariates, with the outcome of the length of time since the last routine checkup.
Results: A total of 81.1% of Vermont young adults reported having a routine checkup in the past 2 years. Health-care coverage was a predictor of undergoing routine checkups within the past 2 years, with 85.2% of insured respondents undergoing checkups compared with 56.3% of uninsured respondents (p < .001). Additionally, 81.9% of respondents from Vermont counties classified as mostly rural reported undergoing a checkup within the past 2 years (p < .05). A total of 80.8% of respondents from the middle level (p < .05) and 89.0% of respondents from the highest level (p < .001) of annual household incomes reported undergoing a checkup in the past 2 years. Finally, age (p < .001) and sex (p < .01) were also associated with more frequent receipt of routine preventive checkups. Conclusions: For Vermont young adults, health-care coverage, classified county of residence, and household income level were shown to be indicators of undergoing routine preventive health care more often. Further investigation is needed to examine how these barriers may impede preventive screenings, thereby contributing to the ongoing development of health-care guidelines and policies for young adults in rural settings. abstract_id: PUBMED:35457326 Barriers to the Provision of Preventive Care to People Living with Mental Health Conditions: Self-Report by Staff Working in an Australian Community Managed Organisation. People living with mental health conditions experience a reduced life expectancy largely due to a higher prevalence of chronic diseases. Addressing health risk behaviours, including tobacco smoking, inadequate nutrition, harmful alcohol consumption, and physical inactivity (SNAP), through the provision of preventive care, is recommended to reduce this burden. Community Managed Organisations (CMOs) may play an important role in providing preventive care to consumers with mental health conditions, however, few studies have examined preventive care provision in CMO settings; and no studies have comprehensively assessed barriers to the provision of this care using a tool such as the Theoretical Domains Framework (TDF). To fill this research gap, we conducted an online survey among staff (N = 190) from one CMO in Australia to (1) identify barriers to preventive care provision (ask, advise, assist, connect) to address SNAP behaviours among consumers; and (2) explore associations between barriers and preventive care provision. Results demonstrate that, while staff reported knowing how to provide preventive care and believed it would positively impact consumers, barriers were identified, including confidence in providing this care and consumer uptake of referrals. Further research among multiple CMOs is needed to identify care provision and associated barriers in the sector more widely. Answer: Yes, time pressure does create barriers for people to receive preventive health services. Studies have shown that individuals working long hours are significantly less likely to obtain certain preventive health services. For example, those working more than 60 hours per week were found to be less likely to get dental check-ups and mammograms, and working 51-60 hours weekly was associated with a lower likelihood of receiving Pap smears (PUBMED:25773470). Additionally, physicians have reported that a lack of time is a reason for not providing preventive services, indicating that the health-services system attributes contribute to this phenomenon (PUBMED:8331981).
Furthermore, illness visits that included preventive services, which represent an opportunity to deliver such care, were found to be only slightly longer (about two minutes on average) than visits without preventive services, suggesting that opportunistic delivery is feasible but that time-efficient approaches are needed to expand preventive care (PUBMED:9598000).
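Since several of the studies in this answer report odds ratios (ORs) with 95% confidence intervals, the following sketch shows how such an estimate can be computed from a simple 2x2 table. The counts are hypothetical, and the cited study used multivariable logistic regression, which additionally adjusts for covariates, so this unadjusted calculation is only illustrative.

```python
# Minimal sketch: unadjusted odds ratio with a Wald 95% CI from a 2x2 table.
# Hypothetical counts: screening uptake by weekly work hours.
import math

a, b = 40, 160   # long hours (>60 h/week): screened / not screened
c, d = 300, 500  # standard hours: screened / not screened

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {low:.2f}-{high:.2f})")
```

An OR below 1 whose confidence interval excludes 1, like the 0.47 (0.31-0.73) reported for mammograms, indicates significantly lower odds of receiving the service in the long-hours group.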
Instruction: Impact of induction therapy on postoperative outcome after extrapleural pneumonectomy for malignant pleural mesothelioma: does induction-accelerated hemithoracic radiation increase the surgical risk? Abstracts: abstract_id: PUBMED:27005976 Impact of induction therapy on postoperative outcome after extrapleural pneumonectomy for malignant pleural mesothelioma: does induction-accelerated hemithoracic radiation increase the surgical risk? Objectives: Patients with malignant pleural mesothelioma (MPM) eligible for extrapleural pneumonectomy (EPP) may benefit from induction chemotherapy (CT) as historically described, or from induction-accelerated hemithoracic intensity-modulated radiation therapy (IMRT) as a potential alternative. However, the impact of the type of induction therapy on postoperative morbidity and mortality remains unknown. Methods: We performed a retrospective study including every patient who underwent EPP for MPM in our institution between January 2001 and December 2014. Patients without induction treatment (n = 7) or undergoing both induction CT and IMRT (n = 2) were then excluded. The remaining patients (study group) were divided according to the type of induction treatment into Group 1-CT and Group 2-IMRT. Major complications were defined by complications of Grade 3 or higher according to the National Cancer Institute Common Terminology Criteria for Adverse Events 4.0 guidelines. Red blood cell (RBC) transfusion was analysed as a number of packs, and dichotomized as <3 vs ≥3 packs. Plasma and platelet transfusion were analysed as a number of units, and dichotomized as no transfusion versus any plasma or platelet transfusion. Results: Altogether, 126 patients (mean age 61.3 ± 8.1 years, males 82.5%, right side 60.3%, 90-day mortality rate 4.8%) accounted for the study group. Sixty-four patients were included in Group 1-CT and 62 patients were included in Group 2-IMRT. When compared with Group 1-CT, Group 2-IMRT was characterized by older patients (59.3 ± 9.2 vs 63.3 ± 8.3 years, P = 0.012), more right-sided resections (46.8 vs 74.1%, P = 0.003), more advanced disease (pathological stage IV: 28.1 vs 53.2%, P = 0.007), fewer RBC transfusions (5.1 ± 3.0 vs 3.0 ± 2.4 packs, P < 0.001), fewer plasma or platelet transfusions (31.2 vs 9.6%, P = 0.005) and a similar rate of major complications (29.6 vs 35.4%, P = 0.614). The 90-day mortality rate was 6.2% in Group 1-CT (n = 4) and 3.2% in Group 2-IMRT (n = 2, P = 0.680). Induction with IMRT was significantly associated with a decreased risk of transfusion with RBCs [odds ratio (OR) = 0.10, 95% confidence interval (CI) 0.04-0.23, P < 0.001] as well as plasma and platelets (OR = 0.25, 95% CI 0.086-0.67, P = 0.008). Conclusions: In this large single-centre series of EPP for MPM, the implementation of induction IMRT was not associated with any significant increase in the surgical risks above and beyond induction CT. The switch from induction CT to induction IMRT was associated with resection in older patients with more advanced tumours, lower transfusion requirements, and comparable postoperative morbidity and 90-day mortality. abstract_id: PUBMED:25527013 Novel induction therapies for pleural mesothelioma. Malignant mesothelioma is becoming increasingly common, and rates of diagnosis are expected to continue to increase in the coming years because of the extensive use of asbestos in industrialized countries and the long time interval between exposure and onset of disease.
Although much research has been done on the optimal treatment for this disease, the overall prognosis remains grim. The main components of therapy are surgery, chemotherapy, and radiation therapy, but there is controversy in the literature about the optimal inclusion and sequencing of these treatments, as each has a unique risk profile. We have developed a new Surgery for Mesothelioma After Radiation Therapy protocol consisting of induction-accelerated hemithoracic radiation followed by extrapleural pneumonectomy. The rationale behind this protocol is to maximize both the tumoricidal and immunogenic potential of the radiotherapy while minimizing the radiation toxicity to the ipsilateral lung. Our initial trial demonstrated the feasibility of this approach and has shown encouraging results in patients with epithelial histology. In this article, we reviewed the current literature on induction chemotherapy for mesothelioma as well as described the Surgery for Mesothelioma After Radiation Therapy protocol and upcoming studies of novel induction therapies for mesothelioma. abstract_id: PUBMED:26614413 Accelerated hemithoracic radiation followed by extrapleural pneumonectomy for malignant pleural mesothelioma. Objective: To evaluate a new protocol of accelerated hemithoracic intensity-modulated radiation therapy (IMRT) followed by extrapleural pneumonectomy (EPP) for patients with resectable malignant pleural mesothelioma (MPM). Methods: A total of 25 Gy of radiation was delivered in 5 daily fractions over 1 week to the entire ipsilateral hemithorax with concomitant boost of 5 Gy to volumes at high risk based on computed tomography and positron emission tomography scan findings. EPP was performed at 6 ± 2 days after the end of radiation therapy. Adjuvant chemotherapy was offered to patients with ypN2 disease. Results: A total of 62 patients were included between November 2008 and October 2014. One patient died in the hospital 2 months after EPP, for an operative mortality of 1.6%, and 2 died after discharge from the hospital, for an overall treatment-related mortality (grade 5 toxicity) of 4.8%. Twenty-four patients (39%) developed grade 3 to 5 (grade 3+) complications. On final pathology, 94% of the patients were stage III or IV, and 52% had ypN2 disease. The median survival for all patients in an intention-to-treat analysis was 36 months. The median overall survival and disease-free survival were 51 and 47 months, respectively, in epithelial subtypes, compared with 10 and 8 months in biphasic subtypes (P = .001). Ipsilateral chest recurrence occurred in 8 patients. Conclusions: Accelerated hemithoracic IMRT followed by EPP has become our preferred approach for resectable MPM. The results have been encouraging in patients with epithelial subtype. abstract_id: PUBMED:19131427 A feasibility study of induction pemetrexed plus cisplatin followed by extrapleural pneumonectomy and postoperative hemithoracic radiation for malignant pleural mesothelioma. A prospective multi-institutional study has been commenced in Japan to evaluate the feasibility of induction chemotherapy using pemetrexed plus cisplatin, followed by extrapleural pneumonectomy (EPP) and postoperative hemithoracic radiation in patients with resectable malignant pleural mesothelioma. The study was initiated in May 2008 and 40 patients will be recruited over 3 years. Primary endpoints are macroscopic complete resection rate by EPP and treatment-related mortality for trimodality therapy.
Secondary endpoints include treatment completion rate, adverse events, response rate to chemotherapy and 2-year overall and relapse-free survival. abstract_id: PUBMED:19224855 Trimodality therapy with induction chemotherapy followed by extrapleural pneumonectomy and adjuvant high-dose hemithoracic radiation for malignant pleural mesothelioma. Purpose: Malignant pleural mesothelioma (MPM) remains associated with poor outcome. We examined the results of trimodality therapy with cisplatin-based chemotherapy followed by extrapleural pneumonectomy (EPP) and adjuvant high-dose (50 to 60 Gy) hemithoracic radiation therapy for MPM. Patients And Methods: We conducted a retrospective review of all patients prospectively evaluated for the trimodality therapy protocol between January 2001 and December 2007 in our institution. Results: A total of 60 patients were suitable candidates. Histology was epithelioid (n = 44) or biphasic (n = 16). Chemotherapy regimens included cisplatin/vinorelbine (n = 26), cisplatin/pemetrexed (n = 24), cisplatin/raltitrexed (n = 6), or cisplatin/gemcitabine (n = 4). EPP was performed in 45 patients, and hemithoracic radiation therapy to at least 50 Gy was administered postoperatively to 30 patients. Completion of the trimodality therapy in the absence of mediastinal node involvement was associated with the best survival (median survival of 59 months vs ≤14 months in the remaining patients, P = .0003). The type of induction chemotherapy had no significant impact on survival. Pathologic nodal status remained a significant predictor of poor survival despite completion of the trimodality therapy. After completion of the protocol, the 5-year disease-free survival was 53% for patients with N0 disease, reaching 75% in patients with ypT1-2N0 and 45% in patients with ypT3-4N0. Conclusion: This large, single-center experience with induction chemotherapy followed by EPP and adjuvant high-dose hemithoracic radiation for MPM shows that half of the patients are able to complete this protocol. The results are encouraging for patients with N0 disease. However, N2 disease remains a major factor impacting survival, despite completion of the entire trimodality regimen.
Common acute toxicities included nausea, fatigue, anorexia and dermatitis. Severe early toxicities were rare. Late toxicities were uncommon, with the exception of a persistent elevation in liver enzymes in those with right-sided disease. Neither clinical hepatitis nor radiation pneumonitis was documented. With a median follow-up of 18.7 months, median disease-free and overall survival were 21.6 and 30.5 months, respectively, and 2-year overall survival was 57.3%. Conclusion: Hemithoracic radiotherapy following EPP, although associated with significant early toxicity, is well tolerated. Most patients complete the prescribed treatment, and clinically significant late toxicities are rare. abstract_id: PUBMED:33012433 The Role of Extrapleural Pneumonectomy in Malignant Pleural Mesothelioma. Extrapleural pneumonectomy (EPP) is the most extensive form of surgery for mesothelioma, involving en bloc resection of visceral and parietal pleura, lung, diaphragm and pericardium, with reconstruction of the pericardium and diaphragm. It can be performed safely in carefully selected patients. It should be performed in experienced centers as part of a multimodality treatment plan. The SMART approach, with a short course of induction hemithoracic radiation followed by EPP, has demonstrated safety and value of hypofractionated hemithoracic radiation combined with complete macroscopic resection. We are conducting a clinical trial with oligofractionated hemithoracic radiation in early-stage mesothelioma. abstract_id: PUBMED:20699634 Extrapleural pneumonectomy, photodynamic therapy and intensity modulated radiation therapy for the treatment of malignant pleural mesothelioma. Intensity modulated radiation therapy (IMRT) has recently been proposed for the treatment of malignant pleural mesothelioma (MPM). Here, we describe our experience with a multimodality approach for the treatment of mesothelioma, incorporating extrapleural pneumonectomy, intraoperative photodynamic therapy and postoperative hemithoracic IMRT. From 2004-2007, we treated 11 MPM patients with hemithoracic IMRT, 7 of whom had undergone porfimer sodium-mediated PDT as an intraoperative adjuvant to surgical debulking. The median radiation dose to the planning treatment volume (PTV) ranged from 45.4-54.5 Gy. For the contralateral lung, V20 ranged from 1.4-28.5%, V5 from 42-100% and MLD from 6.8-16.5 Gy. In our series, 1 patient experienced respiratory failure secondary to radiation pneumonitis that did not require mechanical ventilation. Multimodality therapy combining surgery with increased doses of radiation using IMRT, and newer treatment modalities such as PDT, appears safe. Future prospective analysis will be needed to demonstrate efficacy of this approach in the treatment of malignant mesothelioma. Efforts to reduce lung toxicity and improve dose delivery are needed and provide the promise of improved local control and quality of life in a carefully chosen multidisciplinary approach. abstract_id: PUBMED:12873676 Hemithoracic radiation after extrapleural pneumonectomy for malignant pleural mesothelioma. Purpose: The treatment of malignant pleural mesothelioma remains a therapeutic challenge, with median survival rates of about 12 months and local failure rates of up to 80%. Our institution recently published results showing that extrapleural pneumonectomy (EPP) followed by hemithoracic radiation yielded excellent local control. This paper reports our technique for high-dose hemithoracic radiation after EPP.
Methods: Between 1990 and 2001, 35 patients with malignant pleural mesothelioma were treated with EPP followed by hemithoracic radiation therapy (median dose: 54 Gy, range: 45-54 Gy) at Memorial Sloan-Kettering Cancer Center. EPP was defined as en bloc resection of the entire pleura, lung, and diaphragm with or without resection of the pericardium. The radiation therapy target volume was the entire hemithorax, including the pleural folds and the thoracotomy and chest tube incision sites. Patients were treated with a total dose of 5400 cGy delivered in 30 fractions of 180 cGy. Radiation therapy was well tolerated, and toxicity data are described. Results: Of the 35 patients analyzed, 29 patients were male, and 18 had right-sided tumors. Twenty-six had epithelioid histologies. UICC stage was I in 4, II in 11, III in 19, and IV in 1 patient. As shown by axial and sagittal isodose distributions, we were able to deliver adequate doses to the target volume while limiting dose to critical structures such as heart, spinal cord, liver, and stomach. The most common toxicities were RTOG Grades 1 and 2 nausea and vomiting, as well as lung, esophageal, and skin toxicities. Conclusion: Extrapleural pneumonectomy followed by high-dose hemithoracic radiation therapy is a feasible treatment regimen that is well tolerated for patients with malignant mesothelioma. We have demonstrated adequate dose distributions, using a combined photon and electron technique with blocking of critical normal structures. abstract_id: PUBMED:23247629 Patterns of failure, toxicity, and survival after extrapleural pneumonectomy and hemithoracic intensity-modulated radiation therapy for malignant pleural mesothelioma. Introduction: We investigated safety, efficacy, and recurrence after postoperative hemithoracic intensity-modulated radiation therapy (IMRT) in patients with malignant pleural mesothelioma treated with extrapleural pneumonectomy (EPP) during the past decade at a single institution. Methods: In 2001-2011, 136 consecutive patients with malignant pleural mesothelioma underwent EPP with planned adjuvant IMRT. Eighty-six patients (64%) underwent hemithoracic IMRT; the rest were not eligible because of postoperative complications, disease progression, or poor performance status. We assessed toxicity, survival, and patterns of failure in these 86 patients. Toxicity was scored with the Common Terminology Criteria for Adverse Events version 4.0; survival outcomes were estimated with the Kaplan-Meier method; and locoregional patterns of failure were classified as in-field, marginal, or out-of-field. Risk factors related to survival were identified by univariate and multivariate Cox regression analysis. Results: Median overall survival time for all 86 patients receiving IMRT was 14.7 months. Toxicity rates of grade 3 or higher were: skin 17%, lung 12%, heart 2.3%, and gastrointestinal toxicity 16%. Five patients experienced grade 5 pulmonary toxicity. Rates of locoregional recurrence-free survival, distant metastasis-free survival, and overall survival (OS) were 88%, 55%, and 55% at 1 year and 71%, 40%, and 32% at 2 years. On multivariate analysis, pretreatment forced expiratory volume in 1 second, nonepithelioid histology, and nodal status were associated with distant metastasis-free survival and OS. Conclusion: IMRT after EPP is associated with low rates of locoregional recurrence, though some patients experience life-threatening lung toxicity.
Tumor histology and nodal status can be helpful in identifying patients for this aggressive treatment. Answer: The impact of induction therapy on postoperative outcomes after extrapleural pneumonectomy (EPP) for malignant pleural mesothelioma (MPM) has been evaluated in several studies, with a particular focus on whether induction-accelerated hemithoracic radiation increases surgical risk. A retrospective study that included patients who underwent EPP for MPM between January 2001 and December 2014 compared outcomes between those who received induction chemotherapy (CT) and those who received induction-accelerated hemithoracic intensity-modulated radiation therapy (IMRT). The study found that induction with IMRT was not associated with a significant increase in surgical risks compared to induction CT. In fact, induction IMRT was associated with older patients with more advanced tumors, fewer red blood cell and plasma or platelet transfusions, comparable postoperative morbidity, and similar 90-day mortality rates. The study concluded that the switch from induction CT to induction IMRT was associated with less transfusion requirements and did not increase surgical risks (PUBMED:27005976). Another study evaluated a new protocol of accelerated hemithoracic IMRT followed by EPP for patients with resectable MPM. The results showed that this approach was encouraging, especially in patients with epithelial subtype, and did not lead to a significant increase in treatment-related mortality or severe complications (PUBMED:26614413). Furthermore, a study on trimodality therapy with induction chemotherapy followed by EPP and adjuvant high-dose hemithoracic radiation for MPM found that half of the patients were able to complete the protocol. The results were particularly encouraging for patients with N0 disease, although N2 disease remained a significant factor impacting survival (PUBMED:19224855). In summary, the available evidence suggests that induction-accelerated hemithoracic radiation does not increase the surgical risk when compared to induction chemotherapy in the treatment of MPM. Instead, it may be associated with certain benefits such as fewer transfusion requirements and encouraging survival outcomes, particularly in patients with epithelial subtype MPM.
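The survival figures quoted in these mesothelioma series (median disease-free and overall survival, 1-, 2- and 5-year rates) are Kaplan-Meier product-limit estimates, as PUBMED:23247629 states explicitly. For readers unfamiliar with that estimator, here is a minimal, self-contained Python sketch of how such a curve and its median are computed from (time, event) pairs; the cohort below is invented for illustration and is not data from any of the cited studies.

```python
# Kaplan-Meier product-limit estimator on made-up (time, event) data.
# event=True means the endpoint (e.g., death) was observed; False means censored.
from typing import List, Tuple

def kaplan_meier(data: List[Tuple[float, bool]]) -> List[Tuple[float, float]]:
    """Return (event time, survival probability) steps of the KM curve."""
    data = sorted(data)
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            if data[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:  # survival only drops at observed event times
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= deaths + censored
    return curve

# Hypothetical follow-up in months for six patients.
cohort = [(6, True), (10, False), (14, True), (22, True), (30, False), (31, True)]
curve = kaplan_meier(cohort)
median = next((t for t, s in curve if s <= 0.5), None)  # first time S(t) falls to 0.5 or below
print(curve)   # [(6, 0.833...), (14, 0.625), (22, 0.416...), (31, 0.0)]
print(median)  # 22 months in this toy cohort
```

Censoring is what distinguishes this from a simple proportion: patients who leave follow-up alive still contribute person-time while they are at risk, which is why median survival cannot be read directly off raw death counts.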
Instruction: Liver transplantation as curative approach for advanced hepatocellular carcinoma: is it justified? Abstracts: abstract_id: PUBMED:35721283 Emerging curative-intent minimally-invasive therapies for hepatocellular carcinoma. Hepatocellular carcinoma (HCC) is the most common cause of liver malignancy and the fourth leading cause of cancer deaths universally. Cure can be achieved for early-stage HCC, which is defined as 3 or fewer lesions less than or equal to 3 cm in the setting of Child-Pugh A or B and an ECOG of 0. Patients outside of these criteria who can be down-staged with loco-regional therapies to resection or liver transplantation (LT) also achieve curative outcomes. Traditionally, surgical resection, LT, and ablation are considered curative therapies for early HCC. However, results from the recently conducted LEGACY study and DOSISPHERE trial demonstrate that transarterial radio-embolization has curative outcomes for early HCC, leading to its recent incorporation into the Barcelona clinic liver criteria guidelines for early HCC. This review is based on current evidence for curative-intent loco-regional therapies including radioembolization for early-stage HCC. abstract_id: PUBMED:36896302 Network meta-analysis of the prognosis of curative treatment strategies for recurrent hepatocellular carcinoma after hepatectomy. Background: Recurrent hepatocellular carcinoma (rHCC) is a common outcome after curative treatment. Retreatment for rHCC is recommended, but no guidelines exist. Aim: To compare curative treatments such as repeated hepatectomy (RH), radiofrequency ablation (RFA), transarterial chemoembolization (TACE) and liver transplantation (LT) for patients with rHCC after primary hepatectomy by conducting a network meta-analysis (NMA). Methods: From 2011 to 2021, 30 articles involving patients with rHCC after primary liver resection were retrieved for this NMA. The Q test was used to assess heterogeneity among studies, and Egger's test was used to assess publication bias. The efficacy of rHCC treatment was assessed using disease-free survival (DFS) and overall survival (OS). Results: From 30 articles, a total of 17, 11, 8, and 12 arms of RH, RFA, TACE, and LT subgroups were collected for analysis. Forest plot analysis revealed that the LT subgroup had a better cumulative DFS and 1-year OS than the RH subgroup, with an odds ratio (OR) of 0.96 (95%CI: 0.31-2.96). However, the RH subgroup had a better 3-year and 5-year OS compared to the LT, RFA, and TACE subgroups. Hierarchic step diagram of different subgroups measured by the Wald test yielded the same results as the forest plot analysis. LT had a better 1-year OS (OR: 1.04, 95%CI: 0.34-3.20), and LT was inferior to RH in 3-year OS (OR: 10.61, 95%CI: 0.21-1.73) and 5-year OS (OR: 0.95, 95%CI: 0.39-2.34). According to the predictive P score evaluation, the LT subgroup had a better DFS, and RH had the best OS. However, meta-regression analysis revealed that LT had a better DFS (P < 0.001) as well as 3-year OS (P = 0.881) and 5-year OS (P = 0.188). The differences in superiority between DFS and OS were due to the different testing methods used. Conclusion: According to this NMA, RH and LT had better DFS and OS for rHCC than RFA and TACE. However, treatment strategies should be determined by the recurrent tumor characteristics, the patient's general health status, and the care program at each institution.
abstract_id: PUBMED:33835223 Therapies for hepatocellular carcinoma: overview, clinical indications, and comparative outcome evaluation-part one: curative intention. Hepatocellular carcinoma (HCC) offers unique management challenges as it commonly occurs in the setting of underlying chronic liver disease. The management of HCC is directed primarily by the clinical stage. The most commonly used staging system is the Barcelona-Clinic Liver Cancer system, which considers tumor burden based on imaging, liver function and the patient's performance status. Early-stage HCC can be managed with therapies of curative intent including surgical resection, liver transplantation, and ablative therapies. This manuscript reviews the various treatment options for HCC with a curative intent, such as local ablative therapy types, surgical resection, and transplant. Indications, contraindications and outcomes of the various treatment options are reviewed. Multiple concepts relating to liver transplant are discussed including Milan criteria, OPTN policy, MELD exception points, downstaging to transplant and bridging to transplant. abstract_id: PUBMED:37345066 Decoding Immune Signature to Detect the Risk for Early-Stage HCC Recurrence. Hepatocellular carcinoma (HCC) is often recognized as an inflammation-linked cancer, which possesses an immunosuppressive tumor microenvironment. Curative treatments such as surgical resection, liver transplantation, and percutaneous ablation are mainly applicable in the early stage and demonstrate significant improvement of survival rate in most patients. However, 70-80% of patients report HCC recurrence within 5 years of curative treatment, representing an important clinical issue. Because there is no effective recurrence marker after surgical and locoregional therapies, tumor size, number, and histological features such as cancer cell differentiation are often considered as risk factors for HCC recurrence. Host immunity plays a critical role in regulating carcinogenesis, and the immune microenvironment characterized by its composition, functional status, and density undergoes significant alterations in each stage of cancer progression. Recent studies reported that analysis of immune contexture could yield valuable information regarding the treatment response, prognosis and recurrence. This review emphasizes the prognostic value of tumors associated with immune factors in HCC recurrence after curative treatment. In particular, we review the immune landscape and immunological factors contributing to early-stage HCC recurrence, and discuss the immunotherapeutic interventions to prevent tumor recurrence following curative treatments. abstract_id: PUBMED:33926242 Living Donor Liver Transplantation: The Optimal Curative Treatment for Hepatocellular Carcinoma Even Beyond Milan Criteria. Introduction: Liver transplantation offers the most reasonable expectation for curative treatment for hepatocellular carcinoma. Living-donor liver transplantation represents a treatment option, even in patients with extended Milan criteria. This study aimed to evaluate the outcomes of hepatocellular carcinoma patients, particularly those beyond the Milan criteria. Materials And Patients: All patients who received a liver transplant for HCC were included in this retrospective study. Clinical characteristics including perioperative data and survival data (graft and patient) were extracted from records.
Univariate and multivariate analyses were performed to identify significant prognostic factors for survival, postoperative complications and recurrence. Results: Two hundred and two patients were included. The median age was 54.8 years (IQR 53-61). Fifty-one patients (25.3%) underwent deceased donor liver transplantation and 151 patients (74.7%) underwent living donor liver transplantation. The perioperative mortality rate was 5.9% (12 patients). Recurrent disease occurred in 43 patients (21.2%). The overall 1-year and 5-year survival rates were 90.7% and 75.6%, respectively. No significant differences were found between patients beyond the Milan criteria and those within the Milan criteria. Alpha-fetoprotein level >300 ng/mL, vascular invasion, and bilobar tumor lesions were independent negative prognostic factors for survival. Conclusion: Liver transplantation is the preferred treatment for hepatocellular carcinoma and has demonstrated excellent potential to cure even patients beyond the Milan criteria. This study shows that the Milan criteria alone are not sufficient to predict survival after transplantation. The independent parameters for survival prediction are alpha-fetoprotein level and vascular invasion status. abstract_id: PUBMED:26588992 Curative therapies for hepatocellular carcinoma: an update and perspectives. Curative treatments, including liver transplantation, surgical resection and percutaneous treatments, are the recommended therapies in BCLC-0 (Barcelona Clinic of Liver Cancer) or BCLC-A hepatocellular carcinoma (HCC). This review provides an overview of some issues of clinical importance concerning curative treatments in HCC. abstract_id: PUBMED:25841914 Hepatocellular carcinoma in the modern era: transplantation, ablation, open surgery or minimally invasive surgery? - A multidisciplinary personalized decision. Hepatocellular carcinoma (HCC) is one of the few gastrointestinal cancers with increasing incidence and mortality worldwide. It arises most frequently in the setting of cirrhosis and presents heterogeneously with varying degrees of preserved liver function. Surgical resection and liver transplantation represent the cornerstones of curative treatment worldwide, whereas tumor ablation is being increasingly used for small tumors. A variety of different treatment algorithms have been developed, taking into consideration both the tumor stage as well as the liver reserve. Currently, many treatment modalities are continuously evolving. Transplantation criteria are expanding and even higher stage tumors become transplantable with neoadjuvant treatment. Surgical resection is being affected by the introduction of minimally invasive approaches. Ablation techniques are increasingly being used for small tumors. Combinations of different treatments are being introduced such as surgical resection followed by salvage transplantation. In this continuously changing field, the objectives of this review are to summarize the current curative surgical treatment options for patients with HCC, focusing on the controversial areas where multiple treatments might be applicable to the same patient, highlight the recent advances in minimally invasive surgery for HCC, and emphasize the need for a multidisciplinary approach and treatment plan tailored to the characteristics of each patient. abstract_id: PUBMED:31293776 Hepatocellular carcinoma treatment: hurdles, advances and prospects.
Hepatocellular carcinoma (HCC) is one of the major causes of cancer-related mortality and is particularly refractory to the available chemotherapeutic drugs. Among various etiologies of HCC, viral etiology is the most common, and, along with alcoholic liver disease and nonalcoholic steatohepatitis, accounts for almost 90% of all HCC cases. HCC is a heterogeneous tumor associated with multiple signaling pathway alterations, and its complex pathophysiology has made the treatment decision challenging. The potential curative treatment options are effective only in a small group of patients, while palliative treatments are associated with improved survival and quality of life for intermediate/advanced stage HCC patients. This review article focuses on the currently available treatment strategies and hurdles encountered for HCC therapy. The curative treatment options discussed are surgical resection, liver transplantation, and local ablative therapies which are effective for early stage HCC patients. The palliative treatment options discussed are embolizing therapies, systemic therapies, and molecular targeted therapies. In addition, the review focuses on hurdles to be conquered for successful treatment of HCC and specifies the future prospects for HCC treatment. It also discusses the multi-modal approach for HCC management which maximizes the chances of better clinical outcome after treatment and identifies that selection of a particular treatment regimen based on patients' disease stage, age, and other underlying factors will certainly lead to a better prognosis. abstract_id: PUBMED:33394207 Predictors of five-year survival among patients with hepatocellular carcinoma in the United States: an analysis of SEER-Medicare. Background: Most patients with hepatocellular carcinoma (HCC) are ≥ 65 years old at diagnosis and ~ 20% present with disease amenable to curative intent surgical therapy. The aim of this study was to examine whether treatment, the demographic variables, and clinical factors could predict 5-year survival among HCC patients. Methods: We included patients, 66 years or older, diagnosed with a first primary HCC from 1994 through 2007 in the SEER-Medicare database, and followed up until death or 31 December 2012. Curative intent treatment was defined as liver transplantation, surgical resection, or ablation. We estimated odds ratios (OR) and 95% confidence intervals (CI) for associations with 5-year survival using logistic regression. Results: We identified 10,826 patients with HCC with mean age 75.3 (standard deviation, 6.4) years. Most were male (62.2%) and non-Hispanic white (59.7%). Overall, only 8.1% of patients were alive 5 years post-HCC diagnosis date. Among all patients who survived ≥ 5 years, 69.8% received potentially curative treatment. Conversely, patients who received potentially curative treatment represented only 15.7% of patients who survived < 5 years. Curative intent treatment was the strongest predictor for surviving ≥ 5 years (vs. none/palliative treatment; adjusted OR 8.12, 95% CI 6.90-9.64). While stage at diagnosis and comorbidities were also independently associated with ≥ 5-year survival in HCC patients, these factors did not improve discrimination between short- and long-term survivors. Conclusions: Curative intent treatment was the strongest predictor for survival ≥ 5 years among HCC patients.
Given the limited availability of liver transplant and limited eligibility for surgical resection, finding curative intent HCC therapies remains critically important. abstract_id: PUBMED:32193784 Curative-Intent Therapies in Localized Hepatocellular Carcinoma. Purpose Of Review: The high mortality rate for hepatocellular carcinoma (HCC) relative to its prevalence underscores the need for curative-intent therapies. Image-guided therapies such as ablation and embolization have an established role as primary treatments or as neoadjuvants preparing patients for curative treatment. Recent Findings: For HCC < 3 cm, percutaneous thermal ablation provides oncologic outcomes similar to surgical resection and is now a recommended first-line therapy in the EASL guidelines. Both ablation and embolization are recommended as bridging therapies for HCC patients awaiting liver transplantation. T3 HCC can be downstaged by embolization to T2, allowing liver transplantation with similar outcomes to patients transplanted within Milan criteria. New and evolving techniques such as SBRT, radiation segmentectomy and lobectomy, and combination therapies show promise but require further prospective data before they can be integrated into treatment algorithms. Combinations of embolic, ablative, and extirpative therapies can increase access to curative-intent treatment of HCC. Multidisciplinary treatment decisions are required to craft optimal treatment strategies considering tumor size, location, and underlying liver cirrhosis. Answer: Liver transplantation (LT) is considered one of the curative treatment options for hepatocellular carcinoma (HCC), particularly in early-stage disease. However, the justification for LT as a curative approach for advanced HCC requires careful consideration of various factors. For early-stage HCC, LT can offer the most reasonable expectation for a cure, especially when the disease is within the Milan criteria, which traditionally include a single tumor less than 5 cm or up to three tumors with none larger than 3 cm (PUBMED:33926242). The Milan criteria have been used to select patients with a favorable prognosis after LT, and adherence to these criteria has been associated with excellent post-transplant survival rates (PUBMED:37345066). However, there is evidence that LT can also be an effective curative treatment for selected patients with HCC beyond the Milan criteria. A study evaluating the outcomes of HCC patients who underwent LT, including those with extended Milan criteria, found that significant differences in survival were not observed between patients within and beyond the Milan criteria. Independent negative prognostic factors for survival were identified as alpha-fetoprotein level greater than 300 ng/mL, vascular invasion, and bilobar tumor lesions (PUBMED:33926242). Despite these findings, the use of LT for advanced HCC remains controversial. The scarcity of donor organs and the risk of recurrence post-transplant are significant concerns. Moreover, the recurrence of HCC after LT can be a major issue, with 70-80% of patients reporting recurrence within 5 years of curative treatment (PUBMED:37345066). Therefore, the selection of patients for LT should be based on a thorough evaluation of tumor characteristics, the patient's general health status, and the availability of resources at the treating institution (PUBMED:36896302).
In conclusion, while LT can be justified as a curative approach for early-stage HCC and may be considered for selected patients with advanced disease, it requires careful patient selection and consideration of prognostic factors. The decision to proceed with LT for advanced HCC should be made within a multidisciplinary framework, taking into account the potential benefits and risks associated with the procedure (PUBMED:32193784; PUBMED:31293776).
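Several abstracts in this record summarize effects as odds ratios with 95% confidence intervals (for example, the adjusted OR of 8.12 for curative intent treatment in the SEER-Medicare analysis). As a minimal sketch of where such numbers come from, the Python snippet below computes an unadjusted odds ratio and its Wald 95% confidence interval from a 2x2 table. The counts are invented, and the published adjusted ORs additionally come from multivariate logistic regression, which this sketch does not reproduce.

```python
# Odds ratio and Wald 95% CI from a 2x2 table with hypothetical counts.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a, b = outcome yes/no in the exposed group; c, d = outcome yes/no in the comparison group."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Invented counts: 5-year survivors vs. non-survivors, curative vs. palliative treatment.
or_, (lo, hi) = odds_ratio_ci(120, 380, 60, 1440)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR = 7.58, 95% CI 5.45-10.54
```

A confidence interval that excludes 1 (as here) corresponds to a statistically significant association at the 5% level, which is how the interval 6.90-9.64 around the published OR of 8.12 should be read.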
Instruction: Is floral diversification associated with pollinator divergence? Abstracts: abstract_id: PUBMED:37111853 Pollinator Proboscis Length Plays a Key Role in Floral Integration of Honeysuckle Flowers (Lonicera spp.). Pollinator-mediated selection is hypothesized to influence floral integration. However, the potential pathway through which pollinators drive floral integration needs further investigation. We propose that pollinator proboscis length may play a key role in the evolution of floral integration. We first assessed the divergence of floral traits in 11 Lonicera species. Further, we detected the influence of pollinator proboscis length and eight floral traits on floral integration. We then used phylogenetic structural equation models (PSEMs) to illustrate the pathway through which pollinators drive the divergence of floral integration. Results of PCA indicated that species significantly differed in floral traits. Floral integration increased along with corolla tube length, stigma height, lip length, and the main pollinators' proboscis length. PSEMs revealed a potential pathway by which pollinator proboscis length directly selected on corolla tube length and stigma height, while lip length co-varied with stigma height. Compared to species with short corolla tubes, long-tube flowers may experience more intense pollinator-mediated selection due to more specialized pollination systems and thus show reduced variation in floral traits. As the corolla tube and stigma elongate, the covariation of other relevant traits might help to maintain pollination success. Direct and indirect pollinator-mediated selection collectively enhances floral integration. abstract_id: PUBMED:37953983 The effect of experimental pollinator decline on pollinator-mediated selection on floral traits. Human-mediated environmental change, by reducing mean fitness, is hypothesized to strengthen selection on traits that mediate interactions among species. For example, human-mediated declines in pollinator populations are hypothesized to reduce mean seed production by increasing the magnitude of pollen limitation and thus strengthen pollinator-mediated selection on floral traits that increase pollinator attraction or pollen transfer efficiency. To test this hypothesis, we measured two female fitness components and six floral traits of Lobelia siphilitica plants exposed to supplemental hand-pollination, ambient open-pollination, or reduced open-pollination treatments. The reduced treatment simulated pollinator decline, while the supplemental treatment was used to estimate pollen limitation and pollinator-mediated selection. We found that plants in the reduced pollination treatment were significantly pollen limited, resulting in pollinator-mediated selection for taller inflorescences and more vibrant petals, both traits that could increase pollinator attraction. This contrasts with plants in the ambient pollination treatment, where reproduction was not pollen limited and there was no significant pollinator-mediated selection on any floral trait. Our results support the hypothesis that human-mediated environmental change can strengthen selection on traits of interacting species and suggest that these traits have the potential to evolve in response to changing environments. abstract_id: PUBMED:37529588 An elevational gradient in floral traits and pollinator assemblages in the Neotropical species Costus guanaiensis var. tarmicus in Peru.
Different populations of plant species can adapt to their local pollinators and diverge in floral traits accordingly. Floral traits are subject to pollinator-driven natural selection to enhance plant reproductive success. Studies on temperate plant systems have shown that pollinator-driven selection results in floral trait variation along elevational gradients, but studies in tropical systems are lacking. We analyzed floral traits and pollinator assemblages in the Neotropical bee-pollinated taxon Costus guanaiensis var. tarmicus across four sites along a steep elevational gradient in Peru. We found variation in floral traits of size, color, and reward, and in the pollinator assemblage along the elevational gradient. We examined our results considering two hypotheses, (1) local adaptation to different bee assemblages, and (2) the early stages of an evolutionary shift to a new pollinator functional group (hummingbirds). We found some evidence consistent with the adaptation of C. guanaiensis var. tarmicus to the local bee fauna along the studied elevational gradient. Corolla width across sites was associated with the thorax width of the locally most frequent bee pollinator. However, we could not rule out the possibility of the beginning of a bee-to-hummingbird pollination shift in the highest-studied site. Our study is one of the few geographic-scale analyses of floral trait and pollinator assemblage variation in tropical plant species. Our results broaden our understanding of plant-pollinator interactions beyond temperate systems by showing substantial intraspecific divergence in both floral traits and pollinator assemblages across geographic space in a tropical plant species.
This study investigated local pollinator adaptation and variation in floral traits in the rewarding orchid Gymnadenia odoratissima, which spans a large altitudinal gradient and thus may depend on different pollinator guilds along this gradient. Methods: Pollinator communities were assessed and reciprocal transfer experiments were performed between lowland and mountain populations. Differences in floral traits were characterized by measuring floral morphology traits, scent composition, colour and nectar sugar content in lowland and mountain populations. Key Results: The composition of pollinator communities differed considerably between lowland and mountain populations; flies were only found as pollinators in mountain populations. The reciprocal transfer experiments showed that when lowland plants were transferred to mountain habitats, their reproductive success did not change significantly. However, when mountain plants were moved to the lowlands, their reproductive success decreased significantly. Transfers between populations of the same altitude did not lead to significant changes in reproductive success, disproving the potential for population-specific adaptations. Flower size of lowland plants was greater than that of mountain flowers. Lowland plants also had significantly higher relative amounts of aromatic floral volatiles, while the mountain plants had higher relative amounts of other floral volatiles. The floral colour of mountain flowers was significantly lighter compared with the lowland flowers. Conclusions: Local pollinator adaptation through pollinator attraction was shown in the mountain populations, possibly due to adaptation to pollinating flies. The mountain plants were also observed to receive pollination from a greater diversity of pollinators than the lowland plants. The different floral phenotypes of the altitudinal regions are likely to be the consequence of adaptations to local pollinator guilds. abstract_id: PUBMED:31236255 Pollinator parasites and the evolution of floral traits. Plant-pollinator interactions are the main selective force driving floral evolution and diversity. Pollinators use floral signals and indirect cues to assess flower reward, and the ensuing flower choice has major implications for plant fitness. While many pollinator behaviors have been described, the impact of parasites on pollinator foraging decisions and plant-pollinator interactions has been largely overlooked. Growing evidence of the transmission of parasites through the shared use of flowers by pollinators demonstrates the importance of behavioral immunity (altered behaviors that enhance parasite resistance) to pollinator health. During foraging bouts, pollinators can protect themselves against parasites through self-medication, disease avoidance, and grooming. Recent studies have documented immune behaviors in foraging pollinators, as well as the impacts of such behaviors on flower visitation. Because pollinator parasites can affect flower choice and pollen dispersal, they may ultimately impact flower fitness. Here, we discuss how pollinator immune behaviors and floral traits may affect the presence and transmission of pollinator parasites, as well as how pollinator parasites, through these immune behaviors, can impact plant-pollinator interactions. We further discuss how pollinator immune behaviors can impact plant fitness, and how floral traits may adapt to optimize plant fitness in response to pollinator parasites.
We propose future research directions to assess the role of pollinator parasites in plant-pollinator interactions and evolution, and we propose better integration of the role of pollinator parasites into research related to pollinator optimal foraging theory, floral diversity and agricultural practices. abstract_id: PUBMED:26546275 Drought and leaf herbivory influence floral volatiles and pollinator attraction. The effects of climate change on species interactions are poorly understood. Investigating the mechanisms by which species interactions may shift under altered environmental conditions will help form a more predictive understanding of such shifts. In particular, components of climate change have the potential to strongly influence floral volatile organic compounds (VOCs) and, in turn, plant-pollinator interactions. In this study, we experimentally manipulated drought and herbivory for four forb species to determine effects of these treatments and their interactions on (1) visual plant traits traditionally associated with pollinator attraction, (2) floral VOCs, and (3) the visitation rates and community composition of pollinators. For all forbs tested, experimental drought universally reduced flower size and floral display, but there were species-specific effects of drought on volatile emissions per flower, the composition of compounds produced, and subsequent pollinator visitation rates. Moreover, the community of pollinating visitors was influenced by drought across forb species (i.e. some pollinator species were deterred by drought while others were attracted). Together, these results indicate that VOCs may provide more nuanced information to potential floral visitors and may be relatively more important than visual traits for pollinator attraction, particularly under shifting environmental conditions. abstract_id: PUBMED:29958983 Evolution of floral traits and impact of reproductive mode on diversification in the phlox family (Polemoniaceae). Pollinator-mediated selection is a major driver of evolution in flowering plants, contributing to the vast diversity of floral features. Despite long-standing interest in floral variation and the evolution of pollination syndromes in Polemoniaceae, the evolution of floral traits and known pollinators has not been investigated in an explicit phylogenetic context. Here we explore macroevolutionary patterns of both pollinator specificity and three floral traits long considered important determinants of pollinator attraction across the most comprehensive species-level phylogenetic tree yet produced for the family. The presence of floral chlorophyll is reconstructed as the ancestral character state of the family, even though the presence of floral anthocyanins is the most prevalent floral pigment in extant taxa. Mean corolla length and width of the opening of the floral tube are correlated, and both appear to vary with pollinator type. The evolution of pollination systems appears labile, with multiple gains and losses of selfing and conflicting implications for patterns of diversification. Explicit testing of diversification models rejects the hypothesis that selfing is an evolutionary dead-end. This study begins to disentangle the individual components that comprise pollination syndromes and lays the foundation for future work on the genetic mechanisms that control each trait. 
abstract_id: PUBMED:33758905 Divergence in floral scent and morphology, but not thermogenic traits, associated with pollinator shift in two brood-site-mimicking Typhonium (Araceae) species. Background: Flowers which imitate insect oviposition sites probably represent the most widespread form of floral mimicry, exhibit the most diverse floral signals and are visited by two of the most speciose and advanced taxa of insects - beetles and flies. Detailed comparative studies on brood-site mimics pollinated exclusively by each of these insect orders are lacking, limiting our understanding of floral trait adaptation to different pollinator groups in these deceptive systems. Methods: Two closely related and apparent brood-site mimics, Typhonium angustilobum and T. wilbertii (Araceae), observed to trap these distinct beetle and fly pollinator groups, were used to investigate potential divergence in floral signals and traits most likely to occur under pollinator-mediated selection. Trapped pollinators were identified and their relative abundances enumerated, and thermogenic, visual and chemical signals and morphological traits were examined using thermocouples and quantitative reverse transcription-PCR, reflectance, gas chromatography-mass spectrometry, floral measurements and microscopy. Key Results: Typhonium angustilobum and T. wilbertii were functionally specialized to trap saprophagous Coleoptera and Diptera, respectively. Both species shared similar colour and thermogenic traits, and contained two highly homologous AOX genes (AOX1a and AOX1b) most expressed in the thermogenic tissue and stage (unlike pUCP). Scent during the pistillate stage differed markedly - T. angustilobum emitted a complex blend of sesquiterpenes, and T. wilbertii, a dung mimic, emitted high relative amounts of skatole, p-cresol and irregular terpenes. The species differed significantly in floral morphology related to trapping mechanisms. Conclusions: Functional specialization and pollinator divergence were not associated with differences in anthesis rhythm and floral thermogenic or visual signals between species, but with significant differences in floral scent and morphological features, suggesting that these floral traits are critical for the attraction and filtering of beetle or fly pollinators in these two brood-site mimics. abstract_id: PUBMED:34287921 A test of Sensory Drive in plant-pollinator interactions: heterogeneity in the signalling environment shapes pollinator preference for a floral visual signal. Sensory Drive predicts that habitat-dependent signal transmission and perception explain the diversification of communication signals. Whether Sensory Drive shapes floral evolution remains untested in nature. Pollinators of Argentina anserina prefer small ultraviolet (UV)-absorbing floral guides at low elevation but larger guides at high elevation. However, mechanisms underlying differential preference are unclear. High elevation populations experience elevated UV irradiance and frequently flower against bare substrates rather than foliage, potentially impacting signal transmission and perception. At high and low elevation extremes, we experimentally tested the effects of UV light (ambient vs reduced) and floral backgrounds (foliage vs bare) on pollinator choice for UV guide size. We examined how different signalling environments shaped pollinator-perceived flower colour using visual system models. At high elevation, pollinators preferred locally common large UV guides under ambient UV, but lacked preference under reduced UV.
Flies preferred large guides only against bare substrate, the common high elevation background. Ambient UV amplified contrast of large UV guides with floral backgrounds, and flowers contrasted more with bare ground than foliage. Results support that local signalling conditions contribute to pollinator preference for a floral visual signal, a key tenet of Sensory Drive. Components of Sensory Drive could shape floral signal evolution in other plants spanning heterogeneous signalling environments. Answer: Yes, floral diversification is associated with pollinator divergence. Studies have shown that different populations of plant species can adapt to their local pollinators and diverge in floral traits accordingly, which is subject to pollinator-driven natural selection to enhance plant reproductive success (PUBMED:37529588). This is supported by evidence that pollinator-mediated selection on floral traits varies in space and between morphs, with variation in pollinator assemblage contributing to differences in reproductive success among populations and between morphs of certain species (PUBMED:30393517). Additionally, local pollinator adaptation and variation in floral traits have been observed in species that span large altitudinal gradients, where different pollinator guilds are present (PUBMED:24107683). Furthermore, pollinator parasites and the evolution of floral traits are interconnected, as pollinator parasites can affect flower choice and pollen dispersal, potentially impacting flower fitness and leading to adaptations in floral traits to optimize plant fitness in response to pollinator parasites (PUBMED:31236255). The evolution of floral traits and known pollinators in the phlox family (Polemoniaceae) has also been investigated, revealing macroevolutionary patterns of pollinator specificity and floral trait variation that are correlated with pollinator type (PUBMED:29958983). In the case of brood-site mimicking Typhonium species, divergence in floral scent and morphology, but not thermogenic traits, was associated with a pollinator shift between beetle and fly pollinators, suggesting that these floral traits are critical for the attraction and filtering of specific pollinator groups (PUBMED:33758905). Finally, the concept of Sensory Drive, which predicts that habitat-dependent signal transmission and perception explain the diversification of communication signals, has been supported by experiments showing that heterogeneity in the signalling environment shapes pollinator preference for floral visual signals (PUBMED:34287921). In summary, the diversification of floral traits is closely linked to the divergence of pollinators, with adaptations in flower morphology, scent, and visual signals occurring in response to the specific requirements and behaviors of different pollinator assemblages.
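The honeysuckle study above (PUBMED:37111853) used PCA to show that species differ in floral traits before fitting phylogenetic structural equation models. As a minimal illustration of that first step - run on a tiny, invented trait matrix rather than the study's actual measurements - principal components can be obtained from a singular value decomposition of the centered data:

```python
# PCA via SVD on a made-up floral-trait matrix (rows = flowers, columns = traits, in mm).
import numpy as np

X = np.array([
    [22.0, 18.5, 6.1],   # hypothetical corolla tube length, stigma height, lip length
    [24.5, 20.0, 6.8],
    [12.0,  9.5, 4.0],
    [13.5, 10.2, 4.3],
    [30.0, 25.5, 8.0],
])
Xc = X - X.mean(axis=0)              # center each trait
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                   # projection of each flower onto the components
explained = S**2 / (S**2).sum()      # proportion of variance per component
print(np.round(explained, 3))        # PC1 dominates when traits are strongly correlated
```

Strongly correlated traits loading on a single dominant component is one simple way to see the floral integration the abstract describes: the more variance concentrated on PC1, the more tightly the traits covary.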