Instruction: Does the level of soluble intercellular adhesion molecule 1 predict myocardial injury before cardiac markers increase? Abstracts: abstract_id: PUBMED:19073540 Does the level of soluble intercellular adhesion molecule 1 predict myocardial injury before cardiac markers increase? Background: The measurement of cardiac markers is still the gold standard for diagnosing myocardial infarction (MI), but there is always a transition period between the time of infarction and when a marker can be measured in the blood. Therefore, clinicians are shifting their focus to the identification of potential new analytes capable of predicting MIs before the standard cardiac markers increase. In this study, we tested whether measurement of the concentration of soluble intercellular adhesion molecule 1 (sICAM-1) in plasma can be used for this purpose. Materials And Methods: In this prospective study, we included 60 male patients who had a left main coronary artery lesion or a left main equivalent and who underwent elective (group I, n = 20), urgent (group II, n = 20), or emergent (group III, n = 20) coronary artery bypass grafting (CABG). We excluded patients who had increased cardiac markers at admission, and drew blood samples for sICAM-1 measurements from other patients immediately after coronary angiography evaluations. We divided the patients into 3 groups according to their clinical characteristics and cardiac marker levels. Only patients with increased cardiac markers underwent emergent CABG (group III). We measured sICAM-1 concentrations immediately after coronary angiography and measured creatine kinase MB (CK-MB) and cardiac troponin I (cTnI) just before CABG. We then evaluated the results for correlations. Results: CK-MB, cTnI, and sICAM-1 levels were significantly higher in group III than in groups I and II (P < .05 for all). 
Our analysis of correlations between the sICAM-1 level and cardiac marker levels revealed no significant correlations in group I (CK-MB, r = 0.241 [P = .15]; cTnI, r = -0.107 [P = .32]) and group II (CK-MB, r = -0.202 [P = .19]; cTnI, r = 0.606 [P = .002]), but our analysis did reveal highly significant correlations in group III (CK-MB, r = 0.584 [P = .003]; cTnI, r = 0.605 [P = .002]). Conclusion: Measuring the plasma concentration of sICAM-1 before the concentrations of cardiac markers increase in patients with MI may provide clinicians with faster, more reliable data for deciding on and administering the most appropriate procedures and/or therapies. abstract_id: PUBMED:11440495 Early increase in levels of soluble inter-cellular adhesion molecule-1 (sICAM-1); potential risk factor for the acute coronary syndromes. Background: Studies have shown disparate results in relation to the role of plasma concentrations of cell adhesion molecules in atherosclerosis. Moreover, the differentiation of primary vs secondary alterations of these markers, in response to myocardial injury, has not been clear. We measured specific soluble cell adhesion molecules and inflammatory markers in men admitted acutely with chest pain and compared them with healthy controls. Methods And Results: We prospectively studied men (total n=241), admitted acutely with chest pain (7.4+/-9.4 h, 71% within 10 h), unstable angina (n=67), acute myocardial infarction (n=47) and chest pain without ischaemic heart disease (n=45) and compared them with a stratified sample of randomly selected healthy controls (n=82). Soluble intercellular adhesion molecule (sICAM-1), endothelial selectin, vascular cell adhesion molecule, interleukin-6 and C-reactive protein were measured by ELISA and P-selectin expression by flow cytometry. Multiple regression analysis was used to control for the impact of classical risk factors.
At baseline, ICAM-1, interleukin-6 and C-reactive protein were significantly elevated in the patient groups, whereas no difference in vascular cell adhesion molecule or endothelial selectin was found. At 3-month follow-up, the ICAM-1 level was unchanged in ischaemic heart disease patients. In all groups, C-reactive protein and interleukin-6 levels were lower at review. ICAM-1 levels at follow-up were higher in the ischaemic heart disease groups (but not in chest pain without ischaemic heart disease) relative to controls and remained so only in the unstable angina group following regression. sICAM-1, interleukin-6 and C-reactive protein correlated strongly with smoking. In the acute phase, ICAM-1 was confounded by smoking following regression, whereas C-reactive protein and interleukin-6 remained significant in both ischaemic heart disease groups after multiple regression. There was no relationship to events, which occurred in 23% of ischaemic heart disease patients (further acute myocardial infarction 5.3%, sudden cardiac death 0.9% or recurrent angina 16.7%). Conclusion: We found an inflammatory response, with higher sICAM-1, interleukin-6 and C-reactive protein, in patients presenting soon after developing an acute coronary syndrome. As sICAM-1 was not affected by the acute event, this plasma marker may be an important risk factor for the development of the acute coronary syndrome, particularly unstable angina. abstract_id: PUBMED:33037406 Engineered cardiac tissues: a novel in vitro model to investigate the pathophysiology of mouse diabetic cardiomyopathy. Rodent diabetic models, used to understand the pathophysiology of diabetic cardiomyopathy (DCM), have several limitations. Engineered cardiac tissues (ECTs) have emerged as robust 3D in vitro models to investigate structure-function relationships as well as cardiac injury and repair.
Advanced glycation end-products (AGEs), produced through glycation of proteins or lipids in response to hyperglycemia, are an important pathogenic factor for the development of DCM. In the current study, we developed a murine-based ECT model to investigate cardiac injury produced by AGEs. We treated ECTs composed of neonatal murine cardiac cells with AGEs and observed AGE-related functional, cellular, and molecular alterations: (1) AGEs (150 µg/mL) did not cause acute cytotoxicity, assessed as necrosis by medium LDH release or apoptosis by cleaved caspase 3 and TUNEL staining, but negatively impacted ECT function on treatment day 9; (2) AGE treatment significantly increased markers of fibrosis (TGF-β, α-SMA, Ctgf, Collagen I-α1, Collagen III-α1, and Fn1) and hypertrophy (Nppa and Myh7); (3) AGE treatment significantly increased ECT oxidative stress markers (3-NT, 4-HNE, HO-1, CAT, and SOD2) and inflammation response markers (PAI-1, TNF-α, NF-κB, and ICAM-1); and (4) AGE-induced pathogenic responses were all attenuated by pre-application of the AGE receptor antagonist FPS-ZM1 (20 µM) or the antioxidant glutathione precursor N-acetylcysteine (5 mM). Therefore, AGE-treated murine ECTs recapitulate the key features of the functional, cellular and molecular pathogenesis of DCM, and may serve as a robust in vitro model to investigate cellular structure-function relationships, signaling pathways relevant to DCM and pharmaceutical intervention strategies. abstract_id: PUBMED:30867697 Protective effect of remote ischemic pre-conditioning on patients undergoing cardiac bypass valve replacement surgery: A randomized controlled trial. Remote ischemic pre-conditioning (RIPC) may have a protective effect on myocardial injury associated with cardiac bypass surgery (CPB). The objective of the present study was to investigate the effect of RIPC on ischemia/reperfusion (I/R) injury and to assess the underlying mechanisms.
A total of 241 patients who underwent valve replacement were randomly assigned to either the RIPC group (n=121) or the control group (n=120). The primary endpoint was peri-operative myocardial injury (PMI), determined by serum high-sensitivity cardiac troponin T (hsTnT). The secondary endpoints were blood gas indices, acute lung injury, length of intensive care unit stay, length of hospital stay and major adverse cardiovascular events. The results indicated that, in comparison with the control group, RIPC treatment reduced the levels of hsTnT at 6 and 24 h post-CPB (P<0.001), as well as the alveolar-arterial oxygen pressure difference and respiratory index after CPB. Furthermore, RIPC reduced the incidence of acute lung injury (54.1% in the control group vs. 41.3% in the RIPC group, P=0.053). It was indicated that RIPC provided myocardial and pulmonary protection during CPB. In addition, the lengths of intensive care unit and hospital stay were reduced by RIPC. Mechanistic investigation revealed reduced contents of soluble intercellular adhesion molecule-1, endothelin-1 and malondialdehyde, as well as elevated levels of nitric oxide, in the RIPC group compared with the control group. This indicated that RIPC protected against I/R injury associated with CPB by reducing the inflammatory response and oxidative damage, as well as improving pulmonary vascular tension. In conclusion, RIPC reduced myocardial and pulmonary injury associated with CPB. This protective effect may be associated with the inhibition of the inflammatory response and oxidative injury. The present study demonstrated the efficacy of this approach in reducing ischemia/reperfusion injury associated with cardiac surgery. Clinical trial registry no. ChiCTR1800015393. abstract_id: PUBMED:38370014 Impact of prolonged exposure to occasional and regular waterpipe smoke on cardiac injury, oxidative stress and mitochondrial dysfunction in male mice.
Regular waterpipe smoking (Reg-WPS) is well recognized for its deleterious effect on the heart. However, there is a paucity of experimental studies on the impact of occasional waterpipe smoking (Occ-WPS), also known as nondaily smoking, versus Reg-WPS on cardiac homeostasis, and the mechanisms underlying these effects. Hence, we aimed, in the present study, to investigate the effect of Occ-WPS (30 min/day, 1 day/week) versus Reg-WPS (30 min/day, 5 days/week) for 6 months on systolic blood pressure (SBP), cardiac injury, oxidative markers, chemokines, proinflammatory cytokines, DNA damage and mitochondrial function compared with air-exposed (control) mice. Our results show that SBP was increased following exposure to either Occ-WPS or Reg-WPS compared with air-exposed mice. Moreover, we found that only Reg-WPS induced a significant elevation in the levels of troponin I, brain natriuretic peptide, lactate dehydrogenase, and creatine phosphokinase. However, atrial natriuretic peptide (ANP) was significantly increased in both the Occ-WPS and Reg-WPS groups. Compared with air-exposed mice, the levels of lipid peroxidation, reduced glutathione and monocyte chemoattractant protein-1 were only significantly augmented in the Reg-WPS group. However, catalase, superoxide dismutase, and CXCL1 were significantly increased in both the Occ-WPS and Reg-WPS groups. The concentrations of the adhesion molecules E-selectin, vascular cell adhesion molecule-1, and intercellular adhesion molecule-1 were solely elevated in the hearts of mice exposed to Reg-WPS. Similarly, the concentrations of interleukin-1β and tumor necrosis factor α were only significantly augmented in the Reg-WPS group. However, both Occ-WPS and Reg-WPS triggered significant augmentation in the levels of IL17 and DNA damage compared with the control groups.
Furthermore, while Occ-WPS induced a slight but statistically insignificant elevation in the concentration of mammalian target of rapamycin (mTOR) and in nuclear factor erythroid-derived 2-like 2 (Nrf2) expression, Reg-WPS exposure increased their levels substantially, in addition to p53 levels and the activities of mitochondrial complexes II & III and IV, compared with air-exposed mice. In conclusion, our findings show that while long-term Occ-WPS exposure induced an elevation of SBP, ANP, antioxidant enzymes, IL17, CXCL1, and cardiac DNA damage, Reg-WPS exposure was consistently associated with the elevation of SBP and occurrence of cardiac damage, inflammation, oxidative stress, DNA damage and mitochondrial dysfunction. abstract_id: PUBMED:22575997 Tropisetron attenuates cardiac injury in a rat trauma-hemorrhage model. Tropisetron is widely used for antiemesis. Recent evidence shows that tropisetron possesses anti-inflammatory properties. Protein kinase B (Akt) is known to play an important role in negating the proinflammatory response to injury. The aim of this study was to determine whether tropisetron provides cardioprotection mediated via an Akt-dependent pathway in trauma-hemorrhaged animals. Male Sprague-Dawley rats underwent trauma-hemorrhage and resuscitation. Tropisetron (1 mg/kg) with or without a PI3K inhibitor (wortmannin, 1 mg/kg) or vehicle was administered intravenously during the resuscitation. At 24 h after either the trauma-hemorrhage or sham operation, the cardiac function parameters (cardiac output, left ventricle pressure variability) were measured. Cardiac myeloperoxidase activity, interleukin 6 and intercellular adhesion molecule 1 levels, Akt activity, and apoptosis were measured. One-way analysis of variance and the Tukey test were used for statistical analysis. Cardiac function was depressed and cardiac myeloperoxidase activity, interleukin 6 and intercellular adhesion molecule 1 levels, and cardiac apoptosis were markedly increased after trauma-hemorrhage.
Administration of tropisetron significantly improved cardiac function and attenuated proinflammatory parameters in rats subjected to trauma-hemorrhage. The increase in cardiac apoptosis was attenuated in rats that received tropisetron. Although trauma-hemorrhage decreased cardiac Akt phosphorylation (p-Akt), tropisetron treatment prevented this decrease in cardiac p-Akt following trauma-hemorrhage. Coadministration of wortmannin prevented the beneficial effects of tropisetron on the attenuation of proinflammatory responses and cardiac injury after trauma-hemorrhage. Tropisetron attenuates cardiac injury following trauma-hemorrhage, at least in part through an Akt-dependent anti-inflammatory pathway. abstract_id: PUBMED:37658922 Chronic inflammatory effects of in vivo irradiation of the murine heart on endothelial cells mimic mechanisms involved in atherosclerosis. Purpose: Radiotherapy is a major pillar in the treatment of solid tumors, including breast cancer. However, epidemiological studies have revealed an increase in cardiac diseases approximately a decade after exposure of the thorax to ionizing irradiation, which might be related to vascular inflammation. Therefore, chronic inflammatory effects were examined in primary heart and lung endothelial cells (ECs) of mice after local heart irradiation. Methods: Long-lasting effects on primary ECs of the heart and lung were studied 20-50 weeks after local irradiation of the heart of mice (8 and 16 Gy) in vivo by multiparameter flow cytometry using antibodies directed against cell surface markers related to proliferation, stemness, lipid metabolism, and inflammation, and compared to those induced by occlusion of the left anterior descending coronary artery.
Results: In vivo irradiation of the complete heart caused long-lasting, persistent upregulation of inflammatory (HCAM, ICAM-1, VCAM-1), proliferation (CD105), and lipid (CD36) markers on primary heart ECs and an upregulation of ICAM-1 and VCAM-1 on primary ECs of the partially irradiated lung lobe. An artificially induced heart infarction induced similar effects with respect to inflammatory markers, albeit over a shorter time period. Conclusion: The long-lasting upregulation of prominent inflammatory markers on primary heart and lung ECs suggests that local heart irradiation induces chronic inflammation in the microvasculature of the heart and partially irradiated lung, leading to cardiac injury that might be related to altered lipid metabolism in the heart. abstract_id: PUBMED:12616425 Soluble adhesion molecules and myocardial injury during coronary artery bypass grafting. Cardiopulmonary bypass is acknowledged to be one of the major causes of a complex systemic inflammatory response after cardiac surgery. Leukocyte-endothelial binding followed by neutrophil migration appears to play a central role. These interactions are mediated by adhesion molecules on the surface of activated cells. The present study compared the perioperative levels of soluble adhesion molecules after coronary artery bypass grafting (CABG) in patients with or without cardiopulmonary bypass (CPB). Altogether, 9 patients underwent off-pump revascularization and 11 did so with CPB. Plasma levels of the soluble adhesion molecules sE-selectin and sP-selectin and soluble intercellular adhesion molecule-1 (sICAM-1) were measured before anesthesia induction and 1, 4, and 20 hours after reperfusion to the myocardium. The baseline plasma levels of the adhesion molecules were similar in the two groups. Perioperative levels of sE-selectin remained the same and did not differ between groups.
Plasma sP-selectin increased in both groups, the change being significantly greater in the CPB group than in the off-pump group (p = 0.001). Plasma sICAM-1 decreased during an early stage after CABG with CPB, recovering by 4 hours after reperfusion, and a significant increase in sICAM-1 was observed 20 hours later. In the off-pump group, sICAM-1 levels did not change at 1 and 4 hours after reperfusion but increased 20 hours later. Postoperative creatine kinase-MB (CK-MB) levels were significantly higher in the CPB group than in the off-pump group (p = 0.001). The change in sP-selectin levels also showed a correlation with CK-MB values (r = 0.676, p = 0.001). The results indicated that off-pump revascularization is associated with reduced endothelial activation and myocardial injury. abstract_id: PUBMED:36247432 Targeting DNA methylation can reduce cardiac injury associated with ischemia reperfusion: One step closer to clinical translation with blood-borne assessment. Ischemia reperfusion (I/R) injury is one of the main clinical challenges for cardiac surgeons. No effective strategy or therapy targeting the molecular and cellular mechanisms to reduce I/R exists to date, despite altered gene expression and cellular metabolism/physiology. We aimed to identify whether DNA methylation, an unexplored target, can be a potential site to curb I/R-associated cell death by using the left anterior descending artery occlusion model in male Wistar rats. I/R rat hearts exhibited global DNA hypermethylation with a corresponding decline in the mitochondrial genes (PGC-1α, TFAM, POLG, ND1, ND3, ND4, Cyt B, COX1, and COX2), antioxidant genes (SOD2, catalase, and Gpx2) and elevation in apoptotic genes (Casp3, Casp7, and Casp9) expression with corresponding changes in their activity, resulting in injury.
Targeting global DNA methylation in I/R hearts by using a methylation inhibitor significantly reduced the I/R-associated infarct size by 45% and improved dysferlin levels via modulating genes involved in the apoptotic cell death pathway (Casp3, Casp7, and PARP), inflammation (IL-1β, TLR4, ICAM1, and MyD88), oxidative stress (SOD1, catalase, Gpx2, and NFkB) and mitochondrial function and its regulation (MT-ND1, ND3, COX1, ATP6, PGC1α, and TFAM) in the cardiac tissue. The corresponding improvement in the genes' function was reflected in the respective hearts via the reduction in apoptotic TUNEL-positive cells and ROS levels, thereby improving myocardial architecture (H&E staining), antioxidant enzymes (SOD, catalase activity) and mitochondrial electron transport chain activities and ATP levels. The analysis of blood from the I/R animals in the presence and absence of methylation inhibition exhibited a similar pattern of changes as that observed in the cardiac tissue with respect to global DNA methylation level and its enzymes' (DNMT and TET) gene expression, with blood cardiac injury marker enzymes such as LDH and CK-MB elevated while their tissue levels declined. Based on these observations, we concluded that targeting DNA methylation to reduce the level of DNA hypermethylation can be a promising approach in ameliorating I/R injury. Additionally, the blood-borne changes reflected I/R-associated myocardial tissue alterations, making them suitable for predicting I/R-linked pathology.
They decrease the expression of pro-inflammatory mediators, reduce neutrophil migration to inflammation sites, promote the removal of microbes and apoptotic cells, and reduce exudate. However, whether resolvins can prevent pro-inflammatory-dependent effects in CFs is unknown. Thus, the present work addressed whether resolvin D1 and E1 (RvD1 and RvE1) can prevent pro-inflammatory effects on CFs after lipopolysaccharide (LPS) challenge. For this, CFs were stimulated with LPS, in the presence or absence of RvD1 or RvE1, to analyze the effects on intercellular adhesion molecule 1 (ICAM-1), vascular cell adhesion protein 1 (VCAM-1), monocyte adhesion and the cytokine levels of tumor necrosis factor alpha (TNF-α), interleukin-6 (IL-6), interleukin-1 beta (IL-1β), monocyte chemoattractant protein-1 (MCP-1) and interleukin-10 (IL-10). Our results showed that CFs express ALX/FPR2 and ChemR23, the receptors for RvD1 and RvE1, respectively. RvD1 and RvE1 prevent the increase of ICAM-1 and VCAM-1 protein levels and the adhesion of spleen mononuclear cells to CFs induced by LPS. Finally, RvD1, but not RvE1, prevents the LPS-induced increase of IL-6, MCP-1, TNF-α, and IL-10. In conclusion, our findings provide evidence that in CFs, RvD1 and RvE1 might actively participate in the prevention of the inflammatory response triggered by LPS. Answer: Yes, the level of soluble intercellular adhesion molecule 1 (sICAM-1) may predict myocardial injury before cardiac markers increase. In a study involving patients undergoing coronary artery bypass grafting (CABG), sICAM-1 levels were significantly higher in patients who underwent emergent CABG due to increased cardiac markers, compared to those in the elective or urgent CABG groups.
The study found highly significant correlations between sICAM-1 levels and cardiac marker levels in the emergent CABG group, suggesting that sICAM-1 could provide clinicians with faster, reliable data for deciding on appropriate procedures and/or therapies in patients with myocardial infarction (MI) before the concentrations of standard cardiac markers increase (PUBMED:19073540). Another study found that sICAM-1 levels were elevated in patients presenting soon after developing an acute coronary syndrome, indicating an inflammatory response. Since sICAM-1 levels were not affected by the acute event, this marker may be an important risk factor for the development of acute coronary syndromes, particularly unstable angina (PUBMED:11440495). These findings suggest that sICAM-1 could serve as an early biomarker for myocardial injury, potentially allowing for earlier intervention before traditional cardiac markers such as creatine kinase MB (CK-MB) and cardiac troponin I (cTnI) increase in the blood.
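The correlations cited in this answer (e.g., r = 0.584, P = .003 for CK-MB in the emergent group) are Pearson coefficients with two-sided p-values. A minimal sketch of how such values are computed with scipy is shown below; the sICAM-1 and CK-MB numbers here are synthetic, illustrative values, not the study's measurements.

```python
from scipy.stats import pearsonr

# Hypothetical paired measurements for illustration only --
# units mimic the study (sICAM-1 in ng/mL, CK-MB in U/L).
sicam1 = [310, 295, 420, 388, 450, 275, 500, 365]
ck_mb = [12, 10, 25, 21, 30, 9, 34, 18]

# Pearson correlation coefficient and two-sided p-value.
r, p = pearsonr(sicam1, ck_mb)
print(f"r = {r:.3f}, P = {p:.4f}")
```

With only 8 pairs the p-value is fragile; the study's groups of n = 20 give the test somewhat more power, but small-sample Pearson correlations remain sensitive to outliers.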
Instruction: Is the mycobacteria-derived purified protein response in atopic asthmatic children different? Abstracts: abstract_id: PUBMED:15542937 Is the mycobacteria-derived purified protein response in atopic asthmatic children different? Background: The response to mycobacteria-derived purified protein (PPD) is mediated primarily by a T-helper-1 response and is expected to be inhibited in atopic diseases. The aim of this study was to investigate whether the PPD response is different in atopic asthmatic children. Methods: 40 atopic asthmatic children (mean age 8.3 +/- 4.9 years) and 40 healthy age- and sex-matched children who had received bacillus Calmette-Guerin (BCG) vaccination were included in the study. Five PPD units were administered intradermally to all children, and the reactions were interpreted after 48 h. Results: There was no correlation between serum total IgE level and PPD induration (p=0.054). The PPD induration was not statistically different between the children who used inhaled corticosteroid and those who did not. Although the PPD positivity (induration > or =5 mm) rate was higher in atopic asthmatic children (50%) than in healthy children (32.5%), the difference was not found to be statistically significant. The PPD induration in atopic asthmatic children (7.41 +/- 5.58 mm) was found to be greater than that in healthy children (5.21 +/- 3.39) (p < 0.039). The induration in atopic asthmatic children (5.21 +/- 3.77) and healthy children (4.43 +/- 2.32) did not show a difference in children who were vaccinated only once with BCG, but it was found to be statistically significantly greater in atopic asthmatic children (12.50 +/- 5.90) than healthy children (7.08 +/- 4.70) who were vaccinated with BCG twice (p <0.012). The proportion of children with a PPD induration of > or =10 mm was found to be higher in atopic asthmatic children than in the healthy ones (32.5 vs. 12.5%) (p <0.032).
Conclusion: Our data showed that the PPD response was stronger in BCG-vaccinated atopic asthmatic children than in healthy BCG-vaccinated ones. abstract_id: PUBMED:34331429 Genotype Associations with the Different Phenotypes of Atopic Dermatitis in Children. This study deals with detecting associations of atopic dermatitis (AD) phenotypes in children (alone, or combined with seasonal allergic rhino-conjunctivitis (SARC) and/or perennial allergic rhinitis (PAR), and/or with bronchial asthma (BA)) with single nucleotide polymorphisms (SNPs) of the filaggrin (FLG), thymic stromal lymphopoietin (TSLP) and orosomucoid-like-1 protein 3 (ORMDL3) genes. Male and female pediatric patients aged 3 to 18 years were recruited into the main group (AD in different combinations with SARC, PAR, BA) and the control group (disorders of the digestive system, neither clinical nor laboratory signs of atopy). Patients were genotyped for the SNP variants rs_7927894 FLG, rs_11466749 TSLP and rs_7216389 ORMDL3. Statistically significant associations with increased risk were detected for AD combined with SARC and/or PAR, and for AD combined with BA (possibly with SARC and/or PAR), with the C/T rs_7927894 FLG and T/T rs_7216389 ORMDL3 genotypes. Genotype C/C rs_7927894 FLG significantly decreases the risk of AD combined with SARC and/or PAR, by 2.56-fold. Several genotype associations trended toward significance: C/C rs_7216389 ORMDL3 decreases and C/T rs_7216389 ORMDL3 increases the risk of developing the AD-alone phenotype; A/G rs_11466749 TSLP decreases the risk of developing the AD combined with BA (possibly SARC and/or PAR) phenotype. abstract_id: PUBMED:37954115 Natural history of eosinophil-derived neurotoxin levels and the onset of allergic airway disease in preschool children. 'Atopic march' is the progression of allergic conditions through infancy and childhood.
The present study investigated the association between blood eosinophil-derived neurotoxin (EDN) levels in preschool children with food allergy (FA) or atopic dermatitis (AD) and the onset of allergic airway disease [bronchial asthma (BA), allergic rhinitis (AR)]. A total of 123 children below the age of 1 year were enrolled in the present study, along with controls (n=37). Blood specimens were taken, serum EDN levels were measured and immunoglobulin E was quantified. Finally, a total of 86 subjects were analyzed. EDN values were measured at 3 time-points: before 1 year of age, before 2 years of age and before 3 years of age. The EDN levels were initially similar between those patients who did and those who did not develop allergic airway disease but then markedly diverged at the 2-year time-point (226.6 vs. 65.0 ng/ml; P<0.01) and remained divergent at the 3-year time-point (173.9 vs. 62.7 ng/ml; P<0.01). EDN levels prior to diagnosis were compared between the two groups and they were much higher in the Onset group (n=10) compared to the Non-onset group (n=67) (171.2±34.28 vs. 81.3±10.02 ng/ml; P=0.003), with 4 cases of BA and 6 cases of AR in the Onset group. After diagnosis, EDN levels were compared twice: i) At 1 and 2 years of age; and ii) 1 and 3 years of age. A significant difference was found only in the comparison at 2 years (P=0.001). In conclusion, young children with elevated EDN levels during the FA/AD disease period were more likely to develop allergic airway disease (BA, AR) in their first three years of life. A factor leading to this progression may be increased eosinophil activity. abstract_id: PUBMED:37843069 Cascade of atopic dermatitis comorbidities in children after birth for 15 years. Background: Individuals with atopic dermatitis often develop other conditions. Objective: This study aimed to determine how atopic dermatitis comorbidities develop in children over time. 
Methods: This population-based administrative cohort study used national health insurance data. We followed individuals born in Korea between 2002 and 2003 until 2018. The date of initial atopic dermatitis diagnosis was set as the index date. Fifty-three childhood comorbidities of atopic dermatitis were identified as outcomes of interest by performing a comprehensive literature search and comparing the prevalence of diagnostic codes in children with and without atopic dermatitis. Four control children per individual in the atopic dermatitis group were randomly matched based on sex and index date. The association between atopic dermatitis and the development of each specified disease was assessed using proportional hazards models, followed by mapping of the temporal sequences of interconnected comorbidities. Results: The atopic dermatitis and control groups contained 67,632 and 270,528 individuals, respectively. The median age at the index date was 10 months, whereas the median follow-up period was 15 years. Twenty diseases associated with a higher risk in children with atopic dermatitis were identified, and a chain of interconnected conditions was created. The progression began in childhood with febrile seizures, constipation, and asthma, and was later associated with the emergence of food allergy, allergic rhinitis, psychiatric disorders, and autoimmune diseases. Conclusion: Our study highlights the temporal nature of atopic dermatitis comorbidities in children, and indicates that an understanding of the comorbidities may inform its clinical management and treatment.
In many children, AD symptoms resolve with increasing age, while respiratory allergies such as asthma and rhinoconjunctivitis may develop at certain ages. The natural course of AD has been documented by many cross-sectional and longitudinal studies in many countries. In general, atopic dermatitis tends to be more severe and persistent in young children, particularly if they have risk factors, including genetic factors. It appears that approximately 40%-70% of childhood AD resolves by the age of 6-7 years. However, it is also observed that over half of children with AD develop respiratory allergy during late childhood. abstract_id: PUBMED:8807510 Patch test response to house dust mites is positive in children with atopic dermatitis and in their parents. We studied the patch test response to Dermatophagoides pteronyssinus in 79 atopic children and in their parents. The atopic children were divided into 3 groups: 1) children with atopic dermatitis (group 1); 2) children who had suffered from atopic dermatitis but whose disease had resolved (group 2); and 3) atopic children with asthma and without atopic dermatitis (group 3). Our data show that a significantly higher proportion of children with atopic dermatitis (groups 1 and 2) have positive patch tests to Dermatophagoides pteronyssinus in comparison with atopic children without atopic dermatitis (group 3) and the controls (p < 0.001). In addition, we have shown that a positive patch test to Dermatophagoides pteronyssinus is significantly more common in parents of children with atopic dermatitis than in parents of children with asthma or parents of control children.
abstract_id: PUBMED:25958372 Profile of sensitization to allergens in children with atopic dermatitis attending the Allergology Service of University Hospital, Nuevo Leon, Mexico Background: Sensitization to allergens in atopic dermatitis patients is a risk factor for developing asthma and allergic rhinitis in the future, as well as an aggravating factor in the course of the disease. Recent studies have suggested that the protease activity of some antigens causes a greater defect in the epithelial barrier and more severe disease. Objective: To determine the pattern of allergen sensitization in children with atopic dermatitis attending the Allergology Service of University Hospital of UANL, Mexico, and to determine whether these children have higher sensitization to antigens with proteolytic activity. Material And Method: A retrospective study was performed, reviewing the skin prick test reports from our service for children aged 5 months to 16 years diagnosed with atopic dermatitis over a period of 2 years, from January 2012 to January 2014. The frequency of sensitization to aeroallergens and food was analyzed, as well as the weal size (≥6 mm) on the skin in response to each particular allergen in the case of food skin prick tests. Results: Reports of skin tests of 66 children, 30 boys and 36 girls, were included; 37 children were sensitized to more than one allergen, 18/66 had asthma and/or allergic rhinitis, and 40/66 (60%) skin prick tests were positive to aeroallergens with high protease activity (Dermatophagoides pteronyssinus/Dermatophagoides farinae). Regarding food, sensitization was seen in 38 children; fruits and vegetables were the two most common foods. Only seven children had a skin prick weal larger than 6 mm, mainly to egg, fish and cow's milk. Conclusions: Children with atopic dermatitis are often sensitized to aeroallergens with high protease activity; polysensitization is very common, and the association with airway allergy is seen early in life.
Sensitization to food is also common in these patients, but only a small percentage showed a response large enough to be associated with disease severity. abstract_id: PUBMED:22359737 The prevalence of atopic dermatitis, asthma, and allergic rhinitis and the comorbidity of allergic diseases in children. Objectives: Childhood allergic diseases are a major concern because they lead to a heavy economic burden and poor quality of life. The purpose of this study was to investigate the prevalence of childhood atopic dermatitis, asthma, allergic rhinitis, and the comorbidity of allergic diseases in Seoul, Korea. Methods: We conducted a cross-sectional survey between May and October 2010 to evaluate the prevalence of childhood allergic diseases, including atopic dermatitis, asthma, and allergic rhinitis, using a questionnaire from the International Study of Asthma and Allergies in Childhood group. Each questionnaire was completed by the parent or guardian of a child. Results: In the 31,201 children studied, the prevalence of atopic dermatitis symptoms in the past 12 months was 19.3% in children 0 to 3 years of age, 19.7% in children 4 to 6 years of age, 16.7% in children 7 to 9 years of age, and 14.5% in children 10 to 13 years of age (p for trend < 0.001). The prevalence of asthma in these age groups was 16.5%, 9.8%, 6.5%, and 5.4%, respectively (p for trend < 0.001). The prevalence of allergic rhinitis in these age groups was 28.5%, 38.0%, 38.5%, and 35.9%, respectively (p for trend = 0.043). The percentage of subjects with both atopic dermatitis and asthma, both asthma and allergic rhinitis, or both atopic dermatitis and allergic rhinitis was 2.5%, 4.7%, and 8.7%, respectively. The prevalence of comorbid allergic diseases decreased with age (p for trend < 0.001). 
Conclusions: Our study revealed that the prevalence of some allergic diseases, such as atopic dermatitis and asthma, was relatively high in very young children and that all of the principal allergic diseases in children often co-exist. abstract_id: PUBMED:30206451 Whole-cell pertussis vaccine (DTwP) has no influence on allergic diseases and atopic sensitization in children. Introduction: Vaccine opponents claim that the infant's immune system is inadequately developed to handle multiple vaccines and may be overwhelmed by them, leading to allergic diseases. Aim: To verify the association between the vaccine antigen overload derived from DTwP and the development of atopic sensitization and allergic diseases. Material And Methods: Data from an earlier established birth cohort in Krakow, followed up to the 6th year of life, were used. Allergic diseases such as eczema, hay fever and asthma were diagnosed by a physician and reported every half-year from the 1st to the 6th year of life by the child's parent. Skin prick tests (SPT) were performed in children at 5 years of age. The data on infants' vaccination were extracted from the physician's records. The status of vaccine antigen exposure was based on the different types of vaccines against pertussis (DTwP or DTaP) used in the primary course. Results were determined by multiple logistic regression, adjusted for potential confounders. Results: The analyzed population consisted of 234 children: 53.4% boys and 46.6% girls. Infants up to the age of 8 months were vaccinated with the primary course against pertussis, with DTwP in 60.7%, DTaP in 32.9%, and a mixed course (DTwP + DTaP) in a further 6.4%. There were no significant relationships between any of the vaccination groups and allergic disease or allergen sensitivity in the multiple logistic regression model adjusted for potential confounders.
Conclusions: The exposure to a large number of vaccine antigens derived from DTwP has no influence on the development of allergic diseases and atopic sensitization in children. abstract_id: PUBMED:25966818 Epidemiology and Comorbidity in Children with Psoriasis and Atopic Eczema. Background: Initial studies have shown that juvenile psoriasis is associated with an increased prevalence of comorbidity. Objectives: We carried out a data analysis to characterise the profiles of comorbidity in children with psoriasis and atopic eczema. Methods: Prevalence data were derived from the database of a German statutory health insurance company according to ICD-10 codes L40 (psoriasis) and L20 (atopic eczema) for children up to 18 years insured in 2009. Results: Data sets included 1.64 million persons and 293,181 children. 1,313 children (0.45%; 0.42-0.47) had a diagnosis of psoriasis and 30,354 (10.35%; 10.24-10.47) had a diagnosis of atopic eczema. Obesity, hyperlipidaemia, arterial hypertension and diabetes were more often diagnosed in children with psoriasis in comparison to all children without psoriasis and to those with atopic eczema. Conclusion: Children with psoriasis and atopic eczema show different and specific patterns of comorbidity which should be detected early and treated adequately. Answer: Yes, the mycobacteria-derived purified protein response in atopic asthmatic children is different. A study that investigated this response in atopic asthmatic children compared to healthy children found that the purified protein derivative (PPD) response was stronger in BCG-vaccinated atopic asthmatic children than in healthy BCG-vaccinated ones. Specifically, the PPD induration in atopic asthmatic children was greater than in healthy children, and the proportion of children with a PPD induration of ≥10 mm was higher in atopic asthmatic children than in the healthy ones.
This suggests that the PPD response, which is primarily mediated by a T-helper-1 response, is not inhibited in atopic asthmatic children as might be expected in atopic diseases (PUBMED:15542937).
Instruction: Trends in cancer prognosis in a population-based cohort survey: can recent advances in cancer therapy affect the prognosis? Abstracts: abstract_id: PUBMED:25541411 Trends in cancer prognosis in a population-based cohort survey: can recent advances in cancer therapy affect the prognosis? Background: The aim of the study was to investigate trends in cancer prognosis by examining the relationship between period of diagnosis and probability of death from cancer in a population-based cohort. Methods: Within a cohort of Japanese men and women aged 40-69 years and free of prior diagnosis of cancer and cardiovascular disease at baseline, data from 4403 patients diagnosed with cancer between 1990 and 2006 and followed up until 2012 were analyzed using survival regression models to assess the presence of an effect of the period of diagnosis (before 1998 versus after 1998) on the risk of dying from cancer. Results: We noted a significant decrease in risk of dying from cancer among individuals diagnosed after 1998 with lung cancer (hazard ratio [HR]=0.676 [0.571-0.800]) or colorectal cancer (HR=0.801 [0.661-0.970]). A decrease in the estimated five-year probability of death from cancer was also noted between the first (before 1998) and the second (after 1998) period of diagnosis for lung and colorectal cancers (e.g., 85.4% vs. 73.3% for lung cancer and 44.6% vs. 37.7% for colorectal cancer, respectively, for stage III in men aged 60 at diagnosis). Conclusions: This study presented the first scientific evidence of improvement in prognosis for lung and colorectal cancer patients in a population-based cohort in Japan. Our results suggest that recent advances in cancer treatment could have influenced cancer survival differently among lung, colorectal and gastric cancers. abstract_id: PUBMED:22876030 Methodology for high-quality studies on course and prognosis of inflammatory bowel disease. 
Inflammatory bowel diseases (IBDs) are characterized by a chronic course with an alternation of relapses and remissions. Questions about prognosis are important for the patient who wants to know how the disease will affect his/her life and also for clinicians to make management decisions. Correct selection of the patients is the basis for good methodological studies on the course of IBD. A great proportion of data on the course of IBD is derived from a limited number of cohort studies. Studies help to define the endpoints for clinical trials and to identify subsets of patients in whom the prognosis of the disease can be stratified according to clinical features. Specific scientific requirements for high-quality studies on prognosis are the following: use of inception cohort, description of referral patterns, completeness of follow-up, objective outcome criteria, blind outcome assessment, adjustment for extraneous prognostic factors and statistical issues. We analyzed each of these requirements in studies on IBDs. To date, prospective and population-based cohort studies are the standard for an unbiased assessment of prognosis. A better knowledge of the course of disease of chronic disorders ideally requires: (1) data from population-based studies, to avoid selection bias from referral centers in which patients with a more severe disease are usually treated; (2) inclusion of patients seen at the onset of the disease excluding misdiagnosed cases; and (3) follow-up from the onset of the disease to the end without dropouts. abstract_id: PUBMED:37994633 Subsite-specific trends in mid- and long-term survival for head and neck cancer patients in Japan: A population-based study. Advances in diagnostic techniques and treatment modalities have impacted head and neck cancer (HNC) prognosis, but their effects on subsite-specific prognosis remain unclear. 
This study aimed to assess subsite-specific trends in mid- and long-term survival for HNC patients diagnosed from 1993 to 2011 using data from population-based cancer registries in Japan. We estimated the net survival (NS) for HNC by subsite using data from 13 prefectural population-based cancer registries in Japan. Changes in survival over time were assessed by multivariate excess hazard model of mortality. In total, 68,312 HNC patients were included in this analysis. We observed an overall improvement in 5-year NS for HNC patients in Japan. However, survival varied among subsites of HNC, with some, such as naso-, oro- and hypopharyngeal cancers, showing significant improvement in both 5- and 10-year NS, whereas others such as laryngeal cancer showed only a slight improvement in 5-year NS and no significant change in 10-year NS after adjustment for age, sex and stage. In conclusion, the study provides insights into changing HNC survival by site at the population level in Japan. Although advances in diagnostic techniques and treatment modalities have improved survival, these improvements are not shared equally among subsites. abstract_id: PUBMED:23106225 Melanoma epidemiology, prognosis and trends in Latvia. Background: Melanoma incidence and mortality rates are increasing worldwide within the white population. Clinical and histological factors have been usually used for the prognosis and assessment of the risk for melanoma. Objectives: The aim of the study was to describe the clinical and histopathological features of the cutaneous melanoma (CM) in the Latvian population, to test the association between melanoma features and patient survival, and to assess the time trends for melanoma incidence. Methods: We undertook a descriptive, retrospective analysis of archive data of 984 melanoma patients treated at the largest oncological hospital of Latvia, Riga East University Hospital Latvian Oncology Centre (LOC), between 1998 and 2008. 
Cox proportional hazards model was used to analyse patient survival and autoregressive models were applied to detect trends in melanoma incidence over time for various categories of melanoma. Results: The study showed a significant ascending trend in melanoma incidence in Latvia during the time period from 1998 to 2008 (β = 1.83, 95% CI = 1.15-2.91, P = 0.011). Nodular melanoma was the most common tumour subtype with a frequency of 39.2%. Ulceration was present in 45.2% of melanomas. The mean Breslow thickness was 6.0 mm (6.8 mm) and no significant decline in median Breslow thickness was observed during the study period (P = 0.609). A better overall prognosis was detected for females in comparison with males (HR = 1.49; 95% CI = 1.22-1.81; P < 0.001). Conclusions: There is a steady increase in melanoma incidence in Latvia with the majority of melanomas diagnosed at late stages with poor prognosis for survival.
Results: Although interval cancers on average were slightly larger than the cancers in women not invited to screening, the histological type or status of axillary lymph nodes did not differ noticeably between the two groups. Among interval cancers, there were no appreciable trends in size, nodal status, grade, or hormone receptor positivity associated with time since the last normal mammogram as a marker of growth rate. After 10 years of follow-up, the survival rates were 79.1% (95% confidence interval 75.4% to 82.3%) among women with interval cancers and 76.8% (75.3% to 78.2%) among women in the non-screened cancer group (hazard ratio 0.98, 95% confidence interval 0.84 to 1.15; P=0.53). Analyses stratified by time since last normal mammogram, age at diagnosis, or screening round showed similar results. Conclusion: The prognosis of women with interval breast cancers was the same as that of women with breast cancers diagnosed without mammography screening. abstract_id: PUBMED:36748436 The other colon cancer: a population-based cohort study of appendix tumour trends and prognosis. Aim: Appendiceal neoplasms are rare subtypes of colorectal tumours that mainly affect younger patients some 20 years earlier than other colon tumours. The aim of this study was to gain more insight into the histological subtypes of this rare disease and include cases previously excluded, such as mucinous neoplasia. Method: The cohort study included 1097 patients from the Munich Cancer Registry (MCR) diagnosed between 1998 and 2020. Joinpoint analysis was used to determine trend in incidence. Baseline demographic comparisons and survival analyses using competing risk and univariate/multivariate methods were conducted according to tumour histology: adenocarcinoma (ADENO), neuroendocrine neoplasia (NEN), mixed adeno-neuroendocrine carcinoma (MANEC), and low- (LAMN) and high-grade mucinous neoplasia (HAMN). 
Results: Up to 2016, the number of cases increased significantly [annual per cent change (APC) = 6.86, p < 0.001], followed by a decline in subsequent years (APC = -14.82, p = 0.014; average APC = 2.5, p = 0.046). Comparison of all patients showed that NEN (48.4%) and mucinous neoplasms (11.6%) had a considerably better prognosis than ADENO (36.0%) and MANEC (3.0%, p < 0.0001). A multivariate analysis within the NEN and ADENO subgroups revealed that further histological classification was not prognostically relevant, while older age and regional tumour spread at diagnosis were associated with a poor prognosis. ADENO histology with high tumour grade and appendectomy only was also associated with poorer survival. Conclusion: Appendiceal neoplasms are histologically heterogeneous; however, this diversity becomes less relevant compared with the marked difference from cancers of the remaining colon. The previously observed increase in cases appears to be abating; fewer cases of appendicitis and/or appendectomies or changes in histopathological assessment may be behind this trend.
We evaluated the relationship between these histological groups and survival time. After adjusting for other clinical factors, we calculated the hazard ratio for Por/Muc/Sig. Results: A total of 1,151 patients were enrolled, and 1,031 and 120 had Wel/Mod and Por/Muc/Sig, respectively. The median overall survival was 19.2 and 11.9 months for Wel/Mod and Por/Muc/Sig, respectively (p < 0.001). The adjusted hazard ratio for Por/Muc/Sig with regard to survival time was 1.42 (95% confidence interval: 1.13-1.77). Por/Muc/Sig had a lower incidence of liver and lung metastases and a higher incidence of peritoneal dissemination and metastasis to rare organs, such as the bone and brain. Conclusions: The Por/Muc/Sig histological subtype was an independent prognostic factor for poor prognosis among patients with stage IV colorectal cancer. The histological subtype may be useful for predicting the prognosis of patients with stage IV colorectal cancer and designing the treatment strategy. abstract_id: PUBMED:37060880 Antimuscarinics and lung cancer survival: A Norwegian population-based cohort study. Objectives: Epidemiological studies have reported an association between antimuscarinics and reduced risk of cancer, including lung cancer (LC). However, the potential association between antimuscarinic use and LC prognosis has not previously been assessed. In a large population-based cohort, we aimed to investigate the association between the use of antimuscarinics and LC-specific survival. Materials And Methods: Norwegian residents, aged ≥ 50 years, and diagnosed with LC between 2005 and 2018, were identified in the Cancer Registry of Norway, and information on filled prescriptions was obtained from the Norwegian Prescription Database. We used Cox proportional hazard models to estimate hazard ratios (HR) and 95 % confidence intervals (CI) for the association between peri-diagnostic and post-diagnostic use of antimuscarinics and LC-specific survival. 
Results: We included 26,693 patients with incident primary invasive LC. Of these, 466 (1.7 %) were peri-diagnostic users, and 877 (3.3 %) were post-diagnostic users of antimuscarinics, respectively. During a median follow-up of nine months, 18,088 (67.8 %) patients died due to LC. In the overall LC population, the HRs for the association between the use of antimuscarinics, compared to no use, were estimated at 1.01 (95 %CI: 0.90-1.12) for peri-diagnostic use, and 0.84 (95 %CI: 0.77-0.92) for post-diagnostic use. The association with post-diagnostic use was observed in many subgroups defined by sex, age, smoking status, histopathology, and stage, except for patients with unspecified or other histopathology than small cell LC and non-small cell LC, and for patients with local disease. The association was observed in patients treated with chemotherapy (HR = 0.75, 95 %CI: 0.64-0.88), but not in those not treated with chemotherapy (HR = 1.00, 95 %CI: 0.86-1.17; p for interaction: 0.007). Conclusion: Our results suggest a possible association between use of antimuscarinics and longer LC-specific survival. More studies are warranted to investigate the use of antimuscarinics to possibly prolong LC prognosis. abstract_id: PUBMED:33845291 Emerging technologies and commercial products in exosome-based cancer diagnosis and prognosis. Academic and industrial groups worldwide have reported technological advances in exosome-based cancer diagnosis and prognosis. However, the potential translation of these emerging technologies for research and clinical settings remains unknown. This work overviews the role of exosomes in cancer diagnosis and prognosis, followed by a survey on emerging exosome technologies, particularly microfluidic advances for the isolation and detection of exosomes in cancer research. 
The advantages and drawbacks of each of the technologies used for the isolation, detection and engineering of exosomes are evaluated to address their clinical challenges for cancer diagnosis and prognosis. Furthermore, commercial platforms for exosomal detection and analysis are introduced, and their performance and impact on cancer diagnosis and prognosis are assessed. Also, the risks associated with the further development of the next generation of exosome devices are discussed. The outcome of this work could facilitate recognizing deliverable Exo-devices and technologies with unprecedented functionality and predictable manufacturability for the next-generation of cancer diagnosis and prognosis. abstract_id: PUBMED:34352658 Routes to diagnosis and the association with the prognosis in patients with cancer - A nationwide register-based cohort study in Denmark. Background: The prognosis of cancer is related to how the cancer is identified, and where in the healthcare system the patient presents, i.e. routes to diagnosis (RtD). We aimed to describe the RtD for patients diagnosed with cancer in Denmark by using routinely collected register-based data and to investigate the association between RtD and prognosis measured as one-year all-cause mortality. Methods: We conducted a population-based national cohort study by linking routinely collected Danish registry data. We categorised each patient into one of eight specified RtD based on an algorithm using a stepwise logic decision process. We described the proportions of patients with cancer diagnosed by different RtD. We examined associations between RtD and one-year all-cause mortality using logistic regression models adjusting for sex, age, cancer type, year of diagnosis, region of residence, and comorbidity. Results: We included 144,635 cancers diagnosed in 139,023 patients in 2014-2017. 
The most common RtD were cancer patient pathway from primary care (45.9 %), cancer patient pathway from secondary care (20.0 %), unplanned hospital admission (15.8 %), and population-based screening (7.5 %). The one-year mortality ranged from 1.4 % in screened patients to 53.0 % in patients diagnosed through unplanned hospital admission. Patients with an unplanned admission were more likely to die within the first year after diagnosis (OR = 3.38 (95 %CI: 3.24-3.52)) compared to patients diagnosed through the cancer patient pathway from primary care. Conclusion: The majority of cancer patients were diagnosed through a cancer patient pathway. The RtD were associated with the prognosis, and the prognosis was worst in patients diagnosed through unplanned admission. The study suggests that linking routinely collected registry data could enable a national framework for RtD, which could serve to identify variations across patient-, health-, and system-related and healthcare factors. This information could be used in future research investigating markers for monitoring purposes. Answer: Recent advances in cancer therapy have indeed affected the prognosis of various cancers, as evidenced by population-based cohort studies. A study in Japan found a significant decrease in the risk of dying from lung cancer (HR=0.676) and colorectal cancer (HR=0.801) among individuals diagnosed after 1998, suggesting that recent advances in cancer treatment have influenced survival rates for these cancers (PUBMED:25541411). Similarly, a study on head and neck cancer (HNC) in Japan observed an overall improvement in 5-year net survival (NS) for HNC patients, although the extent of improvement varied among different subsites of HNC (PUBMED:37994633). In Latvia, a significant ascending trend in melanoma incidence was observed from 1998 to 2008, with the majority of melanomas diagnosed at late stages, indicating a poor prognosis for survival (PUBMED:23106225). 
For breast cancer, a study found that the prognosis of women with interval breast cancers was the same as that of women with breast cancers diagnosed without mammography screening (PUBMED:23160783). A study on appendix tumors, a rare subtype of colorectal tumors, showed that the number of cases increased significantly up to 2016, followed by a decline in subsequent years. The study also found that neuroendocrine neoplasia (NEN) and mucinous neoplasms had a considerably better prognosis than adenocarcinoma (ADENO) and mixed adeno-neuroendocrine carcinoma (MANEC) (PUBMED:36748436). In stage IV colorectal cancer, histological subtypes were found to impact prognosis, with poorly differentiated adenocarcinoma (Por), mucinous adenocarcinoma (Muc), or signet-ring cell carcinoma (Sig) associated with a poorer prognosis compared to well or moderately differentiated adenocarcinoma (PUBMED:35239725). A Norwegian study on lung cancer survival suggested a possible association between the use of antimuscarinics and longer lung cancer-specific survival, particularly in patients treated with chemotherapy (PUBMED:37060880). Lastly, emerging technologies, particularly in exosome-based cancer diagnosis and prognosis, have been reported by academic and industrial groups, which could potentially translate to clinical settings and impact cancer prognosis (PUBMED:33845291).
Instruction: Familial risks of breast and prostate cancers: does the definition of the at risk period matter? Abstracts: abstract_id: PUBMED:34503195 Familial Risks and Proportions Describing Population Landscape of Familial Cancer. Background: Familial cancer can be defined through the occurrence of the same cancer in two or more family members. We describe a nationwide landscape of familial cancer, including its frequency and the risk that it conveys, by using the largest family database in the world with complete family structures and medically confirmed cancers. Patients/methods: We employed standardized incidence ratios (SIRs) to estimate familial risks for concordant cancer among first-degree relatives using the Swedish Cancer Registry from years 1958 through 2016. Results: Cancer risks in a 20-84 year old population conferred by affected parents or siblings were about two-fold compared to the risk for individuals with unaffected relatives. For small intestinal, testicular, thyroid and bone cancers and Hodgkin disease, risks were higher, five-to-eight-fold. Novel familial associations included adult bone, lip, pharyngeal, and connective tissue cancers. Familial cancers were found in 13.2% of families with cancer; for prostate cancer, the proportion was 26.4%. High-risk families accounted for 6.6% of all cancer families. Discussion/conclusion: High-risk family history should be exceedingly considered for management, including targeted genetic testing. For the major proportion of familial clustering, where genetic testing may not be feasible, medical and behavioral intervention should be indicated for the patient and their family members, including screening recommendations and avoidance of carcinogenic exposure. abstract_id: PUBMED:20022238 Familial risks of breast and prostate cancers: does the definition of the at risk period matter? Aim: 'Being at familial risk' may have different connotations in studies on familial risk of cancer. 
The register-based definition of a family history considers individuals with an affected relative at familial risk independently of the family member's diagnostic time. Alternatively, the individuals are classified to be at familial risk only after the diagnosis date of their relative, relevant to clinical counselling and screening situations. The aim of this study was to compare familial breast and prostate cancer risks according to the two definitions. Patients And Methods: The nationwide Swedish Family-Cancer Database with information on cancers from 1958 to 2006 was used to calculate the hazard ratio of breast and prostate cancers according to family history using Cox regression. Family history was defined considering the number and type of affected relatives and the relative's diagnostic age, respectively. Individuals were considered at familial risk from their entry to the study or, alternatively, from the diagnostic time of the relative. Results: Hazard ratios were equal whether individuals were considered at risk independent of the relative's diagnostic date or only after the relative's diagnostic date. Conclusion: These results indicate that studies on familial breast or prostate cancer risk which do not take the relative's diagnosis date into account are applicable to screening and clinical counselling situations. The estimates according to the register-based definition are based on larger numbers of patients, which may be crucial for analysis of small groups such as families of multiple cases. abstract_id: PUBMED:36882784 Are population level familial risks and germline genetics meeting each other? Large amounts of germline sequencing data have recently become available and we sought to compare these results with population-based family history data. Family studies are able to describe aggregation of any defined cancers in families. 
The Swedish Family-Cancer Database is the largest of its kind in the world, covering Swedish families over nearly a century, with all cancers in family members since the start of national cancer registration in 1958. The database allows estimation of familial risks, ages of cancer onset and the proportion of familial cancer in different family constellations. Here, we review the proportion of familial cancer for all common cancers and specify them based on the number of affected individuals. With the exception of a few cancers, age of onset of familial cancer is not different from all cancers combined. The highest proportions of familial cancer were found for prostate (26.4%), breast (17.5%) and colorectal (15.7%) cancers, but the proportions of high-risk families with multiple affected individuals were only 2.8%, 1% and 0.9%, respectively. A large sequencing study on female breast cancer found that BRCA1 and BRCA2 mutations could account for 2% of the cases (subtracting the proportions in healthy individuals) and that all germline mutations accounted for 5.6% of the cases. Early age of onset was a distinct feature of BRCA mutations only. In heritable colorectal cancer, Lynch syndrome genes dominate. Large studies on penetrance in Lynch syndrome have shown an approximately linear increase in risk from 40-50 years up to age 80 years. Interesting novel data revealed a strong modification of familial risk by unknown factors. High-risk germline genetics of prostate cancer is characterized by BRCA and other DNA repair genes. HOXB13 encodes a transcription factor that contributes to germline risk of prostate cancer. A strong interaction was shown with a polymorphism in the CIP2A gene. The emerging germline landscape of common cancers can be reasonably accommodated by family data on these cancers in terms of high-risk proportions and age of onset. abstract_id: PUBMED:22116601 Familial risk and familial survival in prostate cancer.
Objectives: The Swedish Family-Cancer Database has been the major source of population-based prospective data on familial risks on most cancers, including prostate cancer. In the present review, I focus on three lines of family studies with novel results. Methods: The covered studies originate from the same database with publication dates spanning a period of somewhat over 3 years. Age-specific hazard ratios (HRs) of incident prostate cancer and fatal prostate cancer were determined according to the number of affected relatives. Cumulative risks for incidence and death were calculated for the various types of families. Finally, data on the familial association of prostate cancer with some other cancers were reviewed. Results: If the father had prostate cancer, the HR for sons was 2.1 and it increased to 17.7 when three brothers were affected. The highest HR (23) was observed for men before age 60 years with three affected brothers. The patterns for fatal familial prostate cancer were similar. Other reviewed sets of data showed evidence for the familial concordance of good and poor survival in prostate cancer. Familial risks were somewhat higher for fatal than for incident prostate cancer, suggesting that fatal prostate cancers may be a genetic subgroup. Conclusions: Considering the high familial risks in fatal prostate cancer, family history remains an important prognostic piece of information useful for clinical genetic counseling. Obviously, preventive measures for at-risk men are needed but these are beyond the present paper. Starting screening before any prostate cancers are diagnosed in a family appears counterproductive. abstract_id: PUBMED:29449591 Familial Risks Between Urolithiasis and Cancer. Urolithiasis (UL, urinary tract stone disease) has been reported to increase subsequent cancers in the urinary tract. Recently, we showed data that surveillance bias may be an important confounder in the reported associations. 
In the present approach, we address the question of possible cancer risk posed by UL mechanistically. Both UL and cancer have strong genetic components, and we hypothesize that a familial association between UL and cancer may be plausible. We thus assess familial risks between UL and cancer, hoping to find an explanation for why UL may pose a risk of cancer. UL patients were identified from hospital inpatient and outpatient records and were organized into families based on the Multigeneration Register, into which national cancer data were also linked. Standardized incidence ratios were calculated for cancer in the offspring generation when parents were diagnosed with UL, and conversely for UL when parents were diagnosed with cancer. Familial risks between UL and cancer were generally small and inconsistent, providing no convincing support for genetic sharing between UL and cancer. However, bladder UL was associated weakly with prostate cancer, and ureter and bladder UL were associated with salivary gland cancer. Potential mechanisms for these findings are proposed. abstract_id: PUBMED:28801919 Familial associations of female breast cancer with other cancers. Familial risks of breast cancer (BC) are well established, but whether BC clusters with other, i.e. discordant, cancers is less certain but of interest for the identification of common genetic and possible environmental factors contributing to a general cancer susceptibility. We apply a novel approach to search for familial associations of BC with other (discordant) cancers based on the Swedish Family-Cancer Database. Relative risks (RRs) were calculated for BC in families with increasing numbers of patients with discordant cancer X, and conversely, familial RRs for cancer X in families with increasing numbers of BC patients. Joint p-values were calculated from independent analyses.
The total number of familial BCs was 12,266; 14.9% had one first-degree relative with BC and 1.2% had at least 2 affected relatives. Ovarian and prostate cancers showed the strongest associations with BC (p-value < 10^-11). The p-value for melanoma was < 10^-6, for stomach and male colorectal cancer < 2.5 × 10^-6, for cancer of unknown primary < 2.5 × 10^-5 and for lung cancer < 5 × 10^-5. A significance level of < 5 × 10^-4 was reached with pancreatic cancer. The remaining associations (p < 0.0025) included thyroid, endometrial, testicular, eye cancers (uveal melanoma), nervous system and endocrine tumors and non-Hodgkin lymphoma. The RR for BC increased with increasing numbers of patients with any cancer in family members, and it reached 1.62 when three or more family members were affected. The results suggest that BC shares susceptibility with a number of other cancers. This might alert genetic counselors and challenge approaches for gene and gene-environment identification. abstract_id: PUBMED:27498599 Familial Associations Between Prostate Cancer and Other Cancers. Prostate cancer (PCa) has a large familial component, but understanding of its genetic basis is fragmentary. Breast cancers may be associated with PCa, but whether this is true for other tumor types is poorly established. We used a novel approach to study familial associations of any type of cancer with PCa. We assessed the relative risk (RR) for all types of tumors as a function of the number of first-degree relatives diagnosed with PCa. We hypothesized that for a familial association to be real, the RR for a given type of cancer should increase with the number of PCa diagnoses. In families with multiple PCa patients, significantly increased risks were observed for female breast cancer (RR 1.37 for families with three men with PCa), kidney cancer (RR 2.32), nervous system tumors (RR 1.77; RR 2.40 when PCa was diagnosed before age 70 yr), and myeloma (RR 2.44; RR 6.29 when PCa was diagnosed before age 70 yr).
Some evidence of association was also found for melanoma (RR 1.82) and endocrine tumors (RR 2.18). The consistency and magnitude of the effects suggest that familial PCa is genetically associated with breast, kidney, and nervous system tumors and myeloma. This suggestion has implications for clinical counseling and design of genetic studies. Patient Summary: It is known that prostate cancer runs in families, but it is not known whether other cancers are common in such families. We showed that at least breast, kidney, and nervous system tumors and myeloma occur more often than by chance. abstract_id: PUBMED:10077145 Familial cancer risks in offspring from discordant parental cancers. Analysis of familial cancer risks between discordant sites provides etiologic understanding on genetic and environmental risks factors of site-specific cancers. We used the Swedish nation-wide Family-Cancer Database to analyze familial risks in discordant cancers of offspring and parents. Familial risk ratios (FRRs) were calculated for cancer in offspring aged 15 to 53 years at 22 sites, discordant from parental sites. We confirmed many reported associations. Consistent novel findings associated parental-offspring sites of pancreas-breast, breast-testis and uterus-nervous system. For these, the FRRs were modest, 1.2 to 1.5 in the whole Database, but the FRRs increased in those whose parents were diagnosed before age 50. Pancreas and liver cancers showed FRRs of 2.5 to 3.3 in offspring of women and of 1.3 in offspring of men. One or both of these cancers was/were associated with cancers of stomach, colon, breast, uterus, ovary and prostate. Melanoma was associated with pancreas, breast, skin and nervous-system cancers and with leukemias. Myeloma showed a concordant FRR of about 4.0 and was associated with prostate cancer and non-thyroid endocrine-gland cancers. Mutations in known cancer-related genes may explain some of these findings, but new susceptibility genes are yet to be found. 
For melanoma, pancreatic and liver cancer, environmental factors are important etiologic factors and may contribute to the familial effects observed. abstract_id: PUBMED:12455061 Familial risk of cancer by site and histopathology. Familial risks for histopathology-specific cancers have not been determined. We used the nationwide Swedish Family-Cancer Database on 10.2 million individuals and 1 million tumors to calculate standardized incidence ratios (SIRs) for familial cancers of specific histology and morphology among 0- to 66-year-old offspring. We used histology codes for both offspring and parents, but because of the limited number of cases, the morphology-specific classification could be used only for offspring by all site-specific cancers in parents, resulting in inflated risk estimates. A number of novel findings emerged in the histopathology-specific analysis of familial risks, in addition to some known associations. Overall, specific histology showed an SIR of 2.07 for all cancers compared to an SIR of 2.00 for any histology. However, the small effect was due to breast and prostate cancers, which showed a negligible effect of specific histology. Familial risks of over 4.0 were found for serous papillary cystadenocarcinoma of the ovary, papillary thyroid cancer and low-grade astrocytoma. Familial risks of over 3.0 were found for signet-ring gastric cancer, various forms of ovarian cancer and squamous cell skin cancer. Also noteworthy were familial risks of hepatocellular carcinoma (2.48), pancreatic adenocarcinoma (1.92), large cell carcinoma and adenocarcinoma of the lung (2.29 and 2.18, respectively) and clear cell carcinoma of the kidney (2.73). Many of the findings were novel and could be revealed only by applying codes for specific histopathology. These data call for a closer description of familial aggregations and probing for the underlying genetic mechanisms. 
abstract_id: PUBMED:16264177 Familial risk of cancer shortly after diagnosis of the first familial tumor. Background: The diagnosis of the first cancer in a family may lead to the medical examination of the patient's relatives and the subsequent identification of additional familial cancers. If detection bias is present, familial risks could be overestimated soon after first diagnosis. Methods: We followed 1,677,722 offspring/siblings of 846,448 probands from the year of diagnosis of the first familial tumor to the diagnosis of first cancer, death, emigration, or December 31, 2002, using the Swedish Family Cancer Database. The risks of cancer among the offspring and siblings of patients with melanoma and cancers of the breast, prostate, colorectum, cervix, and lung were compared with those in the general population. Relative risks (RRs) were determined using Poisson regression, according to the time after first diagnosis. All statistical tests were two-sided. Results: Daughters of women with breast cancer had a statistically significantly higher relative risk of in situ breast cancer during the year of the mother's diagnosis than they did 5 or more years later (RR = 4.78, 95% confidence interval [CI] = 2.16 to 10.6, 26.6 cases per 100,000, versus RR = 1.97, 95% CI = 1.65 to 2.37, 27.2 cases per 100,000; P = .033). Daughters diagnosed the same year as their mothers were younger and were diagnosed earlier in the calendar year than daughters of women diagnosed 5 or more years after their mothers. Similarly, the risk of invasive melanoma among the offspring of individuals with invasive melanoma was higher during the year of the parent's diagnosis than it was 5 or more years afterward (RR = 8.27, 95% CI = 3.82 to 17.9, 57.0 cases per 100,000, versus RR = 3.18, 95% CI = 2.55 to 3.97, 37.6 cases per 100,000; P = .019). 
Sibling risks of in situ breast cancer, in situ cervical cancer, and invasive prostate cancer also decreased with time after diagnosis of the first familial tumor. Conclusions: Increased surveillance may result in the earlier detection of asymptomatic familial cancers, i.e., in detection bias. The possibility of overestimated familial risks of cancer shortly after diagnosis of the first familial tumor should be considered before a patient's clinical and genetic counseling is implemented. Answer: The definition of the at-risk period for familial risks of breast and prostate cancers does not significantly impact the hazard ratios (HRs) for these cancers. A study compared familial breast and prostate cancer risks using two definitions: one where individuals with an affected relative were considered at familial risk regardless of the relative's diagnostic time, and another where individuals were classified as at familial risk only after the diagnosis date of their relative. The results showed that hazard ratios were equal whether individuals were considered at risk independent of the relative's diagnostic date or only after the relative's diagnostic date. This indicates that studies on familial breast or prostate cancer risk that do not take the relative's diagnosis date into account are applicable to screening and clinical counseling situations (PUBMED:20022238). Therefore, for the purposes of assessing familial risk and implementing screening and clinical counseling, the definition of the at-risk period, whether it is from birth or from the time of a relative's diagnosis, does not matter. The estimates according to the register-based definition are based on larger numbers of patients, which may be crucial for analysis of small groups such as families with multiple cases (PUBMED:20022238).
Instruction: Is peritoneal cytology a predictor of unresectability in pancreatic carcinoma? Abstracts: abstract_id: PUBMED:15532836 Is peritoneal cytology a predictor of unresectability in pancreatic carcinoma? Background/aims: Demonstration of unresectability in pancreatic carcinoma often requires diagnostic laparotomy, with high co-morbidity. The value of detected peritoneal tumor cells and their influence on outcome should be determined. Methodology: In a prospective study, 150 consecutive patients with pancreatic carcinoma were evaluated. Improvement of diagnostic accuracy in diagnosing unresectability was calculated for the combination of CT, endoscopic ultrasonography and peritoneal cytology. Results: Unexpected subglissonian metastases were found in 10%. 87 patients with peritoneal washings were included in the study; 20 of the patients with detected peritoneal tumor cells (n=22) were inoperable. In all patients of this group, the ventral integrity of the pancreas was damaged. Of 23 patients with preexisting ascites, peritoneal tumor cells could be detected in only 4 cases. With positive cytology together with a disrupted ventral pancreatic margin as predictors of unresectability, sensitivity was 100% and specificity 96%. Conclusions: Addition of diagnostic laparoscopy combined with peritoneal cytology and minimally invasive palliation procedures (endoscopic biliary prosthesis, laparoscopic gastrojejunostomy) to the staging protocol for pancreatic carcinoma may decrease operative lethality, and unnecessary laparotomies may be avoided safely. As unexpected subglissonian micrometastases may additionally be present in 5-10%, perioperative laparoscopy is always advisable. abstract_id: PUBMED:10195727 Positive peritoneal cytology predicts unresectability of pancreatic adenocarcinoma. Background: Peritoneal cytology is clinically useful in gastric and gynecologic malignancies. Its role in pancreatic adenocarcinoma remains less well defined.
Controversy exists as to the relationship between percutaneous fine needle aspiration (FNA) of the pancreas and shedding of malignant cells into the peritoneum. The aim of this study was to determine whether positive peritoneal cytology (PPC) predicts unresectability of pancreatic adenocarcinoma and impacts overall survival. In addition, the study aimed to determine whether antecedent FNA increases the incidence of PPC. Study Design: Between January 1993 and June 1996, 228 patients with radiographically resectable pancreatic adenocarcinoma underwent laparoscopic staging. Specimens were taken from right and left upper quadrants at the beginning of laparoscopy. Various prognostic factors were analyzed. Results: PPC was identified in 34 patients (15%). Of patients who had an antecedent FNA, 20% had PPC, and 13% of those without an antecedent FNA had PPC (p = 0.22). The majority of patients with PPC had stage IV disease (26 of 34 [76%]) and only 8 (24%) had no evidence of metastases. Overall survival was significantly higher in patients with negative peritoneal cytology (NPC) compared with PPC (p<0.0006). PPC had a positive predictive value of 94.1%, specificity of 98.1%, and a sensitivity of 25.6% for determining unresectability of pancreatic adenocarcinoma. PPC was not an independent prognostic variable for survival on multivariate analysis. Conclusions: PPC is associated with advanced disease and is highly specific in predicting unresectability of pancreatic adenocarcinoma, resulting in decreased survival. Antecedent FNA is not associated with an increased incidence of PPC, nor does it significantly impact overall survival. abstract_id: PUBMED:26823852 CA724 is a novel factor for predicting the unresectability in pancreatic adenocarcinoma. This study aimed to assess the relationship between serum CA724 levels and the unresectability of pancreatic adenocarcinoma.
A total of 302 patients with pancreatic adenocarcinoma were analyzed for the potential association between serum CA724 levels and the unresectability of pancreatic adenocarcinoma. Serum CA724 levels in patients with unresectable pancreatic adenocarcinoma were remarkably higher than those with resectable pancreatic adenocarcinoma (P < 0.001). Patients with elevated serum CA724 levels exhibited a 12.27-fold higher risk of unresectability than those with normal serum CA724 levels after adjusting for age, sex, and tumor location (95% CI = 5.28-28.51, P < 0.001). The analysis of receiver operating characteristics demonstrated that CA724 had superior predictive value to other tumor markers (AUC was 0.77 ± 0.03, 0.65 ± 0.04, and 0.62 ± 0.04 for CA724, CA125, and CA199, respectively). CA724 appeared to be a better predictor of unresectability than CA199 and CA125. abstract_id: PUBMED:33389074 Predicting positive peritoneal cytology in pancreatic cancer. Introduction: Positive cytology from peritoneal washings obtained prior to potential resection of pancreatic cancer is associated with grim prognosis, equivalent to M1 disease. We examine our experience with pancreatic cancer patients who underwent pre-resection lavage in an attempt to predict who would have malignant cells on peritoneal cytology. Methods: We conducted a retrospective review of patients undergoing pancreatectomy for pancreatic adenocarcinoma at a tertiary care institution from 1995 to 2019 and had pre-resection lavage performed. Demographic and clinicopathologic data were collected. Logistic regression models were used to identify predictors of positive cytology. Results: Three hundred ninety-nine patients underwent pancreatic resection and had lavage performed. Forty-three (10.8%) had positive peritoneal cytology. 
Those with positive cytology had a higher median Ca19-9 value than those with negative cytology at diagnosis (368.5 vs 200 U/mL, p = 0.007) and after neoadjuvant therapy (100.3 vs 43 U/mL, p = 0.013). After controlling for preoperative therapy received, an initial Ca19-9 greater than 1220 U/mL (OR 2.72, 95% CI 1.07-6.89, p = 0.035), locally advanced disease (OR 4.86, 95% CI 1.31-18.09, p = 0.018), and BMI ≥ 25 kg/m² (OR 2.67, 95% CI 1.04-6.97, p = 0.042) were associated with positive cytology in a multivariate logistic regression model. The associated ROC curve had an AUC of 0.7507, suggesting adequate discrimination of those with positive peritoneal cytology. Conclusion: Diagnostic laparoscopy remains an important adjunct to the workup, diagnosis, and staging of pancreatic adenocarcinoma. Patients with locally advanced disease, significantly elevated serum Ca19-9 at diagnosis, and BMI ≥ 25 kg/m² may be at higher risk for positive peritoneal cytology, regardless of whether neoadjuvant therapy is administered. abstract_id: PUBMED:24764208 Predicting factors for unresectability in patients with pancreatic ductal adenocarcinoma. Background: The aim of the present study was to identify the predicting factors for unresectability and to clarify who should receive precise evaluations for distant metastasis and locally advanced unresectability in patients with pancreatic ductal adenocarcinoma (PDAC). Methods: A total of 200 consecutive patients with PDAC who presented to the outpatient clinic between June 2009 and October 2012 were analyzed retrospectively. Clinical factors and the serum levels of carcinoembryonic antigen (CEA), carbohydrate antigen (CA) 19-9, DUPAN-2 (pancreatic cancer-associated antigen) and CA 125 were analyzed. Results: Of the 200 patients who were investigated for PDAC, 60 (30%) were initially considered unresectable (15 patients with locally advanced tumors, 45 patients with distant metastases).
Of the 136 (68%) patients who were surgically explored, 19 (9.5%) were detected to have minute metastases on laparotomy. A multivariate analysis revealed that tumor size (≥30 mm) and abnormalities in the levels of DUPAN-2 and CA 125 were independent predictors of unresectability (P = 0.002, 0.014, < 0.001, respectively). The patients with triple positive findings presented with the highest sensitivity (78.8%) for unresectability. Conclusions: Patients with triple positive findings for a tumor size ≥30 mm, abnormalities in the levels of DUPAN-2 and CA 125 should receive precise evaluations for unresectability. abstract_id: PUBMED:37750024 Efficacy of staging laparoscopy for resectable pancreatic cancer on imaging and the therapeutic effect of systemic chemotherapy for positive peritoneal cytology. Background: The frequency and prognosis of positive peritoneal washing cytology (CY1) in resectable pancreatic ductal adenocarcinoma (R-PDAC) remains unclear. The objective of this study was to identify the clinical implications of CY1 in R-PDAC and staging laparoscopy (SL). Methods: We retrospectively analyzed 115 consecutive patients with R-PDAC who underwent SL between 2018 and 2022. Patients with negative cytology (CY0) received radical surgery after neoadjuvant chemotherapy, while CY1 patients received systemic chemotherapy and were continuously evaluated for cytology. Results: Of the 115 patients, 84 had no distant metastatic factors, 22 had only CY1, and nine had distant metastasis. Multivariate logistic regression revealed that larger tumor size was an independent predictor of the presence of any distant metastatic factor (OR: 6.30, p = .002). Patients with CY1 showed a significantly better prognosis than patients with distant metastasis (MST: 24.6 vs. 18.9 months, p = .040). A total of 11 CY1 patients were successfully converted to CY-negative, and seven underwent conversion surgery. 
There was no significant difference in overall survival between patients with CY0 and those converted to CY-negative. Conclusion: SL is effective even for R-PDAC. The prognosis of CY1 patients converted to CY-negative is expected to be similar to that of CY0 patients. abstract_id: PUBMED:17175453 The influence of positive peritoneal cytology on survival in patients with pancreatic adenocarcinoma. The American Joint Committee on Cancer (AJCC) staging system for pancreatic adenocarcinoma classifies positive peritoneal cytology as stage IV disease. Data are limited with respect to the prevalence of positive peritoneal cytology and its influence on survival in patients with resectable, locally advanced, and metastatic disease. Four hundred sixty-two patients underwent staging laparoscopy for pancreatic adenocarcinoma between January 1995 and December 2005. Kaplan-Meier survival comparisons were performed to evaluate the significance of positive peritoneal cytology on overall survival (OS) in resected patients and patients with locally advanced and metastatic disease. Of the 462 patients, 47% (217/462) underwent a pancreatic resection. The 21% (95/462) with locally advanced disease and 32% (150/462) with metastatic disease did not undergo resection. Peritoneal cytology was positive in 17% (77/462), and was associated with stage of disease (metastatic, 37%; locally advanced, 11%; resected, 5%; P=0.01). Positive cytology was not associated with OS in patients with metastatic disease or locally advanced disease, but was in resected patients (median, 16 months vs. 8 months; P<0.001). Node-positive disease was present in 8 of 10 patients resected with positive cytology (2-year OS, 12% positive cytology vs. 23% negative; P=0.006). In this study, patients who underwent resection in the presence of positive peritoneal cytology and absence of other identifiable metastatic disease had survival similar to that of other patients with stage IV disease.
abstract_id: PUBMED:15055843 Peritoneal cytology in patients with potentially resectable adenocarcinoma of the pancreas. The prognostic significance of malignant cells in the peritoneal washings of patients with pancreatic adenocarcinoma remains poorly defined. Prior reports suggest that positive peritoneal cytology (PPC) is associated with advanced disease and reduced survival. To determine the prognostic value of PPC in patients with pancreatic cancer, we retrospectively reviewed our database between July 1987 and September 2002 and identified 168 patients who had undergone exploration for potentially resectable pancreatic cancer with peritoneal washings performed at the time of exploration. One hundred thirty-five patients underwent resection; 33 were considered unresectable. PPC was reported for 27 patients (16.1%): 13 (9.6%) in the resected and 14 (42.4%) in the unresected group. Median time to macroscopically detected recurrence in the resected group was not significantly different in the PPC versus negative peritoneal cytology (NPC) patients (10 vs 12 months, P = 0.46). Median overall survival of patients with PPC versus NPC approached, but did not reach, significance (15 vs 19 months, P = 0.055). Peritoneal cytology status was not associated with administration of chemoradiation, margin status, antecedent fine-needle aspiration, stage, or site of recurrence. These data suggest that the presence of malignant cells in peritoneal washings of patients with potentially resectable pancreatic adenocarcinoma should not preclude resection. Long-term survival may be achieved; therefore, aggressive treatment should be strongly considered. abstract_id: PUBMED:27840175 Prognostic significance of intraoperative peritoneal washing cytology for patients with potentially resectable pancreatic ductal adenocarcinoma.
Background: The prognostic significance of intraoperative peritoneal washing cytology (IPWC) in pancreatic ductal adenocarcinoma (PDAC) remains controversial, and the treatment strategy for PDAC patients with positive cytology has not been established. Objectives: The objective of this study was to evaluate the clinical significance of IPWC in PDAC patients. Methods: This study included a retrospective cohort of 166 patients with curatively resected PDAC who underwent IPWC. Results: Overall, 17 patients (10%) had positive cytology (CY+), and 149 (90%) patients were negative (CY-). Tumor location in the pancreatic body and/or tail and pancreatic anterior capsular invasion were independent predictors of a CY+ status (P = 0.012 and 0.041, respectively). The initial recurrence occurred at the peritoneum with a significantly higher frequency in CY+ patients (50%) than in CY- patients (12%) (P = 0.003). The median overall survival (OS) for CY+ patients was 12 months. The OS rates at 1 and 3 years were significantly higher for CY- patients (75.1% and 35.3%, respectively) versus CY+ patients (47.1% and 17.6%, respectively; P = 0.012). However, one CY+ patient survived for 66 months, and another two CY+ patients have survived for more than three years after surgery without evidence of peritoneal recurrence. In the multivariate analysis, the independent predictors of OS were a CY+ status, lymph node metastasis, and adjuvant chemotherapy. Conclusions: This study demonstrates that positive IPWC predicts early peritoneal recurrence and a poor prognosis for PDAC patients. However, a small but not insignificant subset of CY+ patients with PDAC may avoid peritoneal carcinomatosis. abstract_id: PUBMED:24975140 Implications of peritoneal lavage cytology in resectable left-sided pancreatic cancer. Purpose: The aim of this study was to determine the utility of the peritoneal lavage cytology findings in predicting the clinical outcomes of resectable left-sided pancreatic cancer. 
Methods: Peritoneal lavage samples were collected from 39 consecutive patients who underwent surgery for left-sided pancreatic cancer between January 2000 and December 2010. We analyzed the correlations between the peritoneal lavage cytology findings and the clinical outcomes. Results: Five patients (12.8%) had malignant cells in the cytology samples (positive cytology). This occurred more frequently in patients with vascular invasion (P = 0.008). Positive cytology was significantly associated with lower overall (P = 0.0007) and recurrence-free (P < 0.0001) survival rates. Positive cytology was an independent prognostic factor for recurrence (P = 0.022); it was closely associated with local recurrence, peritoneal recurrence and distant metastasis. Conclusion: Surgeons should carefully consider the intraoperative diagnosis of peritoneal lavage cytology in patients with resectable left-sided pancreatic cancers, because patients with positive cytology are likely to experience recurrence. Answer: Yes, peritoneal cytology is a predictor of unresectability in pancreatic carcinoma. Studies have shown that the presence of positive peritoneal cytology (PPC) is highly specific in predicting unresectability of pancreatic adenocarcinoma and is associated with advanced disease and decreased survival (PUBMED:10195727). The addition of diagnostic laparoscopy combined with peritoneal cytology to the staging protocol for pancreatic carcinoma can decrease operative lethality and avoid unnecessary laparotomies (PUBMED:15532836). Furthermore, PPC has been found to be associated with stage IV disease, and in resected patients, it is associated with poorer overall survival compared to those with negative peritoneal cytology (NPC) (PUBMED:17175453). Moreover, other studies have identified factors that can predict positive peritoneal cytology, such as a higher median Ca19-9 value, locally advanced disease, and a higher body mass index (BMI ≥ 25 kg/m^2) (PUBMED:33389074).
Additionally, tumor size, abnormalities in the levels of DUPAN-2 and CA 125, and elevated serum CA724 levels have been identified as independent predictors of unresectability (PUBMED:24764208; PUBMED:26823852). The efficacy of staging laparoscopy for resectable pancreatic cancer on imaging has also been demonstrated, and systemic chemotherapy for patients with positive peritoneal cytology can lead to conversion to CY-negative status, potentially improving prognosis (PUBMED:37750024). However, it is also noted that a small subset of patients with positive peritoneal cytology may achieve long-term survival, suggesting that aggressive treatment should be considered (PUBMED:27840175; PUBMED:15055843). In summary, peritoneal cytology serves as a valuable predictor of unresectability in pancreatic carcinoma, and its assessment can guide clinical decision-making and treatment strategies.
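Several abstracts above summarize outcomes as Kaplan-Meier survival estimates read off at fixed time points (for example, the 1- and 3-year OS rates for CY- versus CY+ patients). For readers unfamiliar with the method, here is a minimal sketch of the product-limit estimator on toy, hypothetical follow-up data; none of the numbers below come from the cited studies:

```python
# Minimal Kaplan-Meier product-limit estimator.
# Toy data only -- not taken from any of the studies cited above.
def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death observed, 0 = censored.
    Returns (time, survival) steps at each observed event time."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            surv *= 1 - deaths / at_risk              # product-limit update
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # drop deaths and censored
    return curve

# Six hypothetical patients; events[i] == 0 means censored at times[i].
times = [3, 5, 5, 8, 12, 12]
events = [1, 1, 0, 1, 0, 0]
for t, s in kaplan_meier(times, events):
    print(f"S({t}) = {s:.3f}")
```

The survival probability changes only at event times; censored patients leave the risk set without forcing a step, which is why published reports can quote, e.g., 1- and 3-year rates directly from the step function.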
Instruction: Prostate cancer follow-up needs: do patients and professionals agree? Abstracts: abstract_id: PUBMED:24820509 Prostate cancer follow-up needs: do patients and professionals agree? Background: With recent changes in the economy and the drive for cost-effective health care, current practices in the NHS need to be reviewed and revised. In light of this, the aim of this study is to investigate the perceived benefits of routine follow-up of patients with prostate cancer and to examine perceptions of prostate cancer patients and health professionals about who should follow-up patients and where and when this should take place. Methods: A service evaluation was carried out on a sample of patients receiving follow-up assessment following radical treatment for prostate cancer and health professionals in a specialist cancer urology unit in an acute NHS trust in south-east England. Data were collected through patient (n=47) and health professional (n=17) questionnaires and through a group discussion with a sample of patients (n=52). Results: There was agreement between patients and professionals that the main purpose of follow-up consultation was to review the prostate-specific antigen (PSA) blood test and assess urinary symptoms. The majority of patients and professionals wanted follow-up in the hospital with face-to-face consultation. There was general agreement that assessments should be carried out every six months. An overwhelming majority of patients (95%) felt that it was important to have easy access to the urology cancer nurses and doctors. Conclusion: Patients want to maintain the traditional model of hospital based follow-up in specialist urology cancer clinics. However, this approach is not economically sustainable. If this is to change we need to work with our patients to develop a service that meets their needs. 
For a new approach to follow-up to meet the needs of patients, they need to be confident that they have access to expert advice and support, and are assured of referral back to the urology team if required. Nurses are ideally placed to play a key role in the development of such a service. abstract_id: PUBMED:32277004 Baldness and Risk of Prostate Cancer in the Health Professionals Follow-up Study. Background: The association between male pattern baldness and prostate cancer has been inconsistent. We prospectively investigated the association between baldness at age 45 and prostate cancer risk in the Health Professionals Follow-up Study (HPFS), focusing on clinical and molecular markers. Methods: Baldness was self-reported on the 1992 questionnaire using the modified Norwood-Hamilton scale prior to diagnosis. We estimated HRs between baldness and prostate cancer risk among 36,760 men, with follow-up through 2014. We also investigated whether baldness was associated with prostate cancer defined by tumor protein expression of androgen receptor and the presence of the TMPRSS2:ERG fusion. Results: During 22 years, 5,157 prostate cancer cases were identified. Fifty-six percent of the men had either frontal or vertex baldness. No significant associations were found between baldness and prostate cancer risk. Among men younger than 60 years, there was a statistically significant association between frontal and severe vertex baldness and overall prostate cancer (HR: 1.74; 95% confidence interval: 1.23-2.48). Baldness was not significantly associated with expression of molecular subtypes defined by AR and TMPRSS2:ERG IHC of prostate tumors. Conclusions: This study showed no association between baldness at age 45 and prostate cancer risk, overall or for clinical or molecular markers. The association between baldness and overall prostate cancer among younger men is intriguing, but caution is warranted when interpreting this finding. 
Impact: The null findings from this large cohort study, together with previous literature's inconclusive findings across baldness patterns, suggest that baldness is not a consistent biomarker for prostate cancer risk or progression. abstract_id: PUBMED:25648070 Asthma and risk of lethal prostate cancer in the Health Professionals Follow-Up Study. Inflammation, and more generally, the immune response are thought to influence the development of prostate cancer. To determine the components of the immune response that are potentially contributory, we prospectively evaluated the association of immune-mediated conditions, asthma and hayfever, with lethal prostate cancer risk in the Health Professionals Follow-up Study. We included 47,880 men aged 40-75 years with no prior cancer diagnosis. On the baseline questionnaire in 1986, the men reported diagnoses of asthma and hayfever and year of onset. On the follow-up questionnaires, they reported new asthma and prostate cancer diagnoses. We used Cox proportional hazards regression to estimate relative risks (RRs). In total, 9.2% reported ever having been diagnosed with asthma. In all, 25.3% reported a hayfever diagnosis at baseline. During 995,176 person-years of follow-up by 2012, we confirmed 798 lethal prostate cancer cases (diagnosed with distant metastases, progressed to distant metastasis or died of prostate cancer [N = 625]). Ever having a diagnosis of asthma was inversely associated with risk of lethal (RR = 0.71, 95% confidence interval [CI] = 0.51-1.00) and fatal (RR = 0.64, 95% CI = 0.42-0.96) disease. Hayfever with onset in the distant past was possibly weakly positively associated with risk of lethal (RR = 1.10, 95% CI = 0.92-1.33) and fatal (RR = 1.12, 95% CI = 0.91-1.37) disease. Men who were ever diagnosed with asthma were less likely to develop lethal and fatal prostate cancer. Our findings may lead to testable hypotheses about specific immune profiles in the etiology of lethal prostate cancer. 
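The cohort abstracts above report Cox model hazard ratios with 95% confidence intervals. Because the interval is symmetric on the log scale, the standard error of log(HR) and an approximate z-statistic can be recovered from the published bounds. A back-of-envelope sketch using the asthma figures quoted above (RR 0.71, 95% CI 0.51-1.00):

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Standard error of log(HR), assuming normality on the log scale:
    a 95% CI spans 2 * 1.96 standard errors around log(HR)."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

# Asthma vs. lethal prostate cancer, as quoted above: RR 0.71 (95% CI 0.51-1.00).
hr, lo, hi = 0.71, 0.51, 1.00
se = se_from_ci(lo, hi)
z_stat = math.log(hr) / se
print(f"SE(log HR) = {se:.3f}, z = {z_stat:.2f}")
# The upper bound sitting exactly at 1.00 corresponds to z close to -1.96,
# i.e. a result on the edge of significance at alpha = 0.05.
```

This kind of back-calculation is only approximate (published bounds are rounded), but it explains why an interval whose upper limit touches 1.00 reads as borderline significant.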
abstract_id: PUBMED:21693683 Views of health professionals on the role of primary care in the follow-up of men with prostate cancer. Background: Follow-up care for prostate cancer has traditionally been led by secondary care in hospital out-patient clinics. As the number of men with prostate cancer increases and secondary care resources face pressure, alternative follow-up models are being sought. Current National Institute for Health and Clinical Excellence guidance recommends follow-up outside the hospital setting for patients who are stable 2 years following radical treatment and for those undergoing 'watchful waiting'. Objective: To describe current practice in a sample of relevant health care professionals and to seek their views on the role of primary care in prostate cancer follow-up. Methods: Semi-structured telephone interviews with 38 UK health care professionals, from both secondary and primary care. Transcripts were analysed using the constant comparative method. Results: There are marked variations in current follow-up practice around the country, with hospital-based follow-up ranging from 6 months to lifetime. The predominant, although not universal, view expressed was that there is both scope and support for primary care to play a greater role, particularly for men with stable disease. This was qualified by the need for supporting education, including guidance on interpretation of prostate-specific antigen values, introduction of robust follow-up systems in primary care, easy access back into secondary (hospital) care, a mechanism for ensuring follow-up data can still be collected for audit purposes and appropriate resourcing. Conclusions: If primary care is to play a significant role in providing high-quality follow-up care for men with prostate cancer, then steps need to be taken to address the barriers to increased primary care involvement identified by this study. 
abstract_id: PUBMED:19058180 Diabetes mellitus and risk of prostate cancer in the health professionals follow-up study. History of diabetes may be associated with decreased prostate cancer (PCa) risk. Published studies have not always accounted for time since diabetes diagnosis or confounding and effect modification by lifestyle factors. The authors investigated the relationship between diabetes and PCa risk in men in the Health Professionals Follow-Up Study from 1986 to 2004. During that time, 4,511 new PCa cases were identified. Multivariate hazard ratios (HR) were estimated using Cox regression. The HR of PCa comparing men with vs. without diabetes was 0.83 (95% confidence interval (CI): 0.74, 0.94). PCa risk was not reduced in the first year after diabetes diagnosis (HR: 1.30, CI: 0.97, 1.72), was lower for men diagnosed for 1-6 years (HR: 0.82, CI: 0.66, 1.02), and was even lower for men who had been diagnosed for 6-15 (HR: 0.75, CI: 0.61, 0.93) or >15 years (HR: 0.78, CI: 0.63, 0.96). Reduced PCa risk was stronger in men diagnosed before 1994 (pre-PSA era) vs. after 1994. The authors also demonstrated that obese and diabetic men had a lower HR for PCa than those who were either not obese and diabetic or obese and non-diabetic. Results are consistent with the hypothesis that diabetes is associated with reduced PCa risk. Potential biological mechanisms are discussed. abstract_id: PUBMED:37462694 Association between Outdoor Light at Night and Prostate Cancer in the Health Professionals Follow-up Study. Background: Circadian disruption is a potential risk factor for advanced prostate cancer, and light at night (LAN) exposure may disrupt circadian rhythms. We evaluated whether outdoor LAN increases the risk of prostate cancer. Methods: We prospectively followed 49,148 participants in the Health Professionals Follow-up Study from 1986 through 2016.
We estimated baseline and cumulative time-varying outdoor LAN with ∼1 km2 resolution using data from the US Defense Meteorological Satellite Program's Operational Linescan System, which was assigned to participants' geocoded addresses. Participants reside in all 50 U.S. states and reported a work or home address. We used multivariable Cox models to estimate HRs and 95% confidence intervals (CI) for the association between outdoor LAN and risk of overall (7,175 cases) and fatal (915 cases) prostate cancer adjusting for individual and contextual factors. Results: There was no association between the interquartile range increase in cumulative LAN and total (HR, 1.02; 95% CI, 0.98-1.06) or fatal (HR, 1.05; 95% CI, 0.96-1.15) prostate cancer in adjusted models. However, there was a positive association between baseline LAN and total prostate cancer among non-movers (HR, 1.06; 95% CI, 1.00-1.14) including among highly screened participants (HR, 1.11; 95% CI, 1.01-1.23). Conclusions: There was a suggestive positive association between baseline outdoor LAN and total prostate cancer. Additional studies with different measures of outdoor LAN and in more diverse populations are necessary. Impact: To our knowledge, this is the first longitudinal cohort study exploring the relationship between outdoor LAN and prostate cancer. abstract_id: PUBMED:31026211 Alcohol Intake and Risk of Lethal Prostate Cancer in the Health Professionals Follow-Up Study. Purpose: It is unknown whether alcohol intake is associated with the risk of lethal (metastatic or fatal) prostate cancer. We examine (1) whether alcohol intake among men at risk of prostate cancer is associated with diagnosis of lethal prostate cancer and (2) whether intake among men with nonmetastatic prostate cancer is associated with metastasis or death. Methods: This prospective cohort study uses the Health Professionals Follow-Up Study (1986 to 2012). 
Our analysis of alcohol intake among men at risk of prostate cancer included 47,568 cancer-free men. Our analysis of alcohol intake among men with prostate cancer was restricted to 5,182 men diagnosed with nonmetastatic prostate cancer during follow-up. We examine the association of total alcohol, red and white wine, beer, and liquor with lethal prostate cancer and death. Multivariate Cox proportional hazards regression estimated hazard ratios (HRs) and 95% CIs. Results: Alcohol drinkers had a lower risk of lethal prostate cancer (any v none: HR, 0.84 [95% CI, 0.71 to 0.99]) without a dose-response relationship. Total alcohol intake among patients with prostate cancer was not associated with progression to lethal prostate cancer (any v none: HR, 0.99 [95% CI, 0.57 to 1.72]), whereas moderate red wine intake was associated with a lower risk (any v none: HR, 0.50 [95% CI, 0.29 to 0.86]; P trend = .05). Compared with none, 15 to 30 g/d of total alcohol after prostate cancer diagnosis was associated with a lower risk of death (HR, 0.71 [95% CI, 0.50 to 1.00]), as was red wine (any v none: HR, 0.74 [95% CI, 0.57 to 0.97]; P trend = .007). Conclusion: Cancer-free men who consumed alcohol had a slightly lower risk of lethal prostate cancer compared with abstainers. Among men with prostate cancer, red wine was associated with a lower risk of progression to lethal disease. These observed associations merit additional study but provide assurance that moderate alcohol consumption is safe for patients with prostate cancer. abstract_id: PUBMED:34707389 Screening Practices, Knowledge and Adherence Among Health Care Professionals at a Tertiary Care Hospital. Introduction: Screening, a routine procedure done on individuals with or without disease, results in the early detection of disease. The aim of this study was to assess healthcare professionals' (HCPs) level of knowledge related to and the adherence to screening. 
Methods: A survey was conducted among HCPs, using a self-administered questionnaire. Knowledge was defined based on the correct or incorrect responses to the questions. Adherence to screening was considered if a test was done at least once in the past year. Results: Of the 379 participants, 61% were nurses, 34% physicians, and 5% pharmacists. The majority (68.78%) were female. The average age of pharmacists was 29.17±7.09, physicians 35.57±10.08, and nurses 35.46±8.63 years. The knowledge related to breast cancer screening ranged between 50% and 57%, and of a Pap smear, 41-54%. 94% of nurses and 90% of pharmacists had recorded an incorrect response to the required age of colon cancer screening. The overall screening adherence to diabetes was 46%, hypertension 68%, liver profile 43%, lipid profile 50%, breast cancer 10.38%, Pap smear 26%, prostate cancer 33%, and colon cancer 2.37%. HCPs aged ≥45 years had good adherence to diabetes screening. Pharmacists (88%) had the highest level of adherence to hypertension screening. Female HCPs poorly adhered to breast (38%) and cervical cancer (26%) screening. Only a third (33%) of males, aged >50 years, were screened for prostate cancer. Among HCPs aged ≥50 years (n=32), only three were screened for colon cancer. Conclusion: Despite the increased incidence of diabetes, breast and colon cancer in Saudi Arabia, HCPs displayed poor knowledge related to screening. The adherence to diabetes screening was good. However, HCPs in a high-risk group displayed poor adherence to screening, specifically for breast, cervical and colon cancer. The medical and cancer screening guidelines should be made available to all HCPs regardless of their specialty. abstract_id: PUBMED:26189767 Postdiagnostic Statin Use and the Risk of Lethal Prostate Cancer in the Health Professionals Follow-up Study.
Background: Observational studies suggest potential chemopreventive benefits of statins on prostate cancer outcomes, but data on the impact of postdiagnostic use are sparse. Methods: We examined the association of postdiagnostic statin use and risk of lethal prostate cancer (metastases or prostate cancer death, N = 242) among 3,949 men diagnosed with localized prostate cancer from the Health Professionals Follow-Up Study between 1992 and 2008 and followed through 2010 (33,302 person years). We used Cox proportional hazards regression models to estimate relative risks and 95% confidence intervals (CI), adjusting for age, time period, time from diagnosis to questionnaire, body mass index, vigorous physical activity, smoking, aspirin use, clinical stage, PSA at diagnosis, Gleason score, primary treatment, and comorbidities. Results: We found no statistically significant association between postdiagnostic current use of statins or duration of statin usage and the outcome of lethal prostate cancer [N = 242 cases; multivariate HR = 0.97 (95% CI, 0.72-1.31) for current use yes/no; HR = 0.85 (95% CI, 0.59-1.22) for 1 to 5 years of use, 0.96 (95% CI, 0.66-1.38) for 6+ years of use vs. never use]. Conclusions: We observed little evidence that statin usage after diagnosis of localized prostate cancer reduces risk of progression to metastatic disease or prostate cancer-specific death. Impact: These results do not support statins as a chemopreventive agent for prostate cancer progression. abstract_id: PUBMED:19858401 Prospective study of prostate tumor angiogenesis and cancer-specific mortality in the health professionals follow-up study. Purpose: Tumor growth requires the development of independent vascular networks that are often primitive in morphology and function. We examined whether microvessel morphology contributes to the considerable biologic heterogeneity of prostate cancer. 
Methods: We evaluated microvessel morphology as a predictor of prostate cancer mortality among 572 men in the Health Professionals Follow-Up Study diagnosed with cancer during 1986 to 2000. We immunostained prostatectomy tumor block sections for endothelial marker CD34 and assessed microvessel density, vessel size (area and diameter), and irregularity of vessel lumen using image analysis. Proportional hazards models were used to assess microvessel density and morphology in relation to lethal prostate cancer. Results: Poorly differentiated tumors exhibited greater microvessel density, greater irregularity of the vessel lumen, and smaller vessels. During 20 years of follow-up, 44 men developed bone metastases or died of cancer. Men with tumors exhibiting the smallest vessel diameter, based on quartiles, were 6.0 times more likely (95% CI, 1.8 to 20.0) to develop lethal prostate cancer. Men with the most irregularly shaped vessels were 17.1 times more likely (95% CI, 2.3 to 128) to develop lethal disease. Adjusting for Gleason grade and prostate-specific antigen levels did not qualitatively change the results. Microvessel density was not linked to cancer-specific mortality after adjusting for clinical factors. Conclusion: Aggressive tumors form vessels that are primitive in morphology and function, with consequences for metastases. Vascular size and irregularity reflect the angiogenic potential of prostate cancer and may serve as biomarkers to predict prostate cancer mortality several years after diagnosis. Answer: According to the study conducted on a sample of patients receiving follow-up assessment following radical treatment for prostate cancer and health professionals in a specialist cancer urology unit in an acute NHS trust in south-east England, there was agreement between patients and professionals that the main purpose of follow-up consultation was to review the prostate-specific antigen (PSA) blood test and assess urinary symptoms. 
The majority of both groups wanted follow-up in the hospital with face-to-face consultation, and there was general agreement that assessments should be carried out every six months. An overwhelming majority of patients (95%) felt that it was important to have easy access to the urology cancer nurses and doctors. However, while patients want to maintain the traditional model of hospital-based follow-up in specialist urology cancer clinics, this approach is not economically sustainable. For a new approach to follow-up to meet the needs of patients, they need to be confident that they have access to expert advice and support, and are assured of referral back to the urology team if required. Nurses are ideally placed to play a key role in the development of such a service (PUBMED:24820509). In contrast, another study highlighted that there are marked variations in current follow-up practice around the country, with hospital-based follow-up ranging from 6 months to lifetime. The predominant view expressed was that there is both scope and support for primary care to play a greater role, particularly for men with stable disease. This was qualified by the need for supporting education, including guidance on interpretation of PSA values, introduction of robust follow-up systems in primary care, easy access back into secondary (hospital) care, a mechanism for ensuring follow-up data can still be collected for audit purposes, and appropriate resourcing (PUBMED:21693683). Therefore, while there is agreement on the importance and components of follow-up care, there is a need for a sustainable model that balances patient needs with economic considerations, and this may involve a greater role for primary care with the necessary support and resources.
Instruction: Do elderly patients benefit from enrollment into Phase I Trials? Abstracts: abstract_id: PUBMED:25779876 Do elderly patients benefit from enrollment into Phase I Trials? Background: Despite the significant burden of cancer in the older population, their outcomes in the context of phase I studies have been poorly studied. While the Royal Marsden Hospital (RMH) prognostic score (albumin, lactate dehydrogenase [LDH], number of metastatic sites) is validated in this setting, its utility among the elderly is uncertain. Methods: A total of 296 consecutive patients who were treated in 20 phase I trials from 2005 to 2012 in our unit were analysed. Clinical characteristics and outcomes between young (<65, n=202) and older patients (≥65, n=94) were compared. Results: The median age of the older patients was 69 years (65-84) and 71% were males. Although elderly patients had more co-morbidities and lower albumin levels at baseline, there was no significant difference in survival (8.8 months versus 9.9 months, p=0.68) or clinical benefit rate (69% versus 56%, p=0.07) compared to younger patients after median follow-up of 7.1 months (0.36-50.6 months). Age (p=0.23) did not have any bearing on occurrence of grade 3/4 toxicities. Twenty-six percent of elderly patients experienced grade 3/4 toxicities. The prognostic factors for overall survival (OS) identified in multivariate analysis were prior lines of chemotherapy (0-2 versus ≥3), baseline sodium levels (≥135 versus <135 mmol/L) and platelet levels (≤400 versus >400×10⁹). We developed a risk nomogram based on the factors prognostic of survival with concordance index of 0.65. The RMH model yielded a concordance index of 0.635. Conclusion: Elderly patients enrolled into phase I clinical trials had similar survival outcomes and toxicity profiles compared to younger patients. Risk scoring models to aid patient selection need further clarification.
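The abstract above validates its risk nomogram with a concordance index (0.65, versus 0.635 for the RMH model). A minimal sketch of Harrell's C for right-censored survival data, on hypothetical toy values rather than the study's data, shows what that statistic measures:

```python
def concordance_index(times, events, scores):
    """Harrell's C for right-censored survival data.
    A pair (i, j) is usable when patient i has an observed event before
    patient j's time; it is concordant when the higher risk score belongs
    to the patient who died earlier. Score ties count 0.5."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if scores[i] > scores[j]:
                    concordant += 1.0
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / usable

# Hypothetical cohort: times in months, events (1 = death, 0 = censored),
# and model risk scores (higher = predicted worse outcome).
times = [2, 4, 6, 8, 10]
events = [1, 1, 0, 1, 0]
scores = [0.9, 0.3, 0.6, 0.7, 0.1]
print(concordance_index(times, events, scores))
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so a concordance index around 0.65 indicates modest discriminative ability.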
abstract_id: PUBMED:25077043 Promoting Factors and Barriers to Participation in Early Phase Clinical Trials: Patients' Perspectives. Background: Inclusion of minorities in clinical research is an essential step to develop novel cancer treatments, improve health care overall, understand potential differences in pharmacogenomics and address minorities' disproportionate cancer burden. However, Latinos and other minority groups continue to be critically underrepresented, particularly in early-phase clinical trials (EPCTs). The objective of the present study was to explore barriers and promoting factors influencing patients' decisions to enroll or not in early phase clinical trials (EPCTs) and identify areas for intervention to increase minority enrollment into clinical research. Methods: An interviewer-administered survey was conducted with 100 cancer patients in the predominantly Latino region of South Texas. Exploratory factor analysis was conducted to identify underlying dimensions, and multiple logistic regression assessed significant factors that promote or deter patients enrollment to EPCTs. In addition, a separate subgroup mean analysis assessed differences by enrollment status and race/ethnicity. Results: For one standard deviation increase in the importance given to the possibility of symptoms improvement, the predicted odds of refusing enrollment were 3.20 times greater (OR=3.20, 95% CI=1.06-9.71, p = 0.040). Regarding barriers, among patients who considered fear/uncertainty of the new treatment a deterrent to enrollment, one standard deviation increase in agreement with these barriers was associated with a 3.60-fold increase (OR=3.60, 95% CI=1.30-9.97, p = 0.014) in the odds of not being enrolled in an EPCT. In contrast, non-enrolled patients were less likely (OR=0.14, 95% CI=0.05-0.44, p = 0.001) to consider fatalistic beliefs as an important barrier.
Conclusion: This study, one of the first to identify South Texas patients' barriers to enrolling in EPCTs, highlights potential focal areas to increase participation of both minority and non-minority patients in clinical research. Culturally tailored interventions promoting patient-centered care and bilingual, culturally competent study teams could solve common barriers and enhance Latinos' likelihood of joining clinical trials. These interventions may simultaneously increase opportunities to involve patients and physicians in clinical trials, while ensuring the benefits of participation are equitably distributed to all patients. abstract_id: PUBMED:37382190 Allocation of authorship and patient enrollment among global clinical trials in oncology. Background: Oncology randomized controlled trials (RCTs) are increasingly global in scope. Whether authorship is equitably shared between investigators from high-income countries (HIC) and low-middle/upper-middle income countries (LMIC/UMIC) is not well described. The authors conducted this study to understand the allocation of authorship and patient enrollment across all oncology RCTs conducted globally. Methods: A cross-sectional retrospective cohort study of phase 3 RCTs (published 2014-2017) that were led by investigators in HIC and recruited patients in LMIC/UMIC. Findings: During 2014-2017, 694 oncology RCTs were published; 636 (92%) were led by investigators from HIC. Among these HIC-led trials, 186 (29%) enrolled patients in LMIC/UMIC. One-third (33%, 62 of 186) of RCTs had no authors from LMIC/UMIC. Forty percent (74 of 186) of RCTs reported patient enrollment by country; in 50% (37 of 74) of these trials, LMIC/UMIC contributed <15% of patients. The relationship between enrollment and authorship proportion is very strong and is comparable between LMIC/UMIC and HIC (Spearman's ρ LMIC/UMIC 0.824, p < .001; HIC 0.823, p < .001).
Among the 74 trials that report country enrollment, 34% (25 of 74) have no authors from LMIC/UMIC. Conclusions: Among trials that enroll patients in HIC and LMIC/UMIC, authorship appears to be proportional to patient enrollment. This finding is limited by the fact that more than half of RCTs do not report enrollment by country. Moreover, there are important outliers as a significant proportion of RCTs had no authors from LMIC/UMIC despite enrolling patients in these countries. The findings in this study reflect a complex global RCT ecosystem that still underserves cancer control outside high-income settings. abstract_id: PUBMED:24153239 Barriers to study enrollment in patients with advanced cancer referred to a phase I clinical trials unit. Unlabelled: We conducted this retrospective study to identify reasons that patients referred to a phase I clinical trial failed to enroll or delayed enrollment onto the trial. Materials And Methods: Outcome analyses were conducted independently on data collected from electronic medical records of two sets of consecutive patients referred to a phase I clinical trial facility at MD Anderson Cancer Center. Data from the first set of 300 patients were used to determine relevant variables affecting enrollment; data from the second set of 957 patients were then analyzed for these variables. Results: Results from the two sets of patients were similar. Approximately 55% of patients were enrolled in a phase I trial. Patients referred from within MD Anderson were more likely to be enrolled than patients seen originally outside the institution (p = .006); black patients were more likely than white patients to enroll (69% vs. 43%; p = .04). The median interval from the initial visit to initiation of treatments was 19 days. Major reasons for failure to enroll included failure to return to the clinic (36%), opting for treatment in another clinic (17%), hospice referral (11%), early death (10%), and lack of financial clearance (5%). 
Treatment was delayed for three weeks or more in 250 patients; in 85 patients (34%), the delay was caused by financial and insurance issues. Conclusion: Failure to return to the clinic, pursuit of other therapy, and rapid deterioration were the major reasons for failure to enroll; lengthy financial clearance was the most common reason for delayed enrollment onto a phase I trial. abstract_id: PUBMED:32974073 Phase 1 Clinical Trials in the Elderly: Enrollment Challenges. Cancer mostly affects older adults. Despite the increased incidence of cancer among older adults, they are underrepresented in oncology clinical trials. Such trials can provide patients with early access to promising interventions. Clinical trials are changing the future of cancer treatments. This article provides advanced practitioners in oncology an understanding of potential barriers to enrollment of older adults in oncology clinical trials. This article also summarizes the literature comparing tolerance, toxicity, and clinical benefit in the elderly compared with the nonelderly. Enrollment of elderly patients is essential. It is therefore important to create strategies to increase their enrollment. Advanced practitioners, along with other members of the health-care team, play an important role to advocate for elderly patients in phase I clinical trials. abstract_id: PUBMED:26383917 Enrollment and Stopping Rules for Managing Toxicity Requiring Long Follow-Up in Phase II Oncology Trials. Monitoring of toxicity is often conducted in Phase II trials in oncology to avoid an excessive number of toxicities if the wrong dose is chosen for Phase II. Existing stopping rules for toxicity use information from patients who have already completed follow-up. We describe a stopping rule that uses all available data to determine whether to stop for toxicity or not when follow-up for toxicity is long. 
We propose an enrollment rule that prescribes the maximum number of patients that may be enrolled at any given point in the trial. abstract_id: PUBMED:35533427 Access to early-phase clinical trials in older patients with cancer in France: the EGALICAN-2 study. Background: Access to clinical trials and especially early-phase trials (ECT) is an important issue in geriatric oncology. Because the incidence of most cancers increases with age, cancer can be considered an age-related disease, and new drugs should also be evaluated in older patients to assess their safety and efficacy. The EGALICAN-2 study was primarily designed to identify social and/or regional inequalities regarding access to ECT. We focused on the factors of inequalities in access to ECT in older patients. Patients And Methods: During a 1-year period (2015-2016), a survey was conducted in 11 early-phase units certified by the French National Cancer Institute. Results: A total of 1319 patients were included in the analyses: 1086 patients (82.3%) were <70 years and 233 patients (17.7%) were >70 years. The most common tumor types at referral in older patients were gastrointestinal (19.3%), hematological (19.3%), and thoracic tumors (18.0%). Most patients referred to the phase I unit had signed informed consent and the rate was similar across age (92.7% in younger patients versus 90.6% in older patients; P = 0.266). The rate of screening failure was also similar across age (28.5% in younger patients versus 24.3% in older patients; P = 0.219). Finally, in older patients, univariate analyses showed that initial care received in the hospital having a phase I unit was statistically associated with first study drug administration (odds ratio 0.49, 90% confidence interval 0.27-0.88; P = 0.045). Conclusions: Older patients are underrepresented in early clinical trials, with 17.7% of patients aged ≥70 years compared with the number of new cases of cancer in France (50%).
However, when invited to participate, older patients were willing to sign informed consent. abstract_id: PUBMED:34042623 i2b2 to Optimize Patients Enrollment. The i2b2 data warehouse can be a useful tool to support the enrollment phase of clinical studies. The aim of this work is to evaluate its performance on two clinical trials. We also developed an i2b2 extension to help in suggesting eligible patients for a study. The work showed good results both in the ability to implement inclusion/exclusion criteria and in the number of identified patients actually enrolled, as well as the high number of patients suggested as potentially enrollable. abstract_id: PUBMED:29605046 Consideration of older patients for enrollment in phase 1 clinical trials: Exploring treatment related toxicities and outcomes. Objectives: Age imposes a disparity in the treatment of and outcomes with gynecologic cancer. Older patients are underrepresented in primary treatment trials, but little is known about their ability to withstand trial-based treatment for recurrent or refractory disease. This study sought to examine treatment-related toxicities and outcomes of older versus younger patients participating in phase 1 clinical trials. Methods: A retrospective analysis of patients enrolled in phase 1 clinical trials for gynecologic malignancies from 2010 to 2016 was performed. Demographic and clinicopathologic data were abstracted. Toxicities were defined as either grade III or IV by CTCAE criteria. Best response was calculated using RECIST criteria. Associations between categorical variables were determined using Fisher's exact test and continuous variables using Wilcoxon rank sum test. Survival was estimated using the Kaplan-Meier method. Results: 237 patients were included with 22% (n=51) comprising the older cohort (≥70 years). The vast majority (98%) were treated for recurrent disease.
Older patients incurred similar grade III/IV hematologic (21% vs 16%, p=0.38) and non-hematologic toxicities (26% vs 29%, p=0.64). Older patients discontinued treatment due to toxicity only 8% of the time. Median survival was 13.0 and 10.3 months in the <70 and ≥70 groups, respectively (p=0.35). 63% of patients ≥70 achieved clinical benefit. Conclusions: Although historically older patients have not been routinely considered for enrollment in phase 1 trials, our data demonstrate toxicity profiles similar to those of younger patients and a 63% clinical benefit rate. Thus, with careful selection, patients ≥70 should be considered when facing recurrent or refractory gynecologic cancer. abstract_id: PUBMED:37661408 Comparison of the Speed of Enrollment for Clinical Trials Conducted by Japanese and Global Pharmaceutical Companies. Since the establishment of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), many countries in the world have rapidly improved their clinical trial performance, and the era has come to compare the clinical trial performance of each country. Japan's clinical trials are considered to be of excellent quality, but costly and slow. In this study, we examined the speed of enrollment in clinical trials. We surveyed clinical trials from January 1, 2010, to December 31, 2019, covering the top 10 pharmaceutical companies in each global sales ranking (Global 10) and the Japanese sales ranking (Japan 10). Clinical trial data were obtained from ClinicalTrials.gov, a clinical trial registration information database, and the speed of participant enrollment (cases/month) was compared for each phase of the trials. The number of clinical trials conducted during the 10 years was 8938 trials for Global 10 and 1439 trials for Japan 10. Comparing the speed of participant enrollment by phase, Japan 10 was significantly faster in phase 1 for both healthy subjects and oncology patients.
[Japan 10: Global 10; 15.1 : 12.0 cases/month (healthy subjects) and 5.5 : 1.8 cases/month (oncology), respectively; p < 0.001]. Global 10 was also significantly faster in phase 3 [Japan 10: Global 10; 12.4 : 36.9 cases/month, p < 0.001]. No significant difference was observed in phase 2 and phase 4. These findings suggest that the speed of enrollment may differ by phase between global companies and Japanese domestic companies. Answer: Elderly patients do benefit from enrollment into Phase I clinical trials. According to the study by PUBMED:25779876, elderly patients enrolled in Phase I clinical trials had similar survival outcomes and toxicity profiles compared to younger patients. This suggests that age alone should not be a barrier to enrollment in early-phase clinical trials. Additionally, the study by PUBMED:32974073 emphasizes the importance of enrolling elderly patients in clinical trials, as cancer predominantly affects older adults, and they are often underrepresented in oncology clinical trials. The study also highlights the need for strategies to increase their enrollment and the role of advanced practitioners in advocating for elderly patients in Phase I clinical trials. Furthermore, the study by PUBMED:29605046 supports the notion that older patients can withstand trial-based treatment for recurrent or refractory disease, showing similar toxicity profiles to younger patients and a 63% clinical benefit rate. This indicates that with careful selection, older patients can be considered for enrollment in Phase I trials even when facing recurrent or refractory gynecologic cancer. Overall, the evidence suggests that elderly patients do benefit from enrollment into Phase I trials, and efforts should be made to include them in such studies to ensure they have access to potential new treatments and to better understand the safety and efficacy of these treatments in this population.
Instruction: Sleep-disordered breathing does not affect nocturnal dipping, as assessed by pulse transit time, in preschool children: evidence for early intervention to prevent adverse cardiovascular effects? Abstracts: abstract_id: PUBMED:24684978 Sleep-disordered breathing does not affect nocturnal dipping, as assessed by pulse transit time, in preschool children: evidence for early intervention to prevent adverse cardiovascular effects? Objective: Sleep-disordered breathing (SDB) is associated with reduced nocturnal dipping of blood pressure (BP) and sleep disruption in adults, and these features confer an increased risk of cardiovascular events. As SDB prevalence in children peaks during the preschool years, we investigated nocturnal dipping and sleep fragmentation in preschool children with SDB. Methods: Children (3-5 years; n=163) grouped by obstructive apnoea hypopnoea index (OAHI): control, no snoring history and OAHI ≤1 event/h; primary snoring, OAHI ≤1 event/h; mild SDB, >1-≤5 events/h; moderate-severe SDB, >5 events/h. Pulse transit time (PTT), an inverse continuous indicator of BP changes, and heart rate (HR) during total sleep time and the first period of rapid eye movement (REM), non-REM (NREM)1/2 and NREM3/4 sleep were expressed as percentage change from wake before sleep onset. The sleep fragmentation index (SFI) was calculated as the number of sleep stage transitions or awakenings per hour of sleep. Results: There were no group differences in the change in PTT or HR from wake to total sleep time or to individual sleep stages or in the proportion of children in the quartile with the smallest change in PTT during total sleep. Children with moderate-severe SDB had higher SFI than primary snoring (PS) or mild SDB groups (p<0.05 for both) and controls (p=0.07). Conclusions: In contrast to adults, nocturnal dipping is preserved in young children with SDB, despite increased sleep fragmentation. 
As there is evidence that nocturnal dipping is similarly preserved at school age, childhood may offer a window of opportunity for resolution of SDB when the cardiovascular effects are less marked. abstract_id: PUBMED:33872947 Nocturnal dipping of heart rate is impaired in children with Down syndrome and sleep disordered breathing. Background: Children with Down syndrome (DS) are at increased risk for sleep disordered breathing (SDB), which can have adverse effects on the cardiovascular system. In adults with SDB, nocturnal dipping of heart rate (HR) and blood pressure (BP) is reduced, and this is associated with an increased risk of future cardiovascular events. We aimed to compare nocturnal dipping of HR and pulse transit time (PTT) (a surrogate inverse measure of BP change) in children with DS and SDB to those of typically developing (TD) children with and without SDB. Methods: 19 children with DS (3-18 years) were age and sex matched with 19 TD children without SDB (TD-) and with 19 TD children with matched severity of SDB (TD+). Nocturnal dipping was assessed as the percentage change in HR and PTT from wake before sleep onset to total sleep, N2, N3 and REM sleep across the night and to the first cycle of sleep. Results: Children with DS exhibited reduced nocturnal dipping of HR during total sleep, N2, N3 and REM sleep and increased PTT (reduced BP dipping) in N2 sleep. Fewer children with DS exhibited a greater than 10% fall in HR between wake and N2 or REM sleep compared to TD+ children. Conclusions: Our findings demonstrate significantly reduced nocturnal dipping of HR in children with DS compared to TD children matched for SDB severity, suggesting SDB has a greater cardiovascular effect in these children. Further studies are required to fully understand the mechanisms involved and to assess if treatment of SDB improves nocturnal dipping.
abstract_id: PUBMED:24047534 Characterization of the acute pulse transit time response to obstructive apneas and hypopneas in preschool children with sleep-disordered breathing. Background: Surges in heart rate (HR) and blood pressure (BP) at apnea termination contribute to the hypertension seen in obstructive sleep apnea (OSA). Because childhood OSA prevalence peaks in the preschool years, we aimed to characterize the cardiovascular response to obstructive events in preschool-aged children. Methods: Clinically referred children aged 3-5 years were grouped by obstructive apnea-hypopnea index (OAHI) into the following: primary snoring (PS) (OAHI≤1 event/h [n=21]), mild OSA (OAHI>1-≤5 [n=32]), and moderate to severe (MS) OSA (OAHI>5 [n=28]). Beat-to-beat pulse transit time (PTT), an inverse continuous indicator of BP changes, and HR were averaged during the two halves (early and late) and during the peak after (post) each obstructive event and were expressed as percentage change from late- to post-event. Results: We analyzed 422 events consisting of 55 apneas and 367 hypopneas. A significant post-event increase in HR and fall in PTT occurred in all severity groups (P<.05 for all). A greater response was associated with OSA, nonrapid eye movement sleep (NREM), cortical arousal, hypopneas, and oxygen desaturation (P<.05 for all). Conclusions: Obstructive events elicit acute cardiovascular changes in preschool children. Such circulatory perturbations have been implicated in the development of hypertension, and our findings complement previous studies to suggest a cumulative impact of snoring on the cardiovascular system from childhood into adulthood. abstract_id: PUBMED:27993286 Pulse transit time changes in subjects exhibiting sleep disordered breathing. Introduction: Pulse Transit Time (PTT) represents a non-invasive marker of sleep fragmentation in OSAS. 
Little is known regarding PTT in sleepy subjects exhibiting nocturnal Inspiratory Flow Limitation (IFL) in the absence of apneas or desaturation. Materials And Methods: The IFL cohort was gender and age matched to subjects with OSAS and a cohort where Sleep Disordered Breathing (SDB)/IFL was absent ("Non Flow Limited" or NFL cohort); the PTT Arousal index (PTT Ar) was defined as the number of PTT arousals per hour. Results: 20 subjects meeting criteria for the IFL cohort were age and gender matched with OSAS and "NFL" subjects. Females comprised 65% of the IFL cohort; the mean BMI of the IFL cohort was significantly higher than the NFL cohort (34.25 v 28.90; p = 0.016) but not when compared to the OSAS cohort (34.25 v 36.31; p = 0.30). The PTT Ar in the IFL cohort (33.67 h) was significantly higher than the NFL cohort (23.89 h) but significantly lower than the OSAS cohort (55.21 h; F = 8.76; p < 0.001). PTT Ar was found to positively correlate with AHI (CC = 0.46; p < 0.001), ODI (CC = 0.47; p < 0.001) and RDI (CC = 0.49; p < 0.001). Within the IFL cohort, PTT Ar positively correlated with age (CC = 0.501; p = 0.024) but not with gender or BMI. Conclusion: The PTT Arousal Index increased proportionately with severity of SDB, with significantly higher markers of arousal in sleepy subjects exhibiting nocturnal IFL when compared to controls. Subjects exhibiting IFL were predominantly female with an elevated BMI. IFL may thus represent a significant pathogenic entity in the development of daytime sleepiness. abstract_id: PUBMED:24468130 Can pulse transit time be useful for detecting hypertension in patients in a sleep unit? Introduction: Pulse transit time (PTT) is the time that a pulse wave takes to travel between two different arterial points, and may be useful in estimating blood pressure.
This noninvasive technique, which does not add any cost to the procedure, offers the advantage of avoiding 'arousals' during sleep measurement as occurs with ambulatory blood pressure monitoring (ABPM). We aim to confirm the usefulness of PTT for the detection of hypertension, and to study the correlation between both measurements. Methods: Prospective observational study in a multidisciplinary sleep unit. We recruited 30 consecutive patients attending a sleep clinic and ran a baseline polysomnography followed by an ABPM the following day. Average systolic and diastolic blood pressure (SBP, DBP) by PTT were calculated and compared with ABPM results. In accordance with international guidelines, patients with mean nocturnal ABPM ≥ 120/70 mmHg were diagnosed as having arterial hypertension. Results: Mean age was 60 years; 66% were male, and 80% suffered from sleep apnoea (OSAS). Taking the ABPM as the reference technique, we found that the diagnostic sensitivity of PTT is 85% with a specificity of 88% in the case of SBP, with a positive predictive value of 85% and negative predictive value of 88%. By studying the relationship between mean SBP measured by ABPM and PTT, we found a linear correlation coefficient (R) of 0.88, showing a distribution of all subjects with a difference of between ±15 mmHg between tests. There is also a positive correlation between mean DBP measured for the two tests, with a weaker linear correlation. Conclusions: Pulse transit time shows a strong correlation with blood pressure (measured by ABPM). PTT provides continuous, non-invasive, cuffless blood pressure monitoring free of additional cost and could be an alternative for screening hypertension.
During rapid eye movement sleep, sympathetic tone increases substantially but is highly variable. Microarousals are associated with momentary bursts of sympathetic activity. Abnormal respiratory events progressively elevate sympathetic activity in proportion to the severity of oxyhemoglobin desaturation. These phenomena imply that cardiovascular markers of sympathetic activity such as peripheral arterial tone (PAT) and pulse transit time could be indirect tools for diagnosing sleep disordered breathing and characterizing sleep structure and fragmentation. Recent Findings: Measurement of variations in PAT coupled with pulse rate accelerations and desaturations in oximetry can be used to diagnose sleep apnea. Good agreement between both manually and automatically analyzed PAT recordings and polysomnography has been demonstrated during in-laboratory or at-home studies. Numerous validation studies against esophageal pressure have demonstrated that pulse transit time is the best noninvasive method for measurement of respiratory effort. Pulse transit time and PAT are sensitive techniques for arousal recognition, particularly in children and infants. There are specific sleep stage-dependent PAT patterns that allow for the recognition of rapid eye movement sleep and, in the case of nonrapid eye movement sleep, the separation of lighter stages from deeper, slow wave sleep. Elevated nocturnal sympathetic activity as documented by PAT attenuations is linked with chronically elevated blood pressure in humans. Summary: Cardiovascular markers of autonomic control during sleep permit not only the diagnosis of obstructive sleep apnea and estimation of sleep structure but are also linked with the prevalence of daytime hypertension. abstract_id: PUBMED:32457689 Observational Study of Pulse Transit Time in Children With Sleep Disordered Breathing. 
Background: Pulse transit time (PTT) is a non-invasive measure of arousals and respiratory effort for which we aim to identify threshold values that detect sleep disordered breathing (SDB) in children. We also compare the sensitivity and specificity of oximetry with the findings of a multi-channel study. Methods: We performed a cross-sectional observational study of 521 children with SDB admitted for multi-channel sleep studies (pulse oximetry, ECG, video, sound, movement, PTT) in a secondary care centre. PTT data were available in 368 children. Studies were categorised as normal; primary snoring; upper airway resistance syndrome (UARS); obstructive sleep apnoea (OSA); and "abnormal other." Receiver operator characteristic curves were constructed for different PTT (Respiratory swing; Arousal index) thresholds using a random sample of 50% of children studied (training set); calculated thresholds of interest were validated against the other 50% (test set). Study findings were compared with oximetry categories (normal, inconclusive, abnormal) using data (mean and minimum oxygen saturations; oxygen desaturations > 4%) obtained during the study. Results: Respiratory swing of 17.92 ms identified SDB (OSA/UARS) with sensitivity: 0.80 (C.I. 0.62-0.90) and specificity 0.79 (C.I. 0.49-0.87). PTT arousal index of 16.06/hour identified SDB (OSA/UARS) with sensitivity: 0.85 (95% C.I. 0.67-0.92) and specificity 0.37 (95% C.I. 0.17-0.48). Oximetry identified SDB (OSA) with sensitivity: 0.38 (C.I. 0.31-0.46) and specificity 0.98 (C.I. 0.97-1.00). Conclusions: PTT is more sensitive but less specific than oximetry at detecting SDB in children. The additional use of video and sound enabled detection of SDB in twice as many children as oximetry alone.
Pulse transit time (PTT), which refers to the travel time between two arterial sites within the same cardiac cycle, has been developed as a novel cuffless form of continuous blood pressure (BP) monitoring. The aim of this study was to investigate differences in BP parameters, including BP variability, between those assessed by beat-to-beat PTT-estimated BP (eBPBTB) and those assessed by intermittent PTT-estimated BP at fixed time intervals (eBPINT) in patients suspected of having sleep disordered breathing (SDB). In 330 patients with SDB (average age, 66.8 ± 11.9 years; 3% oxygen desaturation index [ODI], 21.0 ± 15.0/h) from 8 institutes, PTT-estimated BP was continuously recorded during the nighttime. The average systolic eBPBTB, maximum systolic and diastolic eBPBTB, standard deviation (SD) of systolic and diastolic eBPBTB, and coefficient of variation (CV) of systolic and diastolic eBPBTB were higher than the respective values of eBPINT (all P < 0.05). Bland-Altman analysis showed a close agreement between eBPBTB and eBPINT in average systolic BP and SD and CV of systolic BP, while there were disagreements in both minimum and maximum values of eBPBTB and eBPINT in patients with high systolic BP (P < 0.05). Although systolic BP variability incrementally increased according to the tertiles of 3%ODI in both eBPBTB and eBPINT (all P < 0.05), there was no difference in this tendency between eBPBTB and eBPINT. In patients with suspected SDB, the difference between eBPBTB and eBPINT was minimal, and there were disagreements regarding both the minimum and maximum BP. However, there was agreement regarding the index of BP variability between eBPBTB and eBPINT.
In support of this proposal, we summarize recent research from three literatures: (1) race differences (Black compared with White individuals) in nocturnal blood pressure non-dipping, (2) the association between disturbed sleep and nocturnal blood pressure non-dipping, and (3) race differences in disturbed sleep. Recent Findings: Black individuals are nearly twice as likely to have blood pressure non-dipping profiles compared with White individuals. This may be explained, in part, by sleep; shorter sleep duration, greater sleep fragmentation, less slow-wave sleep, and obstructive sleep apnea have each been associated with nocturnal blood pressure non-dipping. These sleep disturbances, in turn, are more common in Black compared with White individuals. Studies focused on nocturnal blood pressure non-dipping rarely assess sleep, and experimental evidence linking disturbed sleep with nocturnal blood pressure non-dipping in Black individuals is lacking. While mounting evidence from independent literatures suggests that disturbed sleep is a plausible, modifiable mechanism of race differences in nocturnal blood pressure non-dipping, definitive conclusions are premature given the current state of science. abstract_id: PUBMED:32234001 Cardiovascular Benefits of Extended-Time Nocturnal Hemodialysis. Hemodialysis (HD) remains the most utilized treatment for End-Stage Kidney Disease (ESKD) globally, mainly as conventional HD administered in 4 h sessions thrice weekly. Despite advances in HD delivery, patients with ESKD carry a heavy cardiovascular morbidity and mortality burden. This is associated with cardiac remodeling, left ventricular hypertrophy (LVH), myocardial stunning, hypertension, decreased heart rate variability, sleep apnea, coronary calcification and endothelial dysfunction. Therefore, intensive HD regimens closer to renal physiology were developed. They include longer, more frequent dialysis or both. 
Among them, Nocturnal Hemodialysis (NHD), carried out at night while the patient sleeps, provides efficient dialysis without excessive interference with daily activities. This regimen is closer to the physiology of the native kidneys. By providing increased clearance of small and middle molecular weight molecules, NHD can ameliorate uremic symptoms, control hyperphosphatemia and improve quality of life by allowing a liberal diet and free time during the day. Lastly, it improves reproductive biology, leading to successful pregnancies. Conversion from conventional to NHD is followed by improved blood pressure control with fewer medications, regression of LVH, improved LV function, improved sleep apnea, and stabilization of coronary calcifications. These beneficial effects have been associated, among others, with better extracellular fluid volume control, improved endothelial-dependent vasodilation, decreased total peripheral resistance, decreased plasma norepinephrine levels and restoration of heart rate variability. Some of these effects represent improvements in outcomes used as surrogates of hard outcomes related to cardiovascular morbidity and mortality. In this review, we consider the cardiovascular effects of NHD. Answer: The evidence suggests that sleep-disordered breathing (SDB) does not affect nocturnal dipping in preschool children when assessed by pulse transit time (PTT). A study involving children aged 3-5 years found that despite increased sleep fragmentation in children with moderate-severe SDB, nocturnal dipping was preserved across all groups, including control, primary snoring, mild SDB, and moderate-severe SDB. There were no significant differences in the change in PTT or heart rate (HR) from wake to total sleep time or to individual sleep stages. This contrasts with adults, where SDB is associated with reduced nocturnal dipping of blood pressure (BP) and HR, which confers an increased risk of cardiovascular events.
The findings suggest that childhood may offer a window of opportunity for the resolution of SDB when the cardiovascular effects are less marked, indicating a potential benefit for early intervention to prevent adverse cardiovascular effects (PUBMED:24684978). However, it is important to note that other studies have found impaired nocturnal dipping of HR in children with Down syndrome and SDB compared to typically developing children matched for SDB severity. This suggests that SDB may have a greater cardiovascular effect in children with Down syndrome, highlighting the need for further research to understand the mechanisms involved and assess if treatment of SDB improves nocturnal dipping in this population (PUBMED:33872947). Overall, while preschool children with SDB may preserve nocturnal dipping as assessed by PTT, there may be population-specific differences, such as in children with Down syndrome, that warrant further investigation and consideration in clinical practice.
Instruction: Early aggressive care and delayed recovery from whiplash: isolated finding or reproducible result? Abstracts: abstract_id: PUBMED:17530688 Early aggressive care and delayed recovery from whiplash: isolated finding or reproducible result? Objective: To test the reproducibility of the finding that early intensive care for whiplash injuries is associated with delayed recovery. Methods: We analyzed data from a cohort study of 1,693 Saskatchewan adults who sustained whiplash injuries between July 1, 1994 and December 31, 1994. We investigated 8 initial patterns of care that integrated type of provider (general practitioners, chiropractors, and specialists) and number of visits (low versus high utilization). Cox models were used to estimate the association between patterns of care and time to recovery while controlling for injury severity and other confounders. Results: Patients in the low-utilization general practitioner group and those in the general medical group had the fastest recovery even after controlling for important prognostic factors. Compared with the low-utilization general practitioner group, the 1-year rate of recovery in the high-utilization chiropractic group was 25% slower (adjusted hazard rate ratio [HRR] 0.75, 95% confidence interval [95% CI] 0.54-1.04), in the low-utilization general practitioner plus chiropractic group the rate was 26% slower (HRR 0.74, 95% CI 0.60-0.93), and in the high-utilization general practitioner plus chiropractic combined group the rate was 36% slower (HRR 0.64, 95% CI 0.50-0.83). Conclusion: The observation that intensive health care utilization early after a whiplash injury is associated with slower recovery was reproduced in an independent cohort of patients. The results add to the body of evidence suggesting that early aggressive treatment of whiplash injuries does not promote faster recovery. 
In particular, the combination of chiropractic and general practitioner care significantly reduces the rate of recovery. abstract_id: PUBMED:34755234 Early identification of patients at risk for delayed recovery of ambulation after elective abdominal surgery. Background: Recovery of preoperative ambulation levels 1 month after surgery represents an important patient-centered outcome. The objective of this study is to identify clinical factors associated with the inability to regain baseline preoperative ambulation levels 28 days postoperatively. Methods: This is a prospective cohort study enrolling patients scheduled for elective inpatient abdominal operations. Daily ambulation (steps/day) was measured with a wristband accelerometer. Preoperative steps were recorded for at least 3 full calendar days before surgery. Postoperatively, daily steps were recorded for at least 28 days. The primary outcome was delayed recovery of ambulation, defined as inability to achieve 50% of preoperative baseline steps at 28 days postoperatively. Results: A total of 108 patients were included. Delayed recovery (< 50% of baseline preoperative steps/day) occurred in 32 (30%) patients. Clinical factors associated with delayed recovery after multivariable logistic regression included longer operative time (OR 1.37, 95% CI 1.05-1.79), open operative approach (OR 4.87, 95% CI 1.64-14.48) and percent recovery on POD3 (OR 0.73, 95% CI 0.56-0.96). In addition, patients with delayed ambulation recovery had increased rates of postoperative complications (16% vs 1%, p < 0.01) and readmission (28% vs 5%, p < 0.01). Conclusion: After elective inpatient abdominal operations, nearly one in three patients do not recover 50% of their baseline preoperative steps 28 days postoperatively. Factors that can be used to identify these patients include longer operations, open operations and low ambulation levels on postoperative day #3. 
These data can be used to target rehabilitation efforts aimed at patients at greatest risk for poor ambulatory recovery. abstract_id: PUBMED:30223994 Early Mobilization in the Intensive Care Unit to Improve Long-Term Recovery. This article outlines the effect of early mobilization on the long-term recovery of patients following critical illness. It investigates the safety of performing exercise in this environment, the differing types of rehabilitation that can be provided, and the gaps remaining in evidence around this area. It also attempts to assist clinicians in prescription of exercise in this cohort while informing all readers about the impact that mobilization can have on the outcomes of intensive care patients. abstract_id: PUBMED:32775338 Risk Factors for Delayed Surgical Recovery and Massive Bleeding in Skull Base Surgery. Background: To determine factors that delay surgical recovery and increase intraoperative hemorrhage in skull base surgery. Methods: Factors related to delayed postoperative recovery were retrospectively reviewed in 33 patients who underwent open skull base surgery. Early and late recovery phases were assessed as "days required to walk around the ward (DWW)" and "length of hospital stay (LHS)," respectively. Intraoperative blood loss was calculated every hour and analyzed in 4 steps, i.e., craniotomy and intracranial manipulation, cranial fossa osteotomy, extracranial osteotomy, and reconstruction. Results: More than 4,000 mL of blood loss (B = 2.7392, Exp[B] = 15.4744; 95% CI 1.1828-202.4417) and comorbidity (B = 2.3978, Exp[B] = 10.9987; 95% CI 1.3534-98.3810) significantly prolonged the DWW; the occurrence of postoperative complications significantly delayed the LHS (p = 0.0316).
Tumor invasion to the hard palate, the maxillary sinus, the pterygopalatine fossa, the base of the pterygoid process, the sphenoid sinus, the middle cranial fossa, and the cavernous sinus and a long operation time (>13 h) were associated with increased total hemorrhage. The optimal cut-off hemorrhage volume associated with total massive blood loss in craniotomy and intracranial manipulation (AUC = 0.8364), cranial fossa osteotomy (AUC = 0.8000), and extracranial osteotomy (AUC = 0.8545) was 1,111, 750, and 913 mL, respectively. Persistent infection (6%) and neuropsychiatric disorder (6%) are direct causes of delayed LHS. Conclusion: Blood loss, comorbidity, and postoperative complications were risk factors for delayed surgical recovery. Meticulous preoperative planning, intraoperative surefire hemostasis, and perioperative holistic management are prerequisites for safe skull base surgery. abstract_id: PUBMED:29145820 Demographic and need factors of early, delayed and no mental health care use in major depression: a prospective study. Background: Despite the availability of evidence based treatments, many people with major depression receive no or delayed professional treatment, which may put them at risk for adverse outcomes. The aim of this study was to examine which demographic and need factors distinguish early, delayed and no treatment use. Methods: Data were obtained from the Netherlands Mental Health Survey and Incidence Study-2 (NEMESIS-2). People with a diagnosis of major depression in the past 12 months were included (N = 434). Mental health care use was assessed during this same period and at follow up (three years later). Multinomial regression analysis was used to distinguish early, delayed and no mental health care users with respect to demographic and need factors. Results: The majority of participants accessed treatment early (62%). 
Early treatment users were characterized by more severe and persistent symptoms and were more likely not to have a partner compared to no treatment users. The majority of those without treatment reached remission in three years (85%). Delayed treatment users were, compared to early users, characterized by relatively mild symptoms and a persistent or new major depressive disorder at follow up. Conclusions: Early access to treatment and the finding that need factors determine mental health care use among people with depression show that the filters along the pathway to treatment are not influenced by unfavorable determinants like education or age. abstract_id: PUBMED:38076537 The impact of cognitive reserve on delayed neurocognitive recovery after major non-cardiac surgery: an exploratory substudy. Introduction: Delayed neurocognitive recovery is a common and severe complication after surgery and anesthesia with an adverse impact on daily living, morbidity, and mortality. High cognitive reserve may mitigate the development of delayed neurocognitive recovery, however, supporting data is lacking. We aimed to assess the association between cognitive reserve and delayed neurocognitive recovery in the early postoperative period. Methods: This is a substudy of two prospective observational studies. Adult patients undergoing elective major non-cardiac surgery, who were fluent in German, were eligible for study participation. Patients with any pre-existing central nervous system disorders were excluded. Cognitive reserve was assessed using the Cognitive Reserve Index questionnaire. Delayed neurocognitive recovery was defined as a decline in cognitive function compared with baseline assessments and was evaluated with a battery of neuropsychological tests on the day of hospital admission and between day three post procedure and before hospital discharge. Results: A total of 67 patients with a median age of 67 [IQR: (63-73)] years were included in our analysis. 
We found delayed neurocognitive recovery in 22.4% of patients. There was a significant association between Cognitive Reserve Index questionnaire total score and the occurrence of delayed neurocognitive recovery in the early postoperative period [OR = 0.938, (95% CI, 0.891; 0.988), p = 0.015]. Conclusion: Higher cognitive reserve in elderly patients undergoing major non-cardiac surgery decreases the risk for subsequent delayed neurocognitive recovery in the early postoperative period. abstract_id: PUBMED:30338259 Enhanced Recovery in Surgical Intensive Care: A Review. Patients are admitted to the surgical intensive care unit (SICU) after emergency surgery or, after elective surgery, for further support or to manage coexisting comorbidities. The implementation of ERAS (enhanced recovery after surgery) protocols should decrease the need for ICU beds, but there will always be unpredicted complications after surgery, and these will require individual management. What can we do for our surgical patients in the ICU to further enhance their recovery? To promote early enhanced recovery in the SICU, three areas need to be addressed: sedation, analgesia, and delirium. Tools for measurement and protocols for management in these three areas should be developed to ensure best practice in each SICU. The fourth important area is nutrition: preoperative screening and postoperative measurement of nutritional state also need to be developed in the SICU. The fifth important area is early mobilization. ERAS protocols encourage early mobilization of critically ill patients, even those on mechanical ventilation. Early mobilization is possible and should be implemented by a dedicated multidisciplinary ICU team. All team members must be familiar with the protocols to be able to implement them in their field of expertise. Personal and professional attitudes are critical for implementation.
At the core of all our efforts should be the patient and his or her well-being. abstract_id: PUBMED:27160548 Case finding in early prevention networks - a heuristic for ambulatory care settings Background: One goal of early prevention is the support of families with small children up to three years who are exposed to psychosocial risks. The identification of these cases is often complex and not well-directed, especially in the ambulatory care setting. Objective: Development of a model of a feasible and empirically based strategy for case finding in ambulatory care. Methods: Based on the risk factors of postpartal depression, lack of maternal responsiveness, parental stress with regulation disorders, and poverty, a lexicographic and non-compensatory heuristic model with simple decision rules will be constructed and empirically tested. For this purpose, the original data set from an evaluation of the pediatric documentary form on psychosocial issues of families with small children in well-child visits will be used and reanalyzed. Results: The first diagnostic step in the non-compensatory and hierarchical classification process is the assessment of postpartal depression, followed by maternal responsiveness, parental stress, and poverty. The classification model identifies 89.0% of the cases from the original study. Compared to the original study, the decision process becomes clearer and more concise. Conclusions: The evidence-based and data-driven model exemplifies a strategy for the assessment of psychosocial risk factors in ambulatory care settings. It is based on four evidence-based risk factors and offers a quick and reliable classification. A further advantage of this model is that once a risk factor is identified the diagnostic procedure is stopped and the counselling process can commence. For further validation of the model, studies in well-suited early prevention networks are needed.
abstract_id: PUBMED:26309485 Rehabilitation protocol after arthroscopic rotator cuff repair: early versus delayed motion. Purpose: To evaluate the effectiveness of early and delayed motion in rehabilitation after arthroscopic rotator cuff repair using a meta-analysis of randomized controlled trials. Materials And Methods: Electronic searches of the CENTRAL, PubMed, and EMBASE databases were used to identify randomized controlled trials that evaluated the effectiveness and safety of early and delayed motion for rehabilitation after arthroscopic rotator cuff repair. The methodological quality of the studies was assessed by the Cochrane Collaboration tool for assessing risk of bias. Results: Four randomized controlled trials involving a total of 348 shoulders were included. Of these, two were rated as high quality and two were rated as moderate quality. No significant publication bias was detected by Egger's test, and sensitivity analysis demonstrated a statistically robust result. Our meta-analysis indicated that early motion after arthroscopic rotator cuff repair resulted in a significantly greater recovery of external rotation from pre-operation to 3, 6, and 12 months post-operation (P < 0.05) and forward elevation ability from pre-operation to 6 months post-operation (P < 0.05), as compared to when motion was delayed. However, early motion resulted in a non-significant excess (P > 0.05) in the rate of recurrence, compared to delayed motion. In addition, there were significantly higher American Shoulder and Elbow Surgeons (ASES) rating scale scores at 12 months post-operation (P < 0.05) and healing rates (P < 0.05) with delayed motion after arthroscopic rotator cuff repair, compared with early motion. Conclusion: Our meta-analysis included data from randomized controlled trials and demonstrated that delayed motion after arthroscopic rotator cuff repair resulted in higher healing rates and ASES scores than early motion.
Alternatively, early motion increased range of motion (ROM) recovery, but also increased the rate of recurrence compared to delayed motion. abstract_id: PUBMED:35028197 Fight or Flee: An Interesting Case of Snakebite With Delayed Recovery. Snakebite is a neglected tropical disease, which is very common in the Indian subcontinent. The severity of respiratory muscle paralysis and the delay in recovery depend upon the dose of the venom injected, the severity of the venom, the species of the snake, the duration of presentation to the hospital, and the time and dose of administration of anti-snake venom (ASV). The reasons for this delayed neuromuscular recovery still remain an enigma. We highlight such a case of a young adult who had delayed neuromuscular recovery and prolonged ventilatory support following a neurotoxic snakebite. Answer: The study detailed in abstract PUBMED:17530688 aimed to test the reproducibility of the finding that early intensive care for whiplash injuries is associated with delayed recovery. The analysis of data from a cohort study of 1,693 Saskatchewan adults who sustained whiplash injuries showed that patients who received low-utilization care from general practitioners had the fastest recovery. In contrast, high-utilization care, particularly when combining general practitioner and chiropractic care, was associated with a significantly slower rate of recovery. This suggests that the observation that intensive health care utilization early after a whiplash injury is associated with slower recovery is not an isolated finding but a reproducible result. The study concludes that early aggressive treatment of whiplash injuries does not promote faster recovery and that the combination of chiropractic and general practitioner care significantly reduces the rate of recovery.
Instruction: Evaluation of arterial stiffness in children with Williams syndrome: Does it play a role in evolving hypertension? Abstracts: abstract_id: PUBMED:11526372 Evaluation of arterial stiffness in children with Williams syndrome: Does it play a role in evolving hypertension? Background: Pathologic studies and surgical observations of thickened aortic walls have suggested an increase in aortic stiffness in patients with Williams syndrome. However, in vivo objective evaluation of aortic and arterial stiffness in Williams syndrome are lacking. Moreover, systemic hypertension, although prevalent in Williams syndrome, does not have a well-defined mechanism in this syndrome. Therefore, the purpose of this study was to quantitate aortic stiffness and arterial compliance in an objective manner, as well as to determine their roles in development of hypertension, in children with Williams syndrome. Methods: We studied 13 patients with Williams syndrome (aged 3-12 years) and 16 age-matched control subjects. Aortic stiffness was calculated from the beta index as follows: beta = (ln[P(s)/P(d)])/ ([D(s) - D(d)]/D(d)), where P(s) and P(d) are systolic and diastolic blood pressures and D(s) and D(d) are systolic and diastolic aortic dimensions, respectively. Arterial compliance (C) was calculated by the area method: C= (A(d) x CO x CL) / (A(t) x [P(es) - P(d)]), where A(t) is the total area and A(d) is the area under the diastolic portion of the arterial pulse tracing, CO is the cardiac output, CL is the cycle length, and P(es) is aortic end-systolic pressure. Results: In patients with Williams syndrome, the beta index was 2-fold higher than in control patients (9.02 +/- 3.15 vs 4.43 +/- 0.96, P <.005). Moreover, there was a strong positive correlation between the beta index and the systolic blood pressure (r = 0.8 and P <.0001). Compliance was decreased by 42% (0.41 +/- 0.11 vs 0.71 +/- 0.10 mL/mm Hg, P <.05), suggesting decreased arterial compliance. 
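As a quick numerical illustration of the two formulas quoted above, the beta index and the area-method compliance can be computed directly. This is a minimal sketch; the input values below are hypothetical and chosen only to show the arithmetic, not patient data from the study.

```python
import math

def beta_index(p_sys, p_dia, d_sys, d_dia):
    # beta = ln(Ps/Pd) / ((Ds - Dd)/Dd)
    # Ps, Pd: systolic/diastolic blood pressure (mm Hg);
    # Ds, Dd: systolic/diastolic aortic dimension (any consistent unit).
    return math.log(p_sys / p_dia) / ((d_sys - d_dia) / d_dia)

def arterial_compliance(a_dia, a_total, cardiac_output, cycle_length, p_es, p_dia):
    # C = (Ad * CO * CL) / (At * (Pes - Pd))  (area method)
    # Ad: area under the diastolic portion of the arterial pulse tracing;
    # At: total area; Pes: aortic end-systolic pressure.
    return (a_dia * cardiac_output * cycle_length) / (a_total * (p_es - p_dia))

# Hypothetical illustrative inputs (not data from the study):
beta = beta_index(p_sys=110, p_dia=70, d_sys=2.2, d_dia=2.0)
c = arterial_compliance(a_dia=0.6, a_total=1.0, cardiac_output=3.0,
                        cycle_length=0.8, p_es=90, p_dia=70)
```

A stiffer aorta distends less for the same pulse pressure, so the denominator of the beta index shrinks and beta rises, which is consistent with the roughly two-fold higher beta reported for the Williams syndrome group.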
Conclusions: Our study indicates that in vivo arterial stiffness is increased in patients with Williams syndrome. We speculate that increased arterial stiffness may be the predisposing cause of systemic hypertension in Williams syndrome. abstract_id: PUBMED:26232234 Chronic antihypertensive treatment improves pulse pressure but not large artery mechanics in a mouse model of congenital vascular stiffness. Increased arterial stiffness is a common characteristic of humans with Williams-Beuren syndrome and mouse models of elastin insufficiency. Arterial stiffness is associated with multiple negative cardiovascular outcomes, including myocardial infarction, stroke, and sudden death. Therefore, identifying therapeutic interventions that improve arterial stiffness in response to changes in elastin levels is of vital importance. The goal of this study was to determine the effect of chronic pharmacologic therapy with different classes of antihypertensive medications on arterial stiffness in elastin insufficiency. Elastin-insufficient mice 4-6 wk of age and wild-type littermates were subcutaneously implanted with osmotic micropumps delivering a continuous dose of one of the following: vehicle, losartan, nicardipine, or propranolol for 8 wk. At the end of treatment period, arterial blood pressure and large artery compliance and remodeling were assessed. Our results show that losartan and nicardipine treatment lowered blood pressure and pulse pressure in elastin-insufficient mice. Elastin and collagen content of abdominal aortas as well as ascending aorta and carotid artery biomechanics were not affected by any of the drug treatments in either genotype. By reducing pulse pressure and shifting the working pressure range of an artery to a more compliant region of the pressure-diameter curve, antihypertensive medications may mitigate the consequences of arterial stiffness, an effect that is drug class independent. 
These data emphasize the importance of early recognition and long-term management of hypertension in Williams-Beuren syndrome and elastin insufficiency. abstract_id: PUBMED:20729725 Increased arterial stiffness in children with Williams syndrome and normal blood pressure. Objective: High blood pressure (BP), probably related to the loss of arterial elasticity, frequently arises in patients affected by Williams syndrome (WS). This study aimed at proving that arterial compliance, evaluated by automated recording of the QKd interval, was lower even in young normotensive patients with WS than in a control group of healthy individuals with normal BP values. Methods: Twenty-nine children (12.8±4.1 years) suffering from WS (both hypertensive and normotensive) were enrolled. Arterial stiffness was measured by the standardized noninvasive QKd100-60 method. 24-h ambulatory BP monitoring and transthoracic echocardiography were also performed. Results: The WS hypertensive children showed significant differences in ambulatory BP monitoring profile in comparison with WS normotensive individuals (systolic BP: P<0.0001, diastolic BP: P<0.0001, mean BP: P<0.0001). All of the 29 WS patients in the study showed disadvantageous differences in QKd100-60 value (P<0.0001) compared with the control group. No statistically significant differences were found in QKd100-60 value between the hypertensive and normotensive children affected by WS (P=not significant). Conclusion: Our data show that arterial stiffness is increased in WS, even in patients with normal BP. It might signify a poor future outcome even in those WS patients with apparently no severe cardiovascular involvement. abstract_id: PUBMED:17581225 Genetic influences on the arterial wall. 1. Arterial stiffness, which has independent predictive value for cardiovascular events, seems to have a genetic component, largely independent of the influence of blood pressure and other cardiovascular risk factors. 2.
In animal models of essential hypertension (stroke-prone spontaneously hypertensive rats and spontaneously hypertensive rats), structural modifications of the arterial wall include an increase in the number of elastin-smooth muscle cell connections and smaller fenestrations of the internal elastic lamina, possibly leading to redistribution of the mechanical load towards elastic materials. These modifications may give rise to mechanisms explaining why changes in arterial wall material accompanying wall hypertrophy in these animals are not associated with an increase in arterial stiffness. 3. In monogenic connective tissue diseases (Marfan, Williams and Ehlers-Danlos syndromes) and the corresponding animal models, precise characterization of the arterial phenotype makes it possible to determine the influence of abnormal, genetically determined, wall components on arterial stiffness. 4. Such studies have highlighted the role of extracellular matrix signalling in the vascular wall and have shown that elastin and collagen not only display elasticity or rigidity, but are also involved in the control of smooth muscle cell function. 5. These data provide strong evidence that arterial stiffness is affected by the amount and density of stiff wall material and the spatial organization of that material. abstract_id: PUBMED:24126171 Williams syndrome predisposes to vascular stiffness modified by antihypertensive use and copy number changes in NCF1. Williams syndrome is caused by the deletion of 26 to 28 genes, including elastin, on human chromosome 7. Elastin insufficiency leads to the cardiovascular hallmarks of this condition, namely focal stenosis and hypertension. Extrapolation from the Eln(+/-) mouse suggests that affected people may also have stiff vasculature, a risk factor for stroke, myocardial infarction, and cardiac death.
NCF1, one of the variably deleted Williams genes, is a component of the nicotinamide adenine dinucleotide phosphate (NADPH) oxidase complex and is involved in the generation of oxidative stress, making it an interesting candidate modifier for vascular stiffness. Using a case-control design, vascular stiffness was evaluated by pulse wave velocity in 77 Williams cases and matched controls. Cases had stiffer conducting vessels than controls (P<0.001), with increased stiffness observed in even the youngest children with Williams syndrome. Pulse wave velocity increased with age at comparable rates in cases and controls, and although the degree of vascular stiffness varied, it was seen in both hypertensive and normotensive Williams participants. Use of antihypertensive medication and extension of the Williams deletion to include NCF1 were associated with protection from vascular stiffness. These findings demonstrate that vascular stiffness is a primary vascular phenotype in Williams syndrome and that treatment with antihypertensives or agents inhibiting oxidative stress may be important in managing patients with this condition, potentially even those who are not overtly hypertensive. abstract_id: PUBMED:9235796 Arterial hypertension and blood pressure profile in patients with Williams-Beuren syndrome The prevalence of hypertension and the diurnal blood pressure pattern were investigated in patients with Williams-Beuren syndrome (WBS) by blood pressure measurements in 142 children, adolescents, and young adults (female n = 62, male n = 80; median age 6.5 years (0.1-34.3 years)) and evaluation of ambulatory blood pressure data from 45 patients (female n = 21, male n = 24; median age 7.8 years (1-23.8 years)). Measurements revealed systolic hypertension in 46.5% of 142 patients, diastolic hypertension occurred in 36.6% (i.e. actual pressure > 95 percentile). According to the ambulatory data 42.2% of 45 patients had hypertension (mean arterial pressure > normal + 2SD). 
The nocturnal decline of the blood pressure was normal in hypertensive patients but reduced in normotensives (p < 0.01 vs normals). Males were more often hypertensive than females (46% vs 38%). Hypertensives had a higher body mass index than normotensives (19.5 vs 16.6 kg/m2, p < 0.05). In normo- and hypertensive WBS patients, mean heart rates were elevated during day- and nighttime (p < 0.02 vs normals), the latter due to a reduced nocturnal decline. The prevalence of hypertension in WBS patients amounts to about 40%, thus being four- to eight-fold in comparison to healthy young adults or children. The diurnal blood pressure pattern and the elevated heart rates indicate that an increased arterial stiffness due to the vascular disease in the WBS and augmented sympathetic activity might play a role in the genesis of hypertension. Thus, effective antihypertensive treatment is likely to become difficult. In our experience, beta-blocking agents are often successful in hypertensive WBS patients. abstract_id: PUBMED:35818378 Coarctation of the Aorta with Arterial Tortuosity Syndrome - A Case Report with Cinematic Rendering. Coarctation of the aorta (CoA) with arterial tortuosity syndrome (ATS) is a rare hereditary condition characterized by elongation and tortuosity of the medium- and large-sized arteries, pulmonary hypertension, pulmonary artery stenosis, and constriction or narrowing of the aorta. Considering the non-specific indications and the overlap of ATS with other disorders of the connective tissue such as Williams Beuren syndrome (elastin), Marfan syndrome (fibrillin-1) and Ehlers Danlos syndrome (Type III collagen), it is vital to keep a high diagnostic suspicion in the analysis of the condition. We therefore report on a rare occurrence of coarctation of the aorta with arterial tortuosity syndrome in a teenage female who initially presented with severe resistant hypertension, providing additional literature to aid in the differential diagnosis of CoA with ATS.
Cinematic rendering delivered a markedly enhanced view of the condition, allowing for greater accuracy in the diagnosis. abstract_id: PUBMED:29227713 A Case Report of in Utero Williams Syndrome Arterial Malformation. Introduction: Williams syndrome (WS), an autosomal dominant condition linked to gene deletions on chromosome 7, can cause supravalvular aortic narrowing and death. WS-associated mutations are believed to disrupt arterial elastin fibers, causing smooth muscle malformation, endomysial fibrosis and severe hypertension. Previous studies demonstrated arterial ultrastructural anomalies in adult WS patients. It is not presently known if the arterial phenotype of WS is also present in utero. Case Report: A 34-week stillborn was delivered to a 28-year-old with genetically confirmed WS. Aortic tissue from the patient was compared with non-WS fetal aorta of similar gestational age using EM and light microscopy. Both sections were taken from the proximal aortic root. This demonstrated internal elastic lamina disruption, malformed elastic fibers, smooth muscle proliferation and abnormal collagen fibers, consistent with adult WS phenotype. Conclusion: Our analysis indicated the cardiovascular changes of WS in a fetus as young as 34 weeks. abstract_id: PUBMED:12219191 Arterial hypertension in a child with Williams-Beuren syndrome (7q11.23 chromosomal deletion). We report the case of a 7-year-old male child diagnosed with Williams-Beuren syndrome and arterial hypertension refractory to clinical treatment. The diagnosis was confirmed by genetic study. Narrowing of the descending aorta and stenosis of the renal arteries were also diagnosed. Systemic vascular alterations caused by deletion of the elastin gene may occur early in individuals with Williams-Beuren syndrome, leading to the clinical manifestation of systemic arterial hypertension refractory to drug treatment. 
abstract_id: PUBMED:11760021 Anomalies of the abdominal aorta in Williams-Beuren syndrome--another cause of arterial hypertension. Unlabelled: Vascular disease in Williams-Beuren syndrome is based on an elastin arteriopathy which may cause stenoses in small and great vessels. This study presents the pattern of stenotic lesions of the abdominal aorta and the incidence of arterial hypertension. From 112 patients with Williams-Beuren syndrome followed since 1975, 25 patients were studied by aortography. The diameter of the thoracic aorta and the change in diameter to the iliac bifurcation were compared with normal data. Renal artery stenosis was suspected when the proximal vessel diameter was less than 50% of the distal diameter. Of the 25 patients, 20 had vascular stenosis of whom 19 patients were affected by segmental narrowing either of the thoracic aorta (n=9) or the abdominal aorta (n = 7) or both (n = 3). Hypoplasia of the abdominal aorta was characterised by the smallest diameters at the renal artery level and an increased diameter of the infrarenal abdominal aorta. A total of 11 patients had renal arterial stenosis, associated with narrowing of other aortic segments in 10 cases. Only one patient had a solitary stenosis of the renal artery. Arterial hypertension was diagnosed in 17 patients, 2 of them had no vascular lesions; in the remaining 15 patients stenosis was present in more than one segment (aorta 6, renal artery stenosis 1, both 8). Conclusion: Narrowing of the abdominal aorta in patients with Williams-Beuren syndrome is a frequent morphological manifestation of the arteriopathy. Isolated renal arterial stenosis was rare, since it was more frequently combined with a narrowed aorta. Hypertension is a common symptom in the affected group and must be regarded as a manifestation of generalised arteriopathy rather than renal hypoperfusion. 
Answer: The evaluation of arterial stiffness in children with Williams syndrome suggests that it does play a significant role in the development of hypertension in this population. Studies have shown that patients with Williams syndrome exhibit increased arterial stiffness compared to control subjects. For instance, one study found that the beta index, a measure of aortic stiffness, was two-fold higher in patients with Williams syndrome, and there was a strong positive correlation between the beta index and systolic blood pressure (PUBMED:11526372). This suggests that increased arterial stiffness may be a predisposing cause of systemic hypertension in Williams syndrome. Further supporting this, another study reported that arterial stiffness is increased in Williams syndrome even in patients with normal blood pressure, indicating that arterial stiffness might signify a poor future outcome even in those patients without severe cardiovascular involvement (PUBMED:20729725). This finding is critical as it implies that arterial stiffness is a primary vascular phenotype in Williams syndrome, which can be present even before the onset of hypertension. Moreover, genetic influences on the arterial wall have been identified, with arterial stiffness having a genetic component that is largely independent of blood pressure and other cardiovascular risk factors. In monogenic connective tissue diseases like Williams syndrome, abnormalities in genetically determined wall components, such as elastin, significantly influence arterial stiffness (PUBMED:17581225). Additionally, the use of antihypertensive medication has been associated with protection from vascular stiffness in Williams syndrome, suggesting that early recognition and long-term management of hypertension in this condition are important (PUBMED:24126171). 
This is further corroborated by the observation that antihypertensive treatments, such as beta-blocking agents, have been successful in managing hypertension in Williams syndrome patients (PUBMED:9235796). In summary, the available evidence strongly indicates that arterial stiffness is a key factor in the evolution of hypertension in children with Williams syndrome, and it may be present even in the absence of elevated blood pressure. Early intervention and management of hypertension, potentially including the use of antihypertensive medications, are crucial in this patient population.
Instruction: Severe reactions to iodinated contrast agents: is anaphylaxis responsible? Abstracts: abstract_id: PUBMED:26001221 Anaphylactoid reactions to the nonvascular administration of water-soluble iodinated contrast media. Objective: Anaphylactoid-like reactions occur during the nonvascular administration of iodinated contrast media. Many of these reactions have been severe. These reactions have occurred with many procedures, including gastrointestinal imaging, cystography, sialography, and hysterosalpingography. Conclusion: This article reviews reports of these reactions. It also reviews what the literature recommends concerning how to deal with individuals undergoing these procedures who are at a higher risk for anaphylactoid-like reactions. abstract_id: PUBMED:9657021 Mechanism of severe, immediate reactions to iodinated contrast material Immediate reactions to iodinated contrast material are usually supposed to be due to complement activation. Histamine has not been proven responsible for the clinical signs. However, the results described in the literature are controversial: they have been obtained from non-reacting patients or during minor or moderate reactions. We suggest studying mediator release during unexpected severe reactions and performing skin tests and IgE measurements in patients with proven release of mediators.
Simple schemes including hydration protocols for prevention of CIN, medication for prophylaxis of iodine-induced thyrotoxicosis with thyreostatics, and prophylaxis of anaphylactoid reactions with histamine antagonists and corticosteroids are suggested. abstract_id: PUBMED:11591924 Severe reactions to iodinated contrast agents: is anaphylaxis responsible? Purpose: The etiology of severe reactions following injection of iodinated contrast agents is the subject of controversy. No consensus has been established regarding the management of patients at risk, risk factors and premedication because in most published cases no diagnostic exploration has been carried out on patients who have experienced a severe reaction. Materials and Methods: Diagnosis of drug anaphylaxis is based on clinical history, proof of mediator release and drug-specific IgE antibodies (when the technique is available) or cutaneous tests (when a direct technique is not available). Results: This approach has been adopted for etiologic diagnosis of 5 clinical cases of severe anaphylactoid reactions (including one death) following the injection of ionic and nonionic contrast agents. Clinical symptoms, biology and cutaneous tests are consistent with anaphylaxis. Conclusion: Any patient who has had a severe anaphylactoid reaction following injection of a contrast agent should undergo an allergological assessment to confirm the diagnosis and identify the culprit contrast agent. Indeed, no premedication has proved efficient for the prevention of subsequent allergic reactions. abstract_id: PUBMED:9769830 Mechanisms of severe, immediate reactions to iodinated contrast material. Purpose: To measure and elucidate the mechanisms of presumed mediators of unexpected severe, immediate reactions to iodinated contrast materials. Materials And Methods: In a multicenter study, 20 patients with mild to severe reactions to iodinated contrast material and 20 control subjects without reactions were evaluated.
Ionic contrast material was associated with 18 (90%) of 20 reactions. Concentrations of plasma histamine, tryptase, urinary methylhistamine, specific immunoglobulin E (IgE) against ioxitalamate or ioxaglate, and the anaphylatoxins C3a and C4a were measured with radioimmunoassays; complement C3 and C4 levels were measured with nephelometry. Results: Histamine levels were increased in 14 patients; tryptase levels, in 16; and methylhistamine levels, in six. Histamine and tryptase values correlated with the severity of the reaction (P < .02 and P < .004, respectively). Significantly higher levels of specific IgE against ioxaglate (P < .005) and ioxitalamate (P = .045) were found in patients. No differences were found for complement fractions. Skin test results in two patients with life-threatening reactions were positive for the administered contrast material. Conclusion: Histamine release and mast cell triggering are related to severe reactions. An IgE-related mechanism is strongly suspected. Radiologists should be trained to identify and treat anaphylactic shock in patients who react to iodinated contrast material. abstract_id: PUBMED:37427172 Knowledge assessment of radiologists, radiology residents, and radiographers regarding contrast materials and management of adverse drug reactions occurring due to contrast materials: a cross-sectional study. The purpose of this study was to assess the knowledge of the radiology personnel regarding contrast media used in radiology and the management of associated adverse drug reactions. Methods: A questionnaire-based cross-sectional study was conducted from 21 February to 31 March 2019 in five major hospitals of Peshawar, Pakistan. A 30-item questionnaire was adopted from the existing literature containing both open and closed-ended questions and the authors conducted a pilot study among 25 participants to assess the face validity of the tool. A universal sampling technique was adopted. 
Descriptive statistics were used to summarize the findings of the study. Results: Less than half of the participants could correctly classify iodinated contrast media used in radiology on the basis of ionicity and osmolality. Sixty-three percent chose severe contrast material-induced allergic reaction as a type I hypersensitivity reaction, while almost half of them correctly identified the features of iodinated contrast media associated with lesser side effects. Very few of them (6.7%) had read the ACR 2018 manual on contrast media. Regarding the risk factors for acute adverse reactions and signs/symptoms of anaphylaxis, few could answer satisfactorily. Twenty-eight percent of participants correctly identified epinephrine as the initial medication in an anaphylactic reaction. Regarding the preferred route of administration, concentration and dose of epinephrine, the participants' correct response was quite poor (43.8%, 6.7%, and 8.6%, respectively). More than 65% of participants could name a single intravenous corticosteroid and antihistamine. Conclusion: Radiology personnel's knowledge regarding contrast material and management of severe contrast material-induced allergic reactions is unsatisfactory. abstract_id: PUBMED:32617458 Zero-iodinated contrast retrograde percutaneous coronary interventions of chronic total occlusions using gadolinium and imaging guidance: a case report of a patient with severe anaphylaxis to iodinated contrast. Background: Gadolinium-based contrast agents have been used as an alternative to iodinated contrast agents for simple percutaneous coronary interventions (PCIs) in patients with obstructive coronary artery disease. Their use has been limited in complex PCI due to poor image quality with gadolinium contrast. Significant dilution of gadolinium is required in an effort to avoid malignant ventricular arrhythmias. Further, the recommended amount of gadolinium contrast that can be used is very limited.
Case Summary: We describe a case of a patient with severe anaphylaxis to iodinated contrast agents despite pre-exposure prophylaxis with steroids and histamine blockers who underwent intravascular imaging-guided PCI of chronic total occlusions (CTOs) of the left anterior descending and dominant left circumflex arteries using gadolinium contrast. Discussion: In patients with anaphylaxis to iodinated contrast agents, (i) complex coronary interventions including CTO PCI can be successfully performed without use of iodinated contrast, and (ii) the combination of intravascular ultrasound guidance and gadolinium-based contrast agents can be safely and effectively used to perform complex PCI. abstract_id: PUBMED:79585 Anaphylactoid reactions to iodinated contrast material. A review of the literature involving anaphylactoid reactions to iodinated contrast material (ICM) suggests that the reactions are nonantibody-mediated but that a complex activation of inflammatory mediators occurs. Histamine release and/or complement activation has been demonstrated in both in vitro and in vivo experimental systems. It appears that pretreatment of selected cases (those patients previously exhibiting an anaphylactoid reaction) is effective in reducing the frequency and severity of subsequent reactions when readministration is necessary. abstract_id: PUBMED:33071787 Hypersensitivity Reactions to Multiple Iodinated Contrast Media. The incidence of hypersensitivity reactions (HSRs) to iodinated contrast media (ICM) has risen over recent years, representing an important health problem. HSRs to ICMs are classified into immediate reactions (IRs) and non-immediate reactions (NIRs) according to whether they occur within 1 h or longer after ICM administration. The diagnosis of HSRs to ICM is complex, as skin test (ST) sensitivity ranges widely and drug provocation test (DPT) protocols are heterogeneous.
In this manuscript, we describe the clinical characteristics of a series of patients confirmed as HSR to ICM and the diagnostic procedure carried out, looking into those cases confirmed as HSRs to multiple ICMs. For this purpose, we prospectively evaluated patients suggestive of HSRs to ICMs and classified them as IRs or NIRs. STs were carried out using a wide panel of ICMs, and in those with a negative ST, a single-blind, placebo-controlled DPT was performed with the culprit. If ST or DPT were positive, then tolerance was assessed with an alternative ST-negative ICM. We included 101 cases (12 IRs and 89 NIRs) confirmed as allergic. Among them, 36 (35.64%) cases were allergic to more than one ICM (8 IRs and 28 NIRs). The most common ICMs involved were iomeprol and iodixanol. Although not statistically significant, the percentage of patients reporting anaphylaxis was higher in patients allergic to multiple ICMs compared with patients allergic to a single ICM (50 vs. 25%). Likewise, the percentage of positive results in STs was higher in patients allergic to multiple ICMs compared with those allergic to a single ICM (for IR 62.5 vs. 25%, p > 0.05; and for NIR, 85.71 vs. 24.59%, p < 0.001). In cases allergic to more than one ICM, DPT with a negative-ST ICM was positive in more than 60% (24/36) of cases. Therefore, allergy to multiple ICMs is common, associated with severe reactions in IRs, and confirmed frequently by positive STs. The allergological work-up should include DPT not only to establish the diagnosis but also to identify a safe alternative ICM, even if the ICM is structurally unrelated and the ST is negative. More studies are needed to clarify mechanisms underlying cross-reactivity among ICMs. abstract_id: PUBMED:19863909 Immediate and delayed hypersensitivity reactions to iodinated radiographic contrast agents: an update Diagnostic and interventional radiology is nowadays crucial, with an increasing requirement for iodinated contrast agent infusion.
Besides adverse reactions due to the toxicity of iodinated contrast agents, immediate hypersensitivity reactions and reactions resembling delayed hypersensitivity, appearing from 1 hour to several days after administration, have been reported. Patients at high risk of developing such adverse events have to be detected on the basis of their risk factors in order to prevent or limit serious outcomes. Previous reactions to contrast media, asthma, atopy and cardiovascular disorders are risk factors for anaphylactic or anaphylactoid reactions. Female gender, age and beta-blockers increase the severity. This article aims to summarize the risk of allergic reactions related to the use of iodinated contrast agents and to suggest an approach to diagnosis, treatment and prevention for each clinical situation. Answer: Severe reactions to iodinated contrast agents can indeed be due to anaphylaxis. The etiology of severe reactions following the injection of iodinated contrast agents has been a subject of controversy, but there is evidence suggesting that anaphylaxis is responsible for some of these reactions. For instance, a study that adopted an approach based on clinical history, proof of mediator release, and drug-specific IgE antibodies or cutaneous tests for etiologic diagnosis of severe anaphylactoid reactions found that clinical symptoms, biology, and cutaneous tests were consistent with anaphylaxis (PUBMED:11591924). Another study measuring presumed mediators of unexpected severe, immediate reactions to iodinated contrast materials found that histamine release and mast cell triggering were related to severe reactions, and an IgE-related mechanism was strongly suspected (PUBMED:9769830). These findings indicate that anaphylactic mechanisms can be involved in severe reactions to iodinated contrast agents.
Moreover, the literature suggests that immediate reactions to iodinated contrast material have often been attributed to complement activation, although histamine has not been proven responsible for the clinical signs. However, the results described in the literature are controversial, and it has been suggested that mediator release be studied during unexpected severe reactions and that skin tests and IgE measurements be performed in patients with proven mediator release (PUBMED:9657021). This further supports the notion that anaphylaxis, involving the release of mediators like histamine, could be responsible for some severe reactions to iodinated contrast agents. In conclusion, while not all severe reactions to iodinated contrast agents are due to anaphylaxis, there is evidence to suggest that anaphylaxis is responsible for a subset of these reactions, and allergological assessment is recommended for patients who have experienced severe anaphylactoid reactions to confirm the diagnosis and identify the culprit contrast agent (PUBMED:11591924).
Instruction: Is local stiffness, as measured by radio frequency, more sensitive than intima-media thickness? Abstracts: abstract_id: PUBMED:24212290 Is local stiffness, as measured by radio frequency, more sensitive than intima-media thickness? Aim: The aim of our study was to explore the changes in common carotid arterial intima-media thickness (CCA IMT) and local arterial stiffness to evaluate, non-invasively, early vascular disease in patients with cardiovascular (CV) risk factors and "normal" carotid IMT (<0.9 mm). Methods: We studied 50 patients (mean age 42±14 years), with no history of cardio-cerebrovascular events and "normal" IMT. We assessed the traditional cardiovascular risk factors, the CCA IMT, and the local stiffness with Quality Arterial Stiffness technology, based on Radio frequency signal (RFQAS-ESAOTE, Italy). CCA distensibility coefficient (DC), compliance coefficient (CC), pulse wave velocity (PWV) and β parameter were measured in patients, with and without traditional cardiovascular risk factors. 25 subjects with risk factors (mean age 49±13) were compared with 25 controls (mean age 36±12). Results: We did not find any significant differences in the IMT measurement between subjects with CV risk factors and controls (0.530±0.099 mm vs. 0.626±0.127 mm; P=0.568). The mean DC (0.030±0.014 1/kPa vs. 0.0221±0.016 1/kPa; P<0.05) and CC (1.087±0.47 mm²/kPa vs. 0.864±0.41 mm²/kPa; P<0.05) were significantly lower, while PWV (6.21±1.74 m/s vs. 7.68±2.07 m/s; P<0.05) and β (7.67±4.09 vs. 10.45±5.58; P<0.05) were significantly higher in subjects with CV risk factors. ROC curves showed that PWV>6.05 m/s better identified, among patients with IMT <0.9 mm, those with cardiovascular risk factors (sensitivity 82.0%, specificity 62.0%; AUC 0.73). Conclusion: Increased stiffness is a result of changes in both the quantity and the quality of the arterial wall.
Arterial functional changes and distention alterations may herald the onset of vascular disease before manifestation of symptoms or detection of preclinical atherosclerotic lesions. abstract_id: PUBMED:26684310 Evaluation of carotid intima-media thickness and carotid arterial stiffness in children with adenotonsillar hypertrophy. Background: Adenotonsillar hypertrophy can produce cardiopulmonary disease in children. However, it is unclear whether adenotonsillar hypertrophy causes atherosclerosis. This study evaluated carotid intima-media thickness and carotid arterial stiffness in children with adenotonsillar hypertrophy. Methods: The study included 40 children with adenotonsillar hypertrophy (age: 5-10 years) and 36 healthy children with similar age and body mass index. Systolic blood pressure, diastolic blood pressure, and pulse pressure were measured in all subjects. Carotid intima-media thickness, carotid arterial systolic diameter, and carotid arterial diastolic diameter were measured using a high-resolution ultrasound device. Based on these measurements, carotid arterial strain, carotid artery distensibility, beta stiffness index, and elasticity modulus were calculated. Results: Carotid intima-media thickness was greater in children with adenotonsillar hypertrophy (0.36±0.05 mm vs. 0.34±0.04 mm, P=0.02) compared to healthy controls. Beta stiffness index (3.01±1.22 vs. 2.98±0.98, P=0.85), elasticity modulus (231.39±99.23 vs. 226.46±83.20, P=0.88), carotid arterial strain (0.17±0.06 vs. 0.17±0.04, P=0.95), and carotid artery distensibility (13.14±3.88 vs. 12.92±3.84, P=0.75) were similar between children with adenotonsillar hypertrophy and the healthy controls. Conclusions: The present study revealed increased carotid intima-media thickness in children with adenotonsillar hypertrophy. The risk of subclinical atherosclerosis may be higher in children with adenotonsillar hypertrophy.
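The stiffness indices calculated in the study above (carotid arterial strain, carotid artery distensibility, beta stiffness index, elasticity modulus) are conventionally derived from the systolic/diastolic carotid diameters and brachial blood pressures. The abstract does not state its exact formulas, so the sketch below uses the standard textbook definitions as an assumption; the function name and the input values are illustrative only, not data from the study.

```python
import math

def carotid_stiffness_indices(d_sys, d_dia, p_sys, p_dia):
    """Standard carotid stiffness indices (assumed textbook definitions,
    not quoted from the abstract).

    d_sys, d_dia -- systolic/diastolic lumen diameter (mm)
    p_sys, p_dia -- systolic/diastolic blood pressure (mmHg)
    """
    strain = (d_sys - d_dia) / d_dia          # carotid arterial strain (dimensionless)
    beta = math.log(p_sys / p_dia) / strain   # beta stiffness index (dimensionless)
    ep = (p_sys - p_dia) / strain             # Peterson's elastic modulus (mmHg)
    dist = 2 * strain / (p_sys - p_dia)       # distensibility (1/mmHg)
    return {"strain": strain, "beta": beta, "Ep": ep, "distensibility": dist}

# Hypothetical measurements, not patient data from the study:
indices = carotid_stiffness_indices(d_sys=6.6, d_dia=6.0, p_sys=110, p_dia=70)
```

With these hypothetical inputs the strain is 0.10 and Peterson's modulus 400 mmHg; distensibility is often rescaled to kPa⁻¹ for reporting, as in the DC values quoted elsewhere in this section.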
abstract_id: PUBMED:26177274 Increased intima-media thickness is not associated with stiffer arteries in children. Objective: Subclinical atherosclerosis can be assessed via sonographic measurement of intima-media thickness and carotid artery distensibility; both may already be pathologically altered in childhood. Therefore, the purpose of this study was to provide reference percentiles and investigate possible associations between alterations of intima-media thickness and distensibility. Methods: Carotid intima-media thickness and distensibility were measured via B- and M-mode ultrasound. Distensibility was defined by arterial compliance, elastic modulus, stiffness parameter β, and local pulse wave velocity β. Age- and height-dependent reference values were calculated separately for boys and girls among 690 (intima-media thickness) and 870 (distensibility) non-obese children aged 7-17 years. Results: Intima-media thickness and distensibility did not increase significantly with age or differ between boys and girls. Systolic blood pressure and body mass index were independent predictors of intima-media thickness, while an increased systolic blood pressure or pulse pressure was associated with stiffer arteries. Increased intima-media thickness was accompanied by higher arterial compliance and lower stiffness. Conclusion: Using this healthy cohort, we describe a functional and non-pathological arterial adaptation wherein an increase in intima-media thickness is not associated with stiffer arteries. abstract_id: PUBMED:35637577 Ultra-high frequency ultrasound delineated changes in carotid and muscular artery intima-media and adventitia thickness in obese early middle-aged women. Obesity is linked to increased arterial size, carotid intima-media thickness and arterial stiffness. The effects of obesity and body composition on muscular artery intima-media and adventitia thickness have previously not been established.
The aim of this study was to explore associations of carotid and muscular artery wall layer thickness with body composition and cardiovascular risk factors in early middle-aged women. This is a cross-sectional study including 199 women aged 40±4 years. Arterial lumen (LD), intima-media (IMT) and adventitia thickness (AT) were measured from carotid, brachial and radial arteries using ultra-high frequency ultrasound (22-71 MHz). Women with obesity had increased IMT in carotid (0.47 vs 0.45 mm), brachial (0.19 vs 0.17 mm) and radial arteries (0.16 vs 0.15 mm) and increased brachial AT (0.14 vs 0.13 mm). In multiple regression models, all arterial LD (β-range 0.02-0.03 mm/kg/m2), IMT (β-range 0.91-3.37 µm/kg/m2), and AT (β-range 0.73-1.38 µm/kg/m2) were significantly associated with BMI. The IMT of all arteries was significantly associated with systolic blood pressure (β-range 0.36-0.85 µm/mmHg), attenuating the association between IMT and BMI (β-range 0.18-2.24 µm/kg/m2). Obese early middle-aged women have increased arterial intima-media thickness and brachial artery adventitia thickness compared to non-obese counterparts. The association between BMI and intima-media thickness is partly mediated through blood pressure levels. abstract_id: PUBMED:25637396 Carotid intima-media thickness in Congolese hypertensive patients in South Kivu Objective: The present work aims to establish the frequency, determinants, and influence of increased carotid intima-media thickness on the assessment of cardiovascular risk in hypertensive Congolese subjects. Methodology: The carotid intima-media thickness was systematically measured using B-mode ultrasound in 254 hypertensive patients aged ≥25 years, all native of South Kivu Province. The association between carotid intima-media thickness and various assumed risk factors was modeled in linear and logistic regressions.
Results: In the entire study population, 28% had an increased carotid intima-media thickness (≥0.9 mm) and 22.4% a plaque (>1.2 mm). Carotid intima-media thickness was significantly correlated only with age (partial r=0.289, P<0.0001) and duration of hypertension (partial r=0.154, P=0.01). In addition, the increase in carotid intima-media thickness was associated with an increased global cardiovascular risk according to the European and Framingham criteria, respectively, and, independently, with the risk of stroke [fourth cIMT quartile adjusted OR=5.4 (95% CI: 1.6 to 18.3), P=0.0007]. However, carotid intima-media thickness did not significantly influence the cardiovascular risk categorization (P>0.05). Conclusion: The increase in carotid intima-media thickness in hypertensive Congolese subjects would be a marker of arterial remodeling associated with a long history of uncontrolled hypertension rather than early atherosclerosis. Its systematic detection would pinpoint those at risk of stroke and optimize their treatment. abstract_id: PUBMED:25882083 Changes in arterial stiffness, carotid intima-media thickness, and epicardial fat after L-thyroxine replacement therapy in hypothyroidism. Objective: To assess the relationship between primary hypothyroidism and subclinical atherosclerosis and its potential changes with L-thyroxine replacement therapy. Methods: A prospective cohort study including 101 patients with primary hypothyroidism and 101 euthyroid patients as controls was conducted from July 2011 to December 2013. Clinical, anthropometrical, biochemical, and ultrasonographic parameters were assessed at baseline and after one year of L-thyroxine replacement therapy. Results: At baseline, hypothyroid patients had significantly greater values of blood pressure, total cholesterol, VLDL cholesterol, left ventricular mass, epicardial fat, and carotid intima-media thickness as compared to controls.
Total cholesterol, VLDL cholesterol, ventricular diastolic function, epicardial fat, carotid intima-media thickness, carotid local pulse wave velocity, pressure strain elastic modulus, and β arterial stiffness index showed a significant and positive correlation with TSH levels. After one year of replacement therapy, patients with hypothyroidism showed changes in total cholesterol, VLDL cholesterol, TSH, carotid intima-media thickness, and arterial stiffness parameters. Conclusions: Primary hypothyroidism is characterized by an increased cardiovascular risk. In these patients, L-thyroxine replacement therapy for one year is related to decreased dyslipidemia and improvement in markers of subclinical carotid atherosclerosis. abstract_id: PUBMED:22852004 Carotid intima-media thickness and arterial stiffness in type 1 diabetic patients with and without microangiopathy. Introduction: The aim of the study was to assess carotid intima-media thickness (CIMT) as a subclinical marker of atherosclerosis and arterial stiffness in type 1 diabetic patients in relation to microangiopathy. Material And Methods: We included 87 type 1 diabetic patients (44 women, 43 men), median age 34 years (interquartile range [IQR] 29-43), median disease duration 10 years (IQR: 9-14), mean ± standard deviation (SD) glycated haemoglobin (HbA(1c)) 8.4 ±1.4%. Fifty patients had at least one microangiopathic complication. Intima-media thickness (IMT) of the common carotid artery was measured using high resolution ultrasonography. Arterial stiffness was assessed using digital volume pulse analysis and tonometric measurement of wave reflection and central haemodynamics. Results: Subjects with microangiopathy compared with those without had higher values of CIMT (median [IQR]: 0.53 mm [0.45-0.60 mm] vs 0.47 mm [0.34-0.52 mm], p = 0.002), higher central augmentation index (CAI(x)) (mean ± SD: 120.2 ±19.4% vs. 110.5 ±17.1%, p = 0.016) and higher peripheral augmentation index (PAI(x)) (65.7 ±18.1% vs.
57.2 ±14.9%, p = 0.023). In the logistic regression analysis, the duration of diabetes, systolic and diastolic blood pressure, postprandial glycaemia, HbA(1c) and triglycerides predicted the presence of diabetic microangiopathy independently of age and sex. The CIMT, CAI(x) and PAI(x) were associated with the presence of diabetic microangiopathy only in the univariate model. Conclusions: In type 1 diabetic patients with microangiopathic complications, increased carotid IMT and arterial stiffness were observed. The study confirms the role of traditional risk factors for late diabetic complications, such as the duration of the disease and metabolic control in the development of microangiopathy. abstract_id: PUBMED:25990366 Ambulatory arterial stiffness index and carotid intima-media thickness in hypertensive rheumatoid patients: a comparative cross-sectional study. Aim: Rheumatoid arthritis is associated with accelerated atherosclerosis. However, little is known about preclinical atherosclerosis in hypertensive rheumatoid arthritis patients. In this cross-sectional study we assessed the expression of preclinical atherosclerosis in hypertensive rheumatoid arthritis patients in comparison with matched hypertensive non-rheumatoid arthritis patients. Methods: The study included 52 hypertensive rheumatoid arthritis patients and 42 hypertensive non-rheumatoid arthritis patients. The patients were extensively examined clinically and laboratory tested. The expression of preclinical atherosclerosis was estimated by assessing ambulatory arterial stiffness index and common carotid intima-media thickness. Results: Arterial stiffness index and common carotid intima-media thickness were higher in hypertensive rheumatoid arthritis patients than in hypertensive non-rheumatoid arthritis patients. 
Neither the arterial stiffness index nor common carotid intima-media thickness correlated with markers of inflammation or disease activity in hypertensive rheumatoid arthritis patients. Conclusion: The expression of subclinical atherosclerosis is more pronounced in hypertensive rheumatoid arthritis than in hypertensive non-rheumatoid arthritis patients. abstract_id: PUBMED:24670446 Ultrasonic radio frequency analysis technique for noninvasive evaluation of carotid artery intima-media thickness and elasticity in patients with dyslipidemia Objective: To assess the value of the ultrasonic radio frequency analysis technique for noninvasive evaluation of carotid artery intima-media thickness (IMT) and elasticity in patients with dyslipidemia. Methods: Radio frequency quality intima-media thickness (RFQIMT) technique and radio frequency quality arterial stiffness ((RF)QAS) technique were used for measurement of IMT and pulse wave velocity (PWV) of the left carotid artery in 86 patients with dyslipidemia and 60 healthy volunteers. Results: In addition to the patients' age, body mass index, systolic blood pressure, diastolic blood pressure, pulse pressure, total cholesterol, triglyceride, low density lipoprotein, high density lipoprotein, and fasting blood glucose, IMT and PWV differed significantly between the control and case groups. IMT was positively correlated with PWV (r=0.521, P<0.05). In patients with IMT<1.0 mm (n=81), PWV differed significantly between patients with different numbers of other atherogenic risk factors (P=0.004), but IMT showed no such variation (P=0.079). In patients with more than 3 and those with 2 other risk factors, PWV increased significantly as compared with those with one risk factor (P=0.002). Multiple linear regression analysis showed that systolic blood pressure, age and fasting blood glucose were independently associated with PWV.
Conclusion: Ultrasonic radio frequency analysis technique allows accurate measurement of IMT and arterial elasticity, and may thus become a valuable method for evaluating early structural and arterial functional change of the carotid artery. abstract_id: PUBMED:38214274 Arterial Stiffness and Carotid Intima-Media Thickness in Children Exposed to Smokeless Tobacco in Fetal Life. Background: Arterial stiffening and increased intima-media thickness can be seen as early as childhood and are associated with increased risk of cardiovascular events in adult life. The authors hypothesized that exposure to prenatal smokeless tobacco (Swedish snus) without additional nicotine exposure after the breastfeeding period would be associated with increased arterial stiffness and intima-media thickening in preschool children. Methods And Results: This was a longitudinal follow-up cohort study of children aged 5 to 6 years exposed to high doses of nicotine in utero. Women exclusively using snus and unexposed controls were enrolled in early pregnancy (gestational age range, 6-12 weeks). Exposure data were collected during and after pregnancy with questionnaires from both groups. For this study, only children of women using >48 mg nicotine per day during their entire pregnancy were included in the exposure group. Outcomes were determined in 40 healthy children (21 exposed to snus in utero). Ultrasonography of the common carotid artery was used to determine carotid intima-media thickness and calculate arterial stiffness index from the relationship between pulsatile changes in arterial diameter and arterial pressure. Children exposed to snus in fetal life had higher carotid stiffness (median 4.1 [interquartile range (IQR), 2.4-5] versus 2.9 [IQR, 2.1-3.5]; P=0.014) than tobacco-free controls. Carotid strain (relative diameter change) was lower in children exposed to snus (mean 16% [SD, 5.7%] versus 21% [SD, 6.6%]) than in controls (P=0.015). 
Carotid intima-media thickness did not differ significantly between children exposed to snus and controls. Conclusions: Exposure to snus during fetal life was associated with a stiffer carotid artery in preschool children. Answer: Local stiffness, as measured by radio frequency, appears to be more sensitive than intima-media thickness (IMT) in detecting early vascular changes associated with cardiovascular risk factors. A study exploring changes in common carotid arterial intima-media thickness (CCA IMT) and local arterial stiffness found that while there were no significant differences in IMT measurements between subjects with cardiovascular risk factors and controls, there were significant differences in local stiffness parameters. Distensibility coefficient (DC), compliance coefficient (CC), pulse wave velocity (PWV), and β parameter were significantly altered in subjects with cardiovascular risk factors, indicating that arterial functional changes and distention alterations may occur before the manifestation of symptoms or detection of preclinical atherosclerotic lesions (PUBMED:24212290). Furthermore, the study demonstrated that PWV greater than 6.05 m/s could better identify patients with IMT less than 0.9 mm who had cardiovascular risk factors, with a sensitivity of 82.0% and specificity of 62.0%. This suggests that local arterial stiffness measurements, such as PWV, could serve as a more sensitive indicator for early vascular disease in patients with "normal" IMT (PUBMED:24212290). In contrast, another study found that increased intima-media thickness was not associated with stiffer arteries in children, indicating that an increase in IMT does not necessarily reflect increased arterial stiffness (PUBMED:26177274). This further supports the notion that local stiffness measurements may provide additional information beyond what IMT alone can offer. 
Overall, these findings suggest that local stiffness measurements, as assessed by radio frequency technology, may be more sensitive than IMT in detecting early vascular changes, particularly in individuals with cardiovascular risk factors who have not yet developed overt atherosclerotic lesions (PUBMED:24212290).
Instruction: Do prostate size and urinary flow rates predict health care-seeking behavior for urinary symptoms in men? Abstracts: abstract_id: PUBMED:7529448 Do prostate size and urinary flow rates predict health care-seeking behavior for urinary symptoms in men? Objectives: To estimate the association between health care-seeking behavior for urinary dysfunction and clinical, physiologic, and anatomic measures of disease. Methods: A randomly selected sample (n = 475) of men aged 40 to 79 years from Olmsted County, Minnesota, was administered a previously validated questionnaire that assessed the frequency of and bother associated with urinary symptoms and health care-seeking behavior in the past year. Peak urinary flow rates were measured with a standard urometer and prostatic volume was determined by transrectal ultrasound. Results: Overall, 21 of the 475 men (4%) had seen a doctor in the past year for urinary symptoms. Men with moderate to severe symptoms (American Urological Association [AUA] Symptom Scores > 7) were 3.4 times as likely (95% confidence interval [CI] = 1.4, 8.3) to have sought medical care in the past year as men with none to mild symptoms. Men with enlarged prostates (> 40 mL) were 3.9 times as likely to have sought health care (95% CI = 1.6, 9.6), whereas men with depressed peak urine flow rates (< 10 mL/s) were only slightly more likely to have sought health care for urinary symptoms (odds ratio = 2.1, 95% CI = 0.7, 6.5). Overall, 76% of men who had sought medical care had prostatic enlargement, depressed peak urine flow rates, or moderate-severe symptoms (sensitivity). In contrast, only 55% of men who did not seek health care for urinary symptoms in the past year had mild symptoms, normal prostatic volume, and normal peak urine flow rates (specificity). 
Conclusions: These data suggest that clinical, physiologic, and anatomic measures of prostatism do not adequately distinguish the men who seek medical care for their urinary symptoms from those who do not. There remain some factor(s) that apparently lead some men with minor disease to seek care and that prevent men with measurable disease from seeking care. abstract_id: PUBMED:16632189 Determinants of seeking of primary care for lower urinary tract symptoms: the Krimpen study in community-dwelling men. Objectives: To determine which factors predict seeking of primary care among men with lower urinary tract symptoms (LUTS). Methods: A longitudinal, population-based study with a follow-up period of 6.5 yr was conducted among 1688 men aged 50-78 yr. The following data were collected: prostate volume (using transrectal ultrasonometry), urinary flow rate, ultrasound-estimated postvoid residual urine volume, generic and disease-specific quality of life (QOL), and symptom severity (International Prostate Symptom Score [IPSS]). Information on the seeking of primary care by all participants during 2 yr of follow-up was collected from the general practitioner's (GP) record using a computerised search engine and a manual check of electronically selected files. Results: Prostate volume, postvoid residual volume, IPSS, and social generic QOL are important determinants of first GP consultation in men with LUTS. Measurements (physical urologic parameters) and self-reported items (symptom severity and QOL) contribute almost equally to GP consultation in these men. Conclusions: Both measurements of prostate volume and postvoid residual urine volume and self-reported information about symptoms or QOL can help to select those who will benefit from medical care and to reassure those men not likely to need help in the near future. 
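The sensitivity and specificity figures quoted in the Olmsted County abstract above (76% and 55%), and its odds ratios, are simple functions of a 2x2 table of care-seeking versus abnormal findings. The abstract reports only the derived percentages, not the raw cell counts, so the counts below are hypothetical values chosen solely to illustrate the arithmetic (they approximately reproduce the reported percentages for the 21 care-seekers and 454 non-seekers):

```python
def two_by_two_summary(tp, fn, fp, tn):
    """Sensitivity, specificity, and odds ratio from a 2x2 table.

    tp -- care-seekers with abnormal findings
    fn -- care-seekers with normal findings
    fp -- non-seekers with abnormal findings
    tn -- non-seekers with normal findings
    (labels and counts are illustrative, not from the abstract)
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    odds_ratio = (tp * tn) / (fn * fp)
    return sensitivity, specificity, odds_ratio

# 21 care-seekers and 454 non-seekers, cells chosen to match ~76%/55%:
sens, spec, or_ = two_by_two_summary(tp=16, fn=5, fp=204, tn=250)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, OR={or_:.1f}")
# prints: sensitivity=76%, specificity=55%, OR=3.9
```

Note how modest both figures are with these plausible counts, which is exactly the abstract's point: the clinical measures separate care-seekers from non-seekers only weakly.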
abstract_id: PUBMED:18583022 Tolterodine extended release with or without tamsulosin in men with lower urinary tract symptoms including overactive bladder symptoms: effects of prostate size. Background: Some men with lower urinary tract symptoms (LUTS) including overactive bladder (OAB) symptoms may benefit from antimuscarinic therapy, with or without an alpha-adrenergic antagonist. Objectives: To evaluate the safety and efficacy of tolterodine extended release (ER), tamsulosin, or tolterodine ER+tamsulosin in men meeting symptom entry criteria for OAB and prostatic enlargement trials, stratified by prostate size. Design, Setting, And Participants: Subjects with an International Prostate Symptom Score (IPSS) ≥12; frequency and urgency, with or without urgency urinary incontinence; postvoid residual volume (PVR) <200 mL; and maximum urinary flow rate (Qmax) >5 mL/s were randomized to receive placebo, tolterodine ER (4 mg), tamsulosin (0.4 mg), or tolterodine ER+tamsulosin for 12 wk. Data were stratified by median baseline prostate volume (<29 mL vs ≥29 mL). Measurements: Endpoints included week 12 changes in bladder diary variables, IPSS scores, and safety variables. Results And Limitations: Among men with larger prostates, tolterodine ER+tamsulosin significantly improved frequency (p=0.001); urgency (p=0.006); and IPSS total (p=0.001), storage (p<0.001), and voiding scores (p<0.013). Tamsulosin significantly improved IPSS voiding scores (p=0.030). Among men with smaller prostates, tolterodine ER significantly improved frequency (p=0.016), UUI episodes (p=0.036), and IPSS storage scores (p=0.005). Tolterodine ER+tamsulosin significantly improved frequency (p=0.001) and IPSS storage scores (p=0.018). Tamsulosin significantly improved nocturnal frequency (p=0.038) and IPSS voiding (p=0.036) and total scores (p=0.044).
There were no clinically or statistically significant changes in Q(max) or PVR; incidence of acute urinary retention (AUR) was low in all groups (≤2%). Conclusions: Men with smaller prostates and moderate-to-severe LUTS including OAB symptoms benefited from tolterodine ER. Therapy with tolterodine ER+tamsulosin was effective regardless of prostate size. Tolterodine ER, with or without tamsulosin, was well tolerated and not associated with increased incidence of AUR. abstract_id: PUBMED:25807886 Clinical significance of peripheral zone thickness in men with lower urinary tract symptoms/benign prostatic hyperplasia. Objective: To evaluate the clinical impact of prostate peripheral zone thickness (PZT), based on presumed circle area ratio (PCAR) theory, on urinary symptoms in men with lower urinary tract symptoms/benign prostatic hyperplasia (LUTS/BPH) as a novel prostate parameter. Patients And Methods: Medical records were obtained from a prospective database of first-visit men with LUTS/BPH. Age, international prostate symptom score (IPSS), overactive bladder symptom score (OABSS), maximum urinary flow rate (Qmax), and post-void residual urine volume (PVR) were assessed. Total prostate volume (TPV), transition zone volume (TZV), transition zone index (TZI), and PZT were measured by transrectal ultrasonography. Reliability analysis was also performed. Results: In all, 1009 patients were enrolled for the analysis. The mean (sd) PZT was 11.10 (2.50) mm, and patients were classified into three PZT thickness groups: PZT <9.5 mm, ≥9.5 to <13 mm, and ≥13 mm. As the PZT became smaller, all urinary symptom scores including IPSS, quality of life (QoL), and OABSS significantly increased. Uroflowmetry variables, such as Qmax and PVR, also showed significant differences. PZT showed a high intra-class correlation coefficient (0.896).
Multivariate analysis revealed that the PZT was independently associated with IPSS (P < 0.001), QoL (P = 0.003), OABSS (P = 0.001), and PVR (P = 0.001), but PZT influence on Qmax was only of borderline significance (P = 0.055). Conclusion: PZT is a novel, easy-to-measure prostate parameter that is significantly associated with urinary symptoms. Our present findings suggest that the clinical usefulness of PZT should be further validated for managing men with LUTS/BPH. abstract_id: PUBMED:32622397 Thulium laser transurethral vaporesection of the prostate versus transurethral resection of the prostate for men with lower urinary tract symptoms or urinary retention (UNBLOCS): a randomised controlled trial. Background: Transurethral resection of the prostate (TURP) is the standard operation for benign prostatic obstruction. Thulium laser transurethral vaporesection of the prostate (ThuVARP) is a technique with suggested advantages over TURP, including reduced complications and hospital stay. We aimed to investigate TURP versus ThuVARP in men with lower urinary tract symptoms or urinary retention secondary to benign prostatic obstruction. Methods: In this randomised, blinded, parallel-group, pragmatic equivalence trial, men in seven UK hospitals with bothersome lower urinary tract symptoms or urinary retention secondary to benign prostatic obstruction were randomly assigned (1:1) at the point of surgery to receive ThuVARP or TURP. Patients were masked until follow-up completion. Centres used their usual TURP procedure (monopolar or bipolar). All trial surgeons underwent training on the ThuVARP technique. Co-primary outcomes were maximum urinary flow rate (Qmax) and International Prostate Symptom Score (IPSS) at 12 months post-surgery. Equivalence was defined as a difference of 2·5 points or less for IPSS and 4 mL per s or less for Qmax. Analysis was done according to the intention-to-treat principle. The trial is registered with the ISRCTN Registry, ISRCTN00788389.
Findings: Between July 23, 2014, and Dec 30, 2016, 410 men were randomly assigned to ThuVARP or TURP, 205 per study group. TURP was superior for Qmax (mean 23·2 mL per s for TURP and 20·2 mL per s for ThuVARP; adjusted difference in means -3·12, 95% CI -5·79 to -0·45). Equivalence was shown for IPSS (mean 6·3 for TURP and 6·4 for ThuVARP; adjusted difference in means 0·28, -0·92 to 1·49). Mean hospital stay was 48 h in both study groups. 91 (45%) of 204 patients in the TURP group and 96 (47%) of 203 patients in the ThuVARP group had at least one complication. Interpretation: TURP and ThuVARP were equivalent for urinary symptom improvement (IPSS) 12 months post-surgery, and TURP was superior for Qmax. Anticipated laser benefits for ThuVARP of reduced hospital stay and complications were not observed. Funding: UK National Institute for Health Research Health Technology Assessment Programme. abstract_id: PUBMED:35125344 Influence of Prostate Artery Embolization on Different Qualities of Lower Urinary Tract Symptoms Due to Benign Prostatic Obstruction. Background: Prostate artery embolization (PAE) is an increasingly used minimally invasive treatment for lower urinary tract symptoms secondary to benign prostatic obstruction (BPO). Objective: To analyze the impact of PAE on voiding and storage symptoms. Design, Setting, And Participants: Between July 2014 and May 2019, 351 consecutive men with BPO who underwent PAE were included in a single-center study. Intervention: PAE is an interventional radiological procedure embolizing the prostatic arteries with microspheres. Outcome Measurements And Statistical Analysis: The primary endpoint represented assessment of the International Prostatic Symptom Score (IPSS) at baseline and at 1, 3, 6, 12, and 24 mo after PAE.
Secondary endpoints comprised assessment of IPSS quality of life (QoL), International Index of Erectile Function, peak urinary flow rate, postvoid residual volume, prostate volume, and prostate-specific antigen at the same time points. Data were analyzed using standard statistical methods, generalized estimating equations (symptom improvement over time as odds ratios), and McNemar-Bowker test (degree of improvement compared between symptoms). Results And Limitations: Clinical success rates for PAE were 68%, 73%, and 66% at 1, 12, and 24 mo, respectively. The median IPSS improved significantly from 22 to 10 points after 2 yr (p < 0.001). Storage (-50%) and voiding (-58%) symptoms improved similarly (each p < 0.001), with nocturia decreasing least frequently but significantly (p < 0.001). After 1 and 2 yr, 35% (95% confidence interval [CI] 29-41%) and 30% (95% CI 21-40%) of patients reported alleviated storage, and 39% (95% CI 33-45%) and 38% (95% CI 29-49%) reported alleviated voiding symptoms, respectively. QoL improved from 5 to 2 points (p < 0.001). The main limitation is the number of patients lost during follow-up. Conclusions: PAE significantly improved voiding and storage symptoms to a similar extent. This study may aid in counseling patients about this minimally invasive BPO treatment. Patient Summary: Prostate artery embolization (PAE) is a minimally invasive treatment option for patients with voiding and storage symptoms from benign prostate enlargement. Our analysis shows that PAE improves relevant lower urinary tract symptoms. abstract_id: PUBMED:29135135 Ultrasound indicators of the prostate, urinary bladder and uroflowmetry parameters in the diagnosis of infravesical obstruction in men with benign prostatic hyperplasia. Relevance: Benign prostatic hyperplasia (BPH) is one of the most common urologic diseases of males.
In patients who failed drug therapy of lower urinary tract symptoms (LUTS) caused by BPH, the most important indication for surgery is the presence of infravesical obstruction (IVO). IVO is detected by pressure-flow studies and is characterized by high detrusor pressure with decreased urinary flow rate. The invasiveness and high cost of this investigation have stimulated a search for non-invasive techniques that could reliably characterize the presence of IVO secondary to BPH. Aim: To determine the value of ultrasound indicators of the prostate, urinary bladder and uroflowmetry parameters in the diagnosis of IVO in men with BPH. Materials And Methods: Seventy-six men with moderate and severe LUTS secondary to BPH underwent a comprehensive urological examination, including a clinical history, digital rectal examination, International Prostate Symptom Score (I-PSS), serum prostate-specific antigen (PSA), various ultrasound indicators of the prostate and urinary bladder, uroflowmetry and a pressure/flow study. Results: Infravesical obstruction (IVO) was detected in 73.1% of men with BPH. IVO was found to have the strongest correlation with ultrasound signs of the intravesical prostatic protrusion (IPP; r = 0.667, p < 0.05) compared with the prostate volume, prostate transition zone volume, prostate transition zone index, prostatic urethral angle, urethral length of the transition zone, estimated prostate circumference, prostate peripheral zone thickness, residual urine volume, weight of urinary bladder, the thickness of the detrusor, maximum and average urine flow rates and I-PSS score. The incidence of IVO increased in parallel with the increase of IPP. We established a cut-off value for IPP of 10 mm for the diagnosis of IVO with a sensitivity of 68.2%, a specificity of 100%, a positive predictive value of 100%, and a negative predictive value of 53.3%.
Conclusion: The ultrasound finding of an IPP measuring more than 10 mm correlates strongly with IVO in men with LUTS and BPH, as compared with other ultrasound parameters of the prostate, urinary bladder and urinary flow rates. The IPP is an anatomical parameter that does not require urination and is easily detected by both transabdominal and transrectal ultrasound. However, at IPP values less than 10 mm, the main method for detecting IVO remains the pressure/flow study. abstract_id: PUBMED:30168300 Postmicturition Dribble Is Associated with Erectile Dysfunction in Middle-Aged and Older Men with Lower Urinary Tract Symptoms. Purpose: Lower urinary tract symptoms (LUTS) are correlated with erectile dysfunction (ED), but research on whether postmicturition dribble (PMD) is related to ED is limited. We assessed the correlation between PMD and ED in middle-aged and older Korean men with LUTS. Materials And Methods: In our prospective, cross-sectional, observational, multicenter study, we enrolled 205 men (age >40 years) with LUTS. LUTS and ED were assessed using the International Prostate Symptom Score (IPSS) and International Index of Erectile Function-5 (IIEF-5), respectively. PMD was assessed using the Hallym PostMicturition Dribble Questionnaire, Question 1, and the PMD volume was calculated in a paper test. Age, prostate volume, serum prostate-specific antigen, maximum urinary flow rate, and postvoid residual urine were also evaluated. Results: There were significant differences in the total and voiding IPSS between men with and without ED (p=0.042 and 0.043, respectively). The Hallym PostMicturition Dribble Questionnaire 1 score was inversely correlated with the IIEF-5 score (r=-0.388, p<0.001). Also, the PMD volume was inversely correlated with the IIEF-5 score (r=-0.138, p=0.042). ED prevalence increased as the Hallym PostMicturition Dribble Questionnaire 1 score increased (p=0.002).
Further, incorporating the Hallym PostMicturition Dribble Questionnaire 1 into the IPSS tended to increase the predictive accuracy of LUTS by 4.2% in ED patients (p=0.082). Conclusions: PMD was significantly correlated with ED and reinforced the relationship between LUTS and ED in middle-aged and older men. PMD might be an important component of the association between LUTS and ED. abstract_id: PUBMED:25733282 Is new visual prostate symptom score useful as International Prostate Symptom Score in the evaluation of men with lower urinary tract symptoms? A prospective comparison of 2 symptom scores in Turkish society. Objective: To evaluate the correlation between the visual prostate symptom score (VPSS) and the International Prostate Symptom Score (IPSS) and uroflowmetry parameters in Turkish men with lower urinary tract symptoms. Material: Between September 2013 and March 2014, a total of 191 patients who were admitted to the hospital with lower urinary tract symptoms were enrolled in the study. After the routine tests were performed, the patients were requested to fill the IPSS questionnaire consisting of 8 questions and the VPSS questionnaire consisting of 4 questions. Peak (Qmax) and average (Qave) urinary flow rates were measured. Results: Mean age value was 62.8 years (range, 43-84 years). The education level was divided into 3 groups as follows: groups 1, 2, and 3 consisting of 98 patients (51.3%) with formal schooling (1-8 years education), 61 patients (31.9%) with 9-12 years education, and 32 patients (16.8%) with college or university education, respectively. There was a positive correlation between the total IPSS and the total VPSS (r = 0.72; P <.001). There was a negative correlation between the Qmax values and both the total IPSS (r = -0.53; P <.001) and the total VPSS (r = -0.3; P <.01), separately. 
The questionnaire was completed without any assistance by 25 (25.5%), 33 (54.1%), and 27 (84.4%) men for the IPSS and by 76 (77.6%), 54 (88.5%), and 31 (96.9%) men for the VPSS in groups 1, 2, and 3, respectively. Conclusion: The VPSS correlates significantly with the IPSS and can be reliably used in the elderly and low-educated patients for the assessment of symptom severity in men with lower urinary tract symptoms. abstract_id: PUBMED:26663649 Men with Hypertension are More Likely to Have Severe Lower Urinary Tract Symptoms and Large Prostate Volume. Objective: Patients with lower urinary tract symptoms (LUTS) have a higher prevalence of cardiovascular disease. We evaluated the correlation between LUTS and cardiovascular risk factors in men presenting with benign prostatic hyperplasia (BPH). Methods: We retrospectively reviewed the medical records of 295 men who underwent transurethral resection of the prostate for the treatment of BPH and LUTS. Risk factors for cardiovascular disease included: hypertension, diabetes mellitus (DM), smoking, and dyslipidemia. The severity of LUTS measured by the International Prostatic Symptom Score (IPSS), prostate volume, prostate specific antigen (PSA), maximal urinary flow rate (Qmax), and postvoid residual urine (PVR) in subjects with or without cardiovascular risk factors were compared. Results: IPSS-total (22.9 ± 7.8 vs. 21.2 ± 7.3, P = 0.01) and obstructive symptom score (13.3 ± 5.2 vs. 11.9 ± 4.7, P = 0.01) were significantly different between men with hypertension and without cardiovascular risk factors. There was no significant difference in variables between subjects with DM, smoking or dyslipidemia and without cardiovascular risk factors.
In the Pearson correlation, the systolic and diastolic blood pressure (BP) were correlated with prostate volume (r = 0.138, P = 0.040; r = 0.163, P = 0.020), IPSS-total (r = 0.139, P = 0.043; r = 0.138, P = 0.043), and obstructive symptom score (r = 0.168, P = 0.014; r = 0.143, P = 0.037), respectively. Conclusions: Men with hypertension are more likely to have a higher IPSS and large prostate volume than men without hypertension. This finding suggests a pathophysiological association between hypertension and LUTS and highlights the need to manage comorbid symptoms simultaneously. Answer: Prostate size and urinary flow rates are factors that can influence health care-seeking behavior for urinary symptoms in men, but they do not consistently predict whether men will seek medical attention for these issues. According to a study conducted in Olmsted County, Minnesota, men with moderate to severe urinary symptoms or enlarged prostates were more likely to have sought medical care in the past year compared to men with none to mild symptoms or normal prostatic volume. However, men with depressed peak urine flow rates were only slightly more likely to have sought health care for urinary symptoms. The study concluded that clinical, physiologic, and anatomic measures of prostatism do not adequately distinguish men who seek medical care for their urinary symptoms from those who do not, suggesting that other factors may influence the decision to seek care (PUBMED:7529448). Another study, the Krimpen study, found that prostate volume, postvoid residual volume, symptom severity, and social generic quality of life are important determinants of first general practitioner consultation in men with lower urinary tract symptoms (LUTS). This study indicated that both physical urologic parameters and self-reported items contribute almost equally to general practitioner consultation in these men (PUBMED:16632189).
In summary, while prostate size and urinary flow rates are associated with health care-seeking behavior for urinary symptoms in men, they are not definitive predictors. Other factors, including symptom severity and quality of life, also play significant roles in influencing whether men seek medical attention for urinary symptoms.
Instruction: Computed tomography hepatic arteriography has a hepatic falciform artery detection rate that is much higher than that of digital subtraction angiography and 99mTc-MAA SPECT/CT: implications for planning 90Y radioembolization? Abstracts: abstract_id: PUBMED:11747982 Malondialdehyde-acetaldehyde-protein adducts increase secretion of chemokines by rat hepatic stellate cells. Findings obtained from our recent studies have demonstrated that malondialdehyde, a product of lipid peroxidation, and acetaldehyde can react together with proteins in a synergistic manner and form hybrid protein conjugates, which have been designated as malondialdehyde-acetaldehyde (MAA)-protein adducts. These adducts have been detected in livers of ethanol-fed rats and are immunogenic because significant increases in circulating antibody titers against MAA-adducted proteins have been observed in ethanol-fed rats and more recently in human alcoholics. Although immunological factors may tend to perpetuate liver injury, little is known about the direct functional consequences of MAA-adducted proteins on the different cellular populations of the liver. Hepatic stellate cells (HSCs) have been shown to be pivotal in the pathogenesis of fibrosis and in the amplification and self-perpetuation of the inflammatory process. The present study was conducted to determine the effects of MAA-adducted proteins on the function of HSCs. Rat HSCs were exposed to various amounts of MAA-protein adducts and their unmodified controls, and the secretion of two chemokines, monocyte chemoattractant protein (MCP)-1 and macrophage inflammatory protein (MIP)-2, that are involved in the chemotaxis of monocytes/macrophages and neutrophils, respectively, was determined. We observed that bovine serum albumin-MAA induced a dose- and time-dependent increase in the secretion of both of these chemokines. 
These findings indicate that MAA-adducted proteins may play a role in the modulation of the hepatic inflammatory response and could contribute to the pathogenesis of alcoholic liver disease. abstract_id: PUBMED:3622556 Immunoscintigraphy with 99mTc labelled F(ab')2 fragments of an anti melanoma monoclonal antibody (225.28S) in patients with metastatic malignant melanoma. Immunoscintigraphy was performed in 25 patients with malignant melanoma using a 99mTc labelled monoclonal antibody (225.28S). In four cases, imaging was repeated following treatment with recombinant gamma interferon. In 11 cases, tissue samples of metastatic lesions were investigated for the expression of melanoma associated antigens by immunohistochemical techniques. Overall, 59% of known lesions were detected by imaging. The highest rates of detection were in bone, liver and lymph nodes and the lowest in lung, stomach and bowel. Six lesions were detected which had not shown up on routine clinical and radiological examination. There was no evidence of enhanced or induced HMW-MAA expression following interferon treatment. This technique has been found to be useful in staging patients and has been free from side effects. abstract_id: PUBMED:25210746 Unique antibody responses to malondialdehyde-acetaldehyde (MAA)-protein adducts predict coronary artery disease. Malondialdehyde-acetaldehyde adducts (MAA) have been implicated in atherosclerosis. The purpose of this study was to investigate the role of MAA in atherosclerotic disease. Serum samples from controls (n = 82) and patients with; non-obstructive coronary artery disease (CAD), (n = 40), acute myocardial infarction (AMI) (n = 42), or coronary artery bypass graft (CABG) surgery due to obstructive multi-vessel CAD (n = 72), were collected and tested for antibody isotypes to MAA-modifed human serum albumin (MAA-HSA). CAD patients had elevated relative levels of IgG and IgA anti-MAA, compared to control patients (p<0.001). 
AMI patients had significantly increased relative levels of circulating IgG anti-MAA-HSA antibodies as compared to stable angina (p<0.03) or CABG patients (p<0.003). CABG patients had significantly increased relative levels of circulating IgA anti-MAA-HSA antibodies as compared to non-obstructive CAD (p<0.001) and AMI patients (p<0.001). Additionally, MAA-modified proteins were detected in the tissue of human AMI lesions. In conclusion, the IgM, IgG and IgA anti-MAA-HSA antibody isotypes are differentially and significantly associated with non-obstructive CAD, AMI, or obstructive multi-vessel CAD and may serve as biomarkers of atherosclerotic disease. abstract_id: PUBMED:35732827 Serum anti-malondialdehyde-acetaldehyde IgA antibody concentration improves prediction of coronary atherosclerosis beyond traditional risk factors in patients with rheumatoid arthritis. Patients with rheumatoid arthritis (RA) have increased atherosclerosis; oxidative stress may be a contributor. Oxidative stress produces immunogenic malondialdehyde-acetaldehyde (MAA) protein adducts and anti-MAA antibodies are detectable in human serum. We hypothesized that anti-MAA antibody concentrations are associated with coronary atherosclerosis in RA patients. Serum concentrations of anti-MAA antibodies (IgA, IgG, and IgM) were measured in 166 RA patients using ELISA cross-sectionally. Relationship between anti-MAA antibody concentrations and cardiovascular and metabolic measures and predictive accuracy of anti-MAA antibodies for presence of coronary artery calcium (CAC) and high CAC (≥ 300 Agatston units or ≥ 75th percentile) were assessed. Only serum IgA anti-MAA antibody concentration was associated with increased CAC, insulin resistance, and decreased high-density lipoprotein particle number.
When added as an interaction term with ACC/AHA 10-year risk score plus high-sensitivity C-reactive protein, IgA anti-MAA antibody concentration improved the C-statistic for prediction of any CAC and high CAC compared to ACC/AHA 10-year risk score plus hs-CRP alone. IgA anti-MAA concentration is associated with multiple cardiovascular risk factors and modifies the relationship between ACC/AHA 10-year risk score and CAC in RA patients. IgA anti-MAA concentration could assist in prediction of atherosclerotic CVD and risk stratification when added to standard measures of cardiovascular risk. abstract_id: PUBMED:11841919 Role of malondialdehyde-acetaldehyde adducts in liver injury. Malondialdehyde and acetaldehyde react together with proteins in a synergistic manner and form hybrid protein adducts, designated as MAA adducts. MAA-protein adducts are composed of two major products whose structures and mechanism of formation have been elucidated. MAA adduct formation, especially in the liver, has been demonstrated in vivo during ethanol consumption. These protein adducts are capable of inducing a potent immune response, resulting in the generation of antibodies against both MAA epitopes, as well as against epitopes on the carrier protein. Chronic ethanol administration to rats results in significant circulating antibody titers against MAA-adducted proteins, and high anti-MAA titers have been associated with the severity of liver damage in humans with alcoholic liver disease. In vitro exposure of liver endothelial or hepatic stellate cells to MAA adducts induces a proinflammatory and profibrogenic response in these cells. Thus, during excessive ethanol consumption, ethanol oxidation and ethanol-induced oxidative stress result in the formation of acetaldehyde and malondialdehyde, respectively. These aldehydes can react together synergistically with proteins and generate MAA adducts, which are very immunogenic and possess proinflammatory and profibrogenic properties. 
By virtue of these potentially toxic effects, MAA adducts may play an important role in the pathogenesis of alcoholic liver injury. abstract_id: PUBMED:15305155 Expression of Melan-A/MART-1 in primary melanoma cell cultures has prognostic implication in metastatic melanoma patients. The lack of melanoma-associated antigen (MAA) expression has been associated with reduced overall survival in melanoma patients. In order to investigate whether the MAA expression detected on cell cultures established from melanoma patients might relate to the overall survival in these patients, we screened primary cell cultures derived from 37 melanoma metastases for the expression of five known MAA: Melan-A, tyrosinase, gp-100, MAGE-1 and MAGE-3 by polymerase chain reaction (PCR) and fluorescence-activated cell sorting (FACS). MAA expression detected by PCR was found at a high percentage in evaluated melanoma cell lines: 25 of 28 (89%) were positive for Melan-A, 22 of 28 (79%) were positive for tyrosinase, 26 of 28 (93%) were positive for gp-100, and 18 of 28 (64%) were positive for MAGE-3 expression. Using the FACS method the percentage of MAA-positive cell lines was much lower: 14 of 31 (45%) cell lines were positive for Melan-A, eight of 31 (26%) were positive for tyrosinase, 13 of 31 (42%) were positive for gp-100, six of 31 (19%) were positive for MAGE-1, and 14 of 31 (45%) were positive for MAGE-3 expression. Kaplan-Meier survival analysis demonstrated that the patients whose cell lines were positive for Melan-A expression by PCR had significantly longer overall survival time than Melan-A PCR-negative cases (P=0.0038). This could not be shown for any of the markers tested by FACS. Our results suggest that the expression of Melan-A/MART-1 in patient-derived cell cultures may help to identify a group of melanoma patients with prolonged survival. abstract_id: PUBMED:7807999 Expression of myeloid antigens by blast cells in acute lymphoblastic leukemia of adults.
The Southwest Oncology Group experience. A subset of adult acute lymphoblastic leukemia (ALL) patients have blast cells which co-express myeloid-associated antigens (MY+ ALL). We have analyzed 113 adult ALL cases for expression of MY-associated antigens (MAA). ALL was diagnosed by standard morphology, cytochemistry, and immunophenotype in central review. MY+ ALL was diagnosed when ≥20% of lymphoblasts co-expressed CD13 and/or CD33. Overall incidence of MY+ was 31/113 (27%). MAA expression was not significantly correlated with WBC, blast count, hemoglobin, or hematocrit. MY+ cases were more likely to express B-associated antigens, especially CALLA, and to be FAB L2, Ph+, or to have the BCR-ABL translocation by PCR, but these differences were not statistically significant. All patients were induced with an L10M regimen, and 67 (59%) achieved CR: 43/66 (65%) of B MY neg; 14/29 (48%) of B MY+; 10/16 (63%) T MY neg; and 0/2 T MY+. In age-adjusted analyses CR rate did not differ significantly between MY+ and MY neg patients or between B- and T-cell patients. Of the 113 patients, 84 have died and the remaining 29 patients have been followed for a median of 49 months. In proportional hazards regression analyses adjusting for age and WBC, heterogeneity of survival among the four groups was statistically significant (p = 0.021), largely due to MY status. The mortality rate was 85% greater for MY+ patients compared to MY neg patients (two-tailed p = 0.013). By contrast, survival did not vary significantly between B- and T-cell patients. The data indicate that MAA expression is useful for predicting overall survival of adult patients with ALL treated in an L10M protocol. As a predictive factor MAA expression is comparable to the WBC and superior to the more standard stratification by B- or T-cell markers for this group of patients. abstract_id: PUBMED:25724613 Antibodies against malondialdehyde-acetaldehyde adducts can help identify patients with abdominal aortic aneurysm.
Objective: Abdominal aortic aneurysm (AAA) is a pathologic dilation of the aorta. Inflammation of the aortic wall has been shown to be involved in AAA formation. Malondialdehyde-acetaldehyde (MAA) adducts are MAA/protein hybrids with immunogenic, proinflammatory, and profibrotic properties. Levels of MAA adducts are elevated in patients with coronary artery disease; however, the role of MAA adducts in AAA is unclear. We hypothesize that levels of circulating antibodies against MAA adducts are increased in patients with AAA. Methods: Plasma samples were collected from mice and patients with AAA and control patients with atherosclerosis but not AAA. AAA was induced in mice by a standard CaCl2 protocol, with matching sham mice. Plasma levels of anti-MAA antibodies were quantified by enzyme-linked immunosorbent assay. Results: Patients with AAA exhibited higher levels of immunoglobulin G and immunoglobulin A anti-MAA antibody subtypes (P = .049 and .026, respectively) compared with control patients. Conversely, immunoglobulin M anti-MAA antibodies in AAA patients were lower compared with control patients (P = .018). In CaCl2-treated mice, immunoglobulin G anti-MAA antibodies were elevated after AAA formation (P = .006). Conclusions: The pattern of anti-MAA antibodies is able to distinguish between patients with AAA and patients with atherosclerosis but no AAA. These results demonstrate that MAA adducts are associated with AAA and suggest that they may play a role in either initiating or propagating chronic inflammation in AAA. abstract_id: PUBMED:36122512 Next generation sequencing targeted detection of somatic mutations in patients with mucinous adenocarcinoma of the appendix. The aim of this study was to investigate the mutations in mucinous adenocarcinoma of the appendix (MAA). Single-nucleotide variants (SNVs) were detected in 15 patients with MAA, and Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG) and Reactome pathway analyses were performed.
Tumor mutational burden (TMB), mutant-allele tumor heterogeneity (MATH), and microsatellite instability (MSI) were analyzed. Finally, the human leukocyte antigen (HLA) typing of the samples was determined. The results showed that TP53 (27%) and KRAS (20%) had the highest mutation frequencies in the sample, with mutations mainly occurring in the p53 and RTK-RAS pathways. GO analysis revealed that the mutated genes are closely related to the regulation of GTPase activity, regulation of small GTPase-mediated signal transduction and other biological processes (BP), along with related cellular components (CC) and molecular functions (MF). Analysis of KEGG pathways indicated that the top canonical pathway associated with the SNVs was the Wnt signaling pathway. Reactome pathway analysis further revealed that the mutant genes were closely related to muscle contraction. Only one patient had a moderate TMB level and one patient had high MSI. In conclusion, the most common mutated genes and the signaling pathways closely related to MAA development were detected in this study, which will contribute to the development of immunotherapy for patients with MAA. abstract_id: PUBMED:9305579 Epitope characterization of malondialdehyde-acetaldehyde adducts using an enzyme-linked immunosorbent assay. Malondialdehyde (MDA) and acetaldehyde react together with proteins in a synergistic manner and form hybrid protein adducts, designated as MAA adducts. In a previous study, a polyclonal antibody specific for MAA-protein adducts was used in an immunoassay to detect the presence of MAA adducts in livers of ethanol-fed rats. In the present study, the specific epitope recognized by the antibody was defined and the chemistry of MAA adduct formation was further characterized. When several synthetic analogs were tested for their ability to inhibit antibody binding in a competitive ELISA, the results indicated that the major determinant of antibody binding was a highly fluorescent cyclic adduct composed of two molecules of MDA and one of acetaldehyde.
The structure of this adduct was shown to be a 4-methyl-1,4-dihydropyridine-3,5-dicarbaldehyde derivative of an amino group of a protein. Examination of MAA adduct formation with a variety of proteins indicated that in addition to this specific fluorescent adduct, MAA adducts were also comprised of other nonfluorescent products. The amount of fluorescent epitopes present on a given protein was the major determinant of antibody binding as assessed in a competitive ELISA, although the efficiency of inhibition of antibody binding by these fluorescent epitopes on MAA-adducted proteins varied depending upon the particular protein. However, when these MAA-adducted proteins were hydrolyzed with Pronase, the concentration of these modified proteins necessary to achieve 50% inhibition of antibody binding in a competitive ELISA fell into a much narrower range of values, indicating that protein hydrolysis equalized the accessibility of the antibody to bind the epitope on these various derivatized proteins. In summary, a cyclic fluorescent adduct of defined structure has been identified as the epitope recognized by our MAA adduct antibody. In addition to this specific adduct, MAA adducts are also comprised of other nonfluorescent products. Answer: The provided abstracts do not contain information regarding the detection rates of hepatic falciform artery using computed tomography hepatic arteriography (CTHA), digital subtraction angiography (DSA), or 99mTc MAA SPECT CT, nor do they discuss the implications for planning 90Y radioembolization. 
The abstracts focus on various topics such as the immunogenicity and pathogenic role of malondialdehyde-acetaldehyde (MAA) adducts in diseases like alcoholic liver disease, coronary artery disease, and abdominal aortic aneurysm (PUBMED:11747982, PUBMED:25210746, PUBMED:35732827, PUBMED:11841919, PUBMED:25724613), the use of 99mTc labelled monoclonal antibody in immunoscintigraphy for metastatic malignant melanoma (PUBMED:3622556), expression of melanoma-associated antigens (MAA) and their prognostic implications in melanoma patients (PUBMED:15305155), expression of myeloid antigens in acute lymphoblastic leukemia (PUBMED:7807999), and the detection of somatic mutations in mucinous adenocarcinoma of the appendix (PUBMED:36122512). Additionally, one abstract discusses the epitope characterization of MAA adducts using an enzyme-linked immunosorbent assay (PUBMED:9305579). To answer the question regarding the detection rates of hepatic falciform artery and the implications for planning 90Y radioembolization, one would need to refer to studies or clinical trials that specifically compare the efficacy of CTHA, DSA, and 99mTc MAA SPECT CT in the context of hepatic vascular anatomy and pre-treatment planning for radioembolization procedures. Such information is not provided in the abstracts given.
Instruction: Apolipoprotein-E genotyping of diabetic dementia patients: is diabetes rare in Alzheimer's disease? Abstracts: abstract_id: PUBMED:8708297 Apolipoprotein-E genotyping of diabetic dementia patients: is diabetes rare in Alzheimer's disease? Objectives: To determine whether diabetes is rare in Alzheimer disease (AD) relative to other types of dementia and whether diabetics with dementia have a low frequency of the Apolipoprotein-E E4 genotype. Design: Observational survey study. Setting: An Irvine, California, outpatient dementia assessment center. Participants: A total of 123 patients with AD, 51 with vascular dementia, 57 with "mixed" vascular dementia and AD, and 34 with "other" dementias (non-vascular non-AD). Measurements: Demographic data; histories and evidence of diabetes, hypertension, heart disease, stroke; and Apolipoprotein-E genotype for 95 cases distributed across the groups. Results: There were 15 diabetics in the sample (5.7%), all of whom had extensive vascular disease. Diabetes was rare in AD patients (0.8%) relative to vascular dementia (11.8%), mixed vascular/AD dementia (8.8%), and "other" dementia patients (8.8%). In addition, the E4 allele of apolipoprotein-E, associated with high risk for AD, was frequent in the AD group (71.4%), but in the diabetic group it was only as frequent as in the general population (38.5%). In the diabetics with E4, 60% (3/5) had mixed dementia. Conclusions: Diabetics with dementia rarely have AD except as a component of mixed dementia. Apo-E genotyping showed only average E4 allele frequency in diabetics compared with the high E4 frequency found in AD patients. However, mixed dementia in diabetics may be associated with the E4 allele, suggesting that close control of diabetes may be particularly important for those with E4 since they may be more likely than others to develop both diseases. abstract_id: PUBMED:24650067 Differential subtypes of diabetic older adults diagnosed with Alzheimer's disease. 
Aim: The clinical management of diabetic elderly patients with Alzheimer's disease (AD) is hindered by several difficulties. The present study aimed to clarify the clinical characteristics and pathophysiological properties of AD in diabetic older adults. Methods: A total of 91 patients with type 2 diabetes mellitus and 161 non-diabetic individuals who were diagnosed with AD were recruited. Diabetic patients were classified into two groups with glycated hemoglobin (HbA1c) < 7.0% or ≥ 7.0%. The demographics, cognition, daily-life function, metabolic changes, treatment, and behavioral and psychological symptoms of dementia (BPSD), as well as brain pathophysiology, were compared among the three groups. Results: Patients with higher HbA1c had increased diabetic vascular complications and impaired activities of daily living, with decreased levels of serum high-molecular-weight adiponectin and 25-hydroxyvitamin D. Although cognitive status was similar among the three groups, BPSD, including apathy, overeating, and excessive daytime sleeping, appeared to be increased in the patients with HbA1c ≥ 7.0%. The frequency of apolipoprotein E4 carriers and of posterior cerebral hypoperfusion (AD-pattern) on single-photon emission computed tomography in poorly controlled diabetic subjects was similar to that in non-diabetic AD patients, whereas diabetic patients with HbA1c < 7.0% included fewer apolipoprotein E4 carriers and fewer patients with an AD pattern on single-photon emission computed tomography. Conclusion: Subtypes of older diabetic patients with AD were identified based on clinical features and brain pathophysiology. Physical and psychological complications of dementia are prevalent in patients with higher HbA1c. It seems likely that difficulties in the management of diabetes with AD are due not only to non-adherence to diabetes treatment, but also to several symptoms and pathophysiological characteristics of dementia.
abstract_id: PUBMED:23299282 Risk factors for dementia with type 2 diabetes mellitus among elderly people in China. Objective: To investigate the prevalence of dementia and its subtypes, such as Alzheimer disease (AD) and vascular dementia (VD), and their risk factors among elderly Chinese people with diabetes. Methods: Among 8,213 Chinese people aged 65 years and older, 1,109 subjects with type 2 diabetes mellitus (T2DM) were ascertained by interview and direct glucose testing. All diabetic subjects were initially screened with the "DSM-IV criteria" for dementia and its subtypes. We compared the prevalence of dementia in diabetic patients with that in ordinary subjects, and analyzed the association of diabetes status with dementia, including AD and VD. Logistic regression was used to assess the OR of dementia and its subtypes with T2DM. Results: After a comprehensive geriatric and cognitive assessment, 132 diabetic subjects were diagnosed with all-cause dementia. The prevalence rate for dementia with T2DM was 1.61%. Of the demented subjects, 37 had AD and 30 had VD. The prevalence rates for AD with T2DM and VD with T2DM were 4.51% and 3.65%, respectively. In the univariate analyses, compared with cognitively intact subjects, the demented diabetic subjects, including those with AD and VD, were older, more often female, had a higher percentage of current smoking, had a longer duration of diabetes, used diabetes medications more frequently, and showed a stronger effect of APOE ε4 status. In multivariate logistic regression analyses, variables including age, APOE ε4 allele, duration from onset of diabetes, use of oral hypoglycemic agents, and use of HMG-CoA reductase inhibitors (statins) were significantly associated with increased risk for dementia with T2DM (all P < 0.05). Conclusions: The present study shows that T2DM is associated with dementia and its subtypes amongst elderly people in the Chinese population.
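Several of the abstracts above report odds ratios with 95% confidence intervals (e.g., from logistic regression of dementia on T2DM status). As a rough illustration of the underlying arithmetic only — not a reproduction of any study's analysis, and with counts that are entirely invented — the sketch below computes an odds ratio and a Wald 95% confidence interval from a 2x2 table.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 contingency table.

    The CI is computed on the log scale: SE(ln OR) is the square root
    of the sum of reciprocals of the four cell counts.
    """
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                          + 1 / unexposed_cases + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts, purely illustrative (not taken from any abstract):
# rows = APOE e4 carrier vs non-carrier, columns = dementia vs no dementia.
or_, lo, hi = odds_ratio_ci(30, 70, 20, 180)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With these made-up counts the point estimate is (30 x 180) / (70 x 20) ≈ 3.86; an OR whose CI excludes 1 corresponds to the "significantly associated" findings quoted in the abstracts.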
abstract_id: PUBMED:18346114 Beta-amyloid deposition in brains of subjects with diabetes. Aim: A causative association between diabetes mellitus (DM) and Alzheimer's disease (AD) has been suggested based on clinical and epidemiological studies. One hypothesis is that the link between DM and AD is related to the function of insulin-degrading enzyme (IDE), an enzyme that degrades not only insulin and pancreatic amylin but also beta-amyloid (Abeta). Thus, in diabetics, insulin and Abeta might compete for IDE, and this might lead to an increase in Abeta. The objective of this study was to test the hypothesis that hyperinsulinaemia can elevate Abeta levels and thus contribute to AD pathology in humans. Methods: Neuropathological examination was carried out employing conventional and immunohistochemical (IHC) methods on brains obtained post mortem from 701 aged subjects. Results: The loads of IHC/Abeta, silver-stained neuritic plaques (NP), and neurofibrillary tangles (NFT) were significantly higher in subjects carrying the Apolipoprotein E e4 allele. In contrast, the loads of Abeta, NPs, and NFT in the brains were not influenced by hyperglycaemia when comparing 134 diabetic with 567 non-diabetic subjects. Conclusions: We conclude that the hypothesis that hyperinsulinaemia would significantly elevate the Abeta load and thus increase the extent of AD pathology cannot be supported. Our result challenges the claim that DM is a direct risk factor for developing AD. Thus, further studies on pathological lesions in demented diabetics should be conducted. abstract_id: PUBMED:11916953 Type 2 diabetes, APOE gene, and the risk for dementia and related pathologies: The Honolulu-Asia Aging Study. Type 2 diabetes may be a risk factor for dementia, but the associated pathological mechanisms remain unclear.
We evaluated the association of diabetes alone or combined with the apolipoprotein E (APOE) gene with incident dementia and neuropathological outcomes in a population-based cohort of 2,574 Japanese-American men enrolled in the Honolulu-Asia Aging Study, including 216 subjects who underwent autopsy. Type 2 diabetes was ascertained by interview and direct glucose testing. Dementia was assessed in 1991 and 1994 by clinical examination and magnetic resonance imaging and was diagnosed according to international guidelines. Logistic regression was used to assess the RR of developing dementia, and log-linear regression was used to estimate the incident rate ratio (IRR) of neuropathological outcomes. Diabetes was associated with total dementia (RR 1.5 [95% CI 1.01-2.2]), Alzheimer's disease (AD; 1.8 [1.1-2.9]), and vascular dementia (VsD; 2.3 [1.1-5.0]). Individuals with both type 2 diabetes and the APOE epsilon4 allele had an RR of 5.5 (CI 2.2-13.7) for AD compared with those with neither risk factor. Participants with type 2 diabetes and the epsilon4 allele had a higher number of hippocampal neuritic plaques (IRR 3.0 [CI 1.2-7.3]) and neurofibrillary tangles in the cortex (IRR 3.5 [1.6-7.5]) and hippocampus (IRR 2.5 [1.5-3.7]), and they had a higher risk of cerebral amyloid angiopathy (RR 6.6, 1.5-29.6). Type 2 diabetes is a risk factor for AD and VsD. The association between diabetes and AD is particularly strong among carriers of the APOE epsilon4 allele. The neuropathological data are consistent with the clinical results. abstract_id: PUBMED:15933386 Type 2 diabetes is negatively associated with Alzheimer's disease neuropathology. Background: In cross-sectional and longitudinal studies, type 2 diabetes has been positively associated with the risk of Alzheimer's disease (AD). The present descriptive study compared diabetic and nondiabetic subjects on the severity of neuritic plaques and neurofibrillary tangles (NFTs) in the cerebral cortex and in the hippocampus. 
Methods: The study included specimens from 385 consecutive autopsies of residents of a nursing home (15.8% diabetics). Mean age at death = 84 years [standard deviation (SD) = 10], 66% were female, Clinical Dementia Rating mean = 3.0 (SD = 1.6), and 32.5% had an APOE4 allele. Additional analyses limited the sample to 268 subjects (14.1% diabetics) without neuropathology other than AD. Results: Analyses of covariance controlling for age at death, dementia severity (Clinical Dementia Rating score), and APOE4 allele indicated that diabetics had significantly fewer neuritic plaques (p =.008) and NFTs (p =.047) in the cerebral cortex than did nondiabetics. In the hippocampus, diabetics had significantly lower plaque ratings than did nondiabetics (p =.019), but the lower ratings of NFTs did not achieve statistical significance (p =.082). In the entire sample, diabetics had significantly less AD-associated neuropathology in all four analyses. Conclusions: These results raise the possibility that the varied associations observed between diabetes and AD may be specific to as yet ill-defined subgroups of dementia and diabetic patients or may be more characteristic of younger patients than of those who survive to a mean age of 84 years. Future studies are encouraged to examine a variety of other characteristics such as age that may interact with diabetes affecting the incidence of AD. abstract_id: PUBMED:22701197 Dementia and diabetes mellitus: association with apolipoprotein e4 polymorphism from a hospital in southern India. Objective. To evaluate the association of Apolipoprotein E4 (ApoE4) in Alzheimer's dementia (AD) with comorbid diabetes mellitus (DM). Methods. The study included subjects with Alzheimer's dementia (AD) (n = 209), individuals with non-Alzheimer's dementia (nAD) (n = 122), individuals with parental history of AD (f/hAD) (n = 70), and control individuals who had normal cognitive functions and no parental history of dementia (NC) (n = 193). 
Dementia was diagnosed using International Classification of Diseases, 10th revision (ICD-10) criteria. DM was assessed on the basis of self-report and/or use of antidiabetic medications. ApoE genotyping was done using sequence-specific primer polymerase chain reaction. Results. ApoE4 allele frequencies were highest among AD with comorbid DM (0.35), followed by AD without DM (0.25), nAD with DM (0.13), nAD without comorbid DM (0.12), and NC (0.08). The frequency of ApoE4 in persons with f/hAD was 0.13. The association of AD with co-morbid DM in ApoE4 carriers was stronger than that in NC with DM (OR = 5.68, P = 0.04). Conclusion. There is a significant association between AD with co-morbid DM and the ApoE4 genotype. abstract_id: PUBMED:32210695 Diabetes as a risk factor for Alzheimer's disease in the Middle East and its shared pathological mediators. The incidence of Alzheimer's disease (AD) has risen exponentially worldwide over the past decade. A growing body of research indicates that AD is linked to diabetes mellitus (DM) and suggests that impaired insulin signaling acts as a crucial risk factor in determining the progression of this devastating disease. Many studies suggest people with diabetes, especially type 2 diabetes, are at higher risk of eventually developing Alzheimer's dementia or other dementias. Despite nationwide efforts to increase awareness, the prevalence of DM has risen significantly in the Middle East and North African (MENA) region, which might be due to rapid urbanization, lifestyle changes, lack of physical activity, and a rise in obesity. A growing body of evidence indicates that DM and AD are linked because both conditions involve impaired glucose homeostasis and altered brain function. Current theories and hypotheses implicate defective insulin signaling in the brain as a contributor to synaptic dysfunction and cognitive deficits in AD.
In the periphery, low-grade chronic inflammation leads to insulin resistance followed by tissue deterioration. Thus, insulin resistance acts as a bridge between DM and AD. There is a pressing need to understand how DM increases the risk of AD, as well as the underlying mechanisms, given the projected increase in age-related disorders. Here we aim to review the incidence of AD and DM in the Middle East and the possible influence of insulin signaling and ApoE carrier status on Aβ aggregation, tau hyperphosphorylation, inflammation, oxidative stress, and mitochondrial dysfunction in AD. We also critically reviewed mutation studies in the Arab population that might influence DM-induced AD. In addition, recent clinical trials and animal studies conducted to evaluate the efficacy of anti-diabetic drugs have been reviewed. abstract_id: PUBMED:31270743 Consequences of Metabolic Disruption in Alzheimer's Disease Pathology. Alzheimer's disease (AD) is an irreversible, progressive disease that slowly destroys cognitive function, such as thinking, remembering, and reasoning, to a level at which one can no longer carry out daily living. As people live longer, the risk of developing AD has increased to 1 in 10 among people who are older than 65 and to almost 1 in 2 among those who are older than 85, according to a 2019 Alzheimer's Association report. As the most common cause of dementia, AD accounts for 60-80% of all dementia cases. AD is characterized by amyloid plaques and neurofibrillary tangles, composed of extracellular aggregates of amyloid-β peptides and intracellular aggregates of hyperphosphorylated tau, respectively. Besides plaques and tangles, AD pathology includes synaptic dysfunction, including loss of synapses, inflammation, brain atrophy, and brain hypometabolism, all of which contribute to progressive cognitive decline. Recent genetic studies of sporadic cases of AD have identified a score of risk factors, as reported by Hollingworth et al.
(Nat Genet 43:429-435, 2011) and Lambert et al. (Nat Genet 45:1452-1458, 2013). Of all these genes, apolipoprotein E4 (APOE4) still presents the biggest risk factor for sporadic cases of AD, as stated in Saunders et al. (Neurology 43:1467-1472, 1993): depending on whether one carries 1 or 2 copies of the APOE4 allele, the risk increases from 3- to 12-fold, respectively, in line with Genin et al. (Mol Psychiatry 16:903-907, 2011). Besides these genetic risk factors, having type 2 diabetes (T2D), a chronic metabolic disease, is known to increase the AD risk by at least 2-fold as these individuals age, conforming to Sims-Robinson et al. (Nat Rev Neurol 6:551-559, 2010). Diabetes is reaching a pandemic scale, with over 422 million people diagnosed worldwide in 2014 according to the World Health Organization. Although what proportion of these diabetic patients develop AD is not known, even if 10% of diabetic patients develop AD later in their life, it would double the number of AD patients in the world. A better understanding of the relationship between T2D and AD is of paramount importance for the future. The goal of this review is to examine our current understanding of metabolic dysfunction in AD, so that a potential target can be identified in the near future.
Herein, 50 women ages 50-65, current users of hormone therapy, underwent magnetic resonance imaging, cognitive testing, and homeostatic assessment of insulin resistance (HOMA-IR), as part of a longitudinal study examining brain structure and function in postmenopausal women at risk for AD. Results demonstrated a significant negative relationship between HOMA-IR and right and total hippocampal volume, overall cognitive performance, and selective tests of verbal and non-verbal memory. The main effect of HOMA-IR on brain structure and cognition was not altered by the presence of APOE-ε4 allele or by reproductive history, such as duration of endogenous and exogenous estrogen exposure. These results suggest that IR in middle-aged individuals at risk for AD may be biomarker for dementia risk. Answer: The relationship between diabetes and Alzheimer's disease (AD) has been explored in various studies, with some suggesting that diabetes may be less common in AD compared to other types of dementia. One observational survey study found that diabetes was rare in AD patients (0.8%) relative to vascular dementia (11.8%), mixed vascular/AD dementia (8.8%), and "other" dementia patients (8.8%). Additionally, the E4 allele of apolipoprotein-E, which is associated with a high risk for AD, was frequent in the AD group (71.4%), but in the diabetic group, it was only as frequent as in the general population (38.5%). In diabetics with E4, 60% had mixed dementia, suggesting that diabetics with dementia rarely have AD except as a component of mixed dementia (PUBMED:8708297). Other studies have identified subtypes of older diabetic patients with AD based on clinical features and brain pathophysiology, noting that physical and psychological complications of dementia are prevalent in patients with higher glycated hemoglobin (HbA1c) levels. 
The frequency of apolipoprotein E4 carriers and of posterior cerebral hypoperfusion on single-photon emission computed tomography in poorly controlled diabetic subjects was similar to that in non-diabetic AD patients (PUBMED:24650067). In contrast, some studies have found that type 2 diabetes is associated with dementia and its subtypes, including AD, among elderly people in the Chinese population, with variables such as age, APOE ε4 allele, duration from onset of diabetes, and use of certain medications being significantly associated with increased risk for dementia with type 2 diabetes mellitus (T2DM) (PUBMED:23299282). However, there are also studies that challenge the direct association between diabetes and AD. For instance, one study concluded that hyperinsulinaemia does not significantly elevate the beta-amyloid load and thus increase the extent of AD pathology, challenging the claim that diabetes is a direct risk factor for developing AD (PUBMED:18346114). Another study found that type 2 diabetes is negatively associated with AD neuropathology, suggesting that the associations observed between diabetes and AD may be specific to subgroups of dementia and diabetic patients or may be more characteristic of younger patients (PUBMED:15933386).
Instruction: Adult outcomes of pediatric recurrent abdominal pain: do they just grow out of it? Abstracts: abstract_id: PUBMED:35927524 Presentation and clinical course of pediatric-onset versus adult-onset Takayasu arteritis-a systematic review and meta-analysis. Takayasu arteritis (TAK) is a less common large-vessel vasculitis which can occur in either children or adults. However, differences between pediatric-onset and adult-onset TAK have not been systematically analyzed. We undertook a systematic review (pre-registered on PROSPERO, identifier CRD42022300238) to analyze differences in clinical presentation, angiographic involvement, treatments, and outcomes between pediatric-onset and adult-onset TAK. We searched PubMed (MEDLINE and PubMed Central), Scopus, major recent international rheumatology conference abstracts, Cochrane database, and clinicaltrials.gov, and identified seven studies of moderate to high quality comparing pediatric-onset and adult-onset TAK. Meta-analysis of 263 pediatric-onset and 981 adult-onset TAK suggested that constitutional features (fever, and in subgroup analyses, weight loss), hypertension, headache, and sinister features of cardiomyopathy, elevated serum creatinine, and abdominal pain were more frequent in pediatric-onset TAK, whereas pulse loss/pulse deficit and claudication (particularly upper limb claudication) were more frequent in adult-onset TAK. Hata's type IV TAK was more common in pediatric-onset TAK, and Hata's type I TAK in adult-onset TAK. Children with TAK also appeared to require more intense immunosuppression with more frequent use of cyclophosphamide, biologic DMARDs, tumor necrosis factor alpha inhibitors, and, in subgroup analyses, tocilizumab in pediatric-onset TAK than in adult-onset TAK. Surgical or endovascular procedures, remission, and risk of mortality were similar in both children and adults with TAK. No studies had compared patient-reported outcome measures between pediatric-onset and adult-onset TAK. 
Clinical features and angiographic extent differ distinctly between pediatric-onset and adult-onset TAK. Clinical outcomes in these subgroups require further study in multicentric cohorts. abstract_id: PUBMED:32745921 Trends in adult patients presenting to pediatric emergency departments. Objective: Adults presenting to pediatric emergency departments (PEDs) include those with complex chronic conditions (CCCs) often still followed by pediatric providers, and those without CCCs (non-CCCs). This paper describes recent trends in adults seen within PEDs, both by age subgroups and CCC status. Methods: Data were retrospectively reviewed from the Pediatric Health Information System for PED visits between Jan 1, 2013 and Dec 31, 2017. Yearly visit rates were trended for all adult visits, age subgroups (18-21, 22-25, 26-40, and 41-95 years), and by CCC status. The most frequent diagnoses were reviewed for each adult age group and CCC category. Results: Rates of adult PED visits significantly increased from 3.7% in 2013 to 4.2% in 2017 (P < 0.0001). While the overall majority (88% overall and >70% for each age group) of adult PED patients were non-CCC, the rate of CCC patients increased overall (P < 0.0001), especially among older patients (41-95 years) from 8.7% in 2013 to 29% in 2017 (P < 0.0001). The most frequent CCC diagnoses were sickle cell disease (18-21 and 22-25 years), cardiac pathology (26-40 years), and hyperlipidemia (41-95 years). The most frequent non-CCC diagnoses were abdominal pain (18-21 years), pregnancy-related (22-25 and 26-40 years), and chest pain (41-95 years). Conclusion: Visits to PEDs by adult patients have increased over time, with the majority of the population being represented by non-CCC adults. The results of this study can be used to guide specifics in adult medicine training during Pediatric Emergency Medicine fellowship and assist in continuing education efforts.
abstract_id: PUBMED:30853574 The Care of Adult Patients in Pediatric Emergency Departments. Objective: Adult patients are increasingly receiving care in pediatric emergency departments (PEDs), but little is known about the epidemiology of these visits. The goals of this study were to examine the characteristics of adult patients (≥21 years) treated in PEDs and to describe the variation in resource utilization across centers. Methods: We conducted a cross-sectional study examining visits to 30 PEDs (2012-2016) using the Pediatric Health Information System. Visits were categorized using All Patient Refined Diagnosis Related Groups and compared between age cohorts. We used multivariable logistic models to examine variation in demographics, utilization, testing, treatment, and disposition. Results: There were 12,958,626 visits to the 30 PEDs over 5 years; 70,636 (0.6%) were by adults. Compared with children, adult patients had more laboratory testing (49% vs 34%), diagnostic imaging (32% vs 29%), and procedures (48% vs 31%), and they were more often admitted (17% vs 11%) or transferred (21% vs 0.7%) (P < .001 for all). In multivariable analysis, older age, black race, Hispanic ethnicity, and private insurance were associated with decreased odds of admission in adults seen in PEDs. Across PEDs, the admission rates (7%-25%) and transfer rates (6%-46%) for adults varied. Conclusions: Adult patients cared for at PEDs have higher rates of testing, diagnostic imaging, procedures, and admission or transfer. There is wide variation in the care of adults in PEDs, highlighting the importance of further work to identify the optimal approach to adults who present for care in pediatric centers. abstract_id: PUBMED:26140101 Obtaining research biopsies during pediatric colonoscopy: Safety and adverse events. Aim: To investigate the safety profile of acquiring additional intestinal biopsies for research purposes in children undergoing a medically indicated colonoscopy. 
Methods: A retrospective review of 122 pediatric patients who underwent colonoscopy over a 9-month period was completed. 38/122 participants consented to a research study in which 4 additional biopsies were obtained beyond the routine biopsies. The outcomes after colonoscopy were measured in the research participants and compared to those of 84 control participants who did not consent to the study. Groups were compared with regard to the number of biopsies obtained, underlying diagnosis, and both serious and minor adverse outcomes. Data were collected, including age, gender, race, indication, diagnosis, number of biopsies obtained per case, and post-procedure adverse events. Medical records were reviewed and a questionnaire was completed by each of the ten gastroenterologists who performed procedures during the study. Physicians were asked about individual patient outcomes to ensure that all adverse events, such as perforation, excessive bleeding, infection, and minor gastrointestinal outcomes, were captured and included. Results: The research group had more biopsies obtained (mean = 13.58 ± 4.21) compared to controls (mean = 9.33 ± 4.40), P ≤ 0.0001; however, there was no difference in adverse events. Serious outcomes, defined as perforation, bleeding, and infection, did not occur in either group. As such, the relationship between serious adverse events and the number of biopsies obtained could not be determined. Minor gastrointestinal outcomes, such as abdominal pain, diarrhea, or vomiting, were reported in 21 patients (8 research participants and 13 control participants); however, the incidence of minor gastrointestinal outcomes did not differ significantly between the two groups, P = 0.45.
Additionally, the mean number of biopsies obtained in patients who had a minor outcome (mean = 12.1 ± 0.77), compared to those with no adverse outcome (mean = 10.34 ± 0.5), revealed no statistical difference between the groups (P = 0.12), suggesting that the number of biopsies is not associated with the incidence of minor adverse events. Conclusion: Patients participating in research requiring acquisition of additional biopsies for research purposes alone are not at an increased risk of adverse outcomes. abstract_id: PUBMED:29115055 Differences between adult and pediatric onset Henoch-Schonlein purpura from North India. Aim: Henoch-Schönlein purpura (HSP), a primary vasculitis characterized by purpura, abdominal pain, arthritis, and renal involvement, is predominantly a disease of childhood. However, it can rarely occur in adults, in whom it is believed to be a more severe form with poor renal outcomes. We aimed to answer whether the age of onset affected the clinical spectrum and renal outcomes of the disease in a north Indian population. Hence, we studied the differences in clinical spectrum and renal outcomes between adult-onset HSP and childhood-onset HSP. Methods: Case records of all adult patients diagnosed with HSP (onset ≥ 18 years) over the last 25 years in our department (1992-2017) were retrieved. Data on clinical features, lab abnormalities, and outcomes were extracted and compared with those in pediatric HSP patients (onset < 18 years) seen during the same period. Results: A total of 87 patients, including 30 adults and 57 children, were seen during this period. Compared to children, most of the adults had purpura as the first clinical manifestation (86.7% vs. 56.1%, P < 0.01), whereas abdominal pain was only rarely the initial symptom in adults (10.0% vs. 36.8%, P < 0.02). During the disease course, adults had a higher frequency of joint involvement as compared to children (90.0% vs. 43.9%, P < 0.001).
There was no significant difference in the frequency of renal involvement (60.0% vs. 50.9%, P = NS) and gastrointestinal involvement (66.7% vs. 84.2%, P = NS) between adults and children. Outcomes were good in both groups and most adults and children achieved complete recovery (83.3% and 86.0%, respectively). There was no difference in the frequency of immunosuppressive treatment required by the two groups. None of the patients developed renal insufficiency on follow-up. Conclusion: Compared to children, adults with HSP seldom have abdominal pain as the first clinical manifestation while joint involvement is seen more commonly in them during the disease course. abstract_id: PUBMED:27927245 Congenital adhesion band causing small bowel obstruction: What's the difference in various age groups, pediatric and adult patients? Background: A congenital adhesion band is a rare condition, but may induce a small bowel obstruction (SBO) at any age. However, only a few sporadic case reports exist. We aimed to identify the clinical characteristics of congenital adhesion band manifesting as an SBO stratified by age group between pediatric and adult patients. Methods: The medical records of all patients with an SBO between Jan 1, 2009 and Dec 31, 2015 were retrospectively reviewed. Cases associated with a previous surgical procedure and cases of secondary obstruction due to inflammatory processes or tumor and other systemic diseases were excluded. The patients were divided into two groups according to age below or above 18 years: pediatric and adult. The basic clinical characteristics were analyzed and compared between groups. Results: Of 251 patients with an SBO, 15 (5.9%) met the inclusion criteria; 10 cases in the pediatric group (mean age 17.9 ± 38.7 months) and 5 cases in the adult group (mean age 60.0 ± 19.7 years). The pediatric group (66.6%) included 3 neonates, 5 infants, and 2 school children.
They usually presented with bilious vomiting (50.0%) and abdominal distention (60.0%), and demonstrated a high rate of early operation (80.0%) and bowel resection (70.0%). In contrast, the adult group (33.3%) presented with abdominal pain (100%) in all cases and underwent a relatively simple procedure of band release using a laparoscopic approach (60%). However, group differences did not reach statistical significance. In addition, the two groups did not differ in the time interval to the operation or in the range of the operation (p = 0.089 vs. p = 0.329). No significant correlation was found between the time interval to the operation and the necessity of bowel resection (p = 0.136). There was no mortality in either group. Conclusions: Congenital adhesion band is a very rare condition with diverse clinical presentations across ages. Unlike adult patients, pediatric patients showed a high proportion of early operation and bowel resection. A good result can be expected with an early diagnosis and prompt management regardless of age. abstract_id: PUBMED:38445940 Surgery for chronic idiopathic constipation: pediatric and adult patients - a systematic review. Background: Chronic idiopathic constipation (CIC) is a substantial problem in pediatric and adult patients with similar symptoms and workup; however, surgical management of these populations differs. We systematically reviewed the trends and outcomes in the surgical management of CIC in pediatric and adult populations. Methods: A literature search was performed using Ovid MEDLINE, Embase, Scopus, the Cochrane Central Register of Controlled Trials, and ClinicalTrials.gov between January 1, 1995 and June 26, 2020. Clinical trials and retrospective and prospective studies of patients of any age with a diagnosis of CIC with data of at least 1 outcome of interest were selected. The interventions included surgical resection for constipation or antegrade continence enema (ACE) procedures.
The outcome measures included bowel movement frequency, abdominal pain, laxative use, satisfaction, complications, and reinterventions. Results: Adult patients were most likely to undergo resection (94%), whereas pediatric patients were more likely to undergo ACE procedures (96%) as their primary surgery. Both ACE procedures and resections were noted to improve symptoms of CIC; however, ACE procedures were associated with higher complication and reintervention rates. Conclusion: Surgical management of CIC in pediatric and adult patients differs with pediatric patients receiving ACE procedures and adults undergoing resections. The evaluation of resections and long-term ACE data in pediatric patients should be performed to inform patients and physicians whether an ACE is an appropriate procedure despite high complication and reintervention rates or whether resections should be considered as an initial approach for CIC. abstract_id: PUBMED:19784724 Intussusception in adult and pediatric patients: two different entities. Purpose: Intussusception is one of the most common abdominal emergencies in pediatrics, but adult intussusception is an uncommon entity and most surgeons have only limited experience in treating this disease. The purpose of this study was to highlight the differences between pediatric and adult intussusception. Methods: The records of 40 patients during 14 years were reviewed retrospectively. The symptoms, diagnosis, sites of intussusception, associated pathologies, and treatment methods of each patient were analyzed. Results: A total of 31 pediatric and 9 adult patients were included in the study. In the pediatric group, bloody stool and vomiting were the most common symptoms whereas adult patients commonly presented with abdominal pain. 
The physical examination was diagnostic in a remarkable proportion of the pediatric patients but the diagnosis was suggested based on imaging techniques in the adults, and preoperative diagnosis was more successful in the pediatric group. Intussusception was more often associated with an underlying pathology in adults and no adult patient underwent nonoperative reduction, whereas pediatric patients were managed either with hydrostatic reduction or surgery. Conclusions: Although intussusceptions occur at all ages, there are major differences in the clinical presentation, diagnostic approach, and management between pediatric and adult populations. Intussusception is remarkably different in these two age groups and it must be approached from a different clinical perspective. abstract_id: PUBMED:37633885 Clinical presentations and outcomes of pancreaticobiliary maljunction in different pediatric age groups. Background: Pancreaticobiliary maljunction (PBM) is a congenital defect, with risk of developing various pancreaticobiliary and hepatic complications. The presentations of PBM in children and adults are believed to be different, but studies on PBM children of different age groups are limited. This study was to evaluate clinicopathologic characteristics and outcomes in PBM children of different ages. Methods: A total of 166 pediatric patients with PBM were reviewed retrospectively. Clinicopathological, imaging, laboratory, surgical, and follow-up data were collected and analyzed. The patients were divided into three age groups, namely, group A (< 1 year, n = 31), group B (1-3 years, n = 63), and group C (> 3 years, n = 72). Results: The major clinical manifestation was jaundice in group A and abdominal pain and vomiting in groups B and C. Acute pancreatitis was more often seen in group C than group A. 
The length of the common channel was significantly longer in group C than group A, while the maximum diameter of the common bile duct in group C was smaller than that in group A. Cholangitis and cholecystitis were more commonly observed in groups B and C, while hepatic fibrosis was more common in group A. Whether preoperatively or postoperatively, group C was more likely to have elevated serum amylase, while groups A and B were more likely to present with abnormal liver function indicators, including increases in aspartate transaminase, alanine transaminase, and gamma-glutamyl transpeptidase. Conclusion: Presentation of PBM varies among different pediatric age groups, thus suggesting that targeted management should be carried out according to these differences. abstract_id: PUBMED:33581882 Favorable postoperative outcomes for children with COVID-19 infection undergoing surgical intervention: Experience at a free-standing children's hospital. Background: Current literature has shown that adult patients with perioperative Coronavirus Disease-2019 (COVID-19) have increased rates of postoperative morbidity and mortality. We hypothesized that children with COVID-19 have favorable postoperative outcomes compared to the reported adult experience. Methods: We performed a retrospective cohort study for children with a confirmed preoperative COVID-19 diagnosis from April 1st, 2020 to August 15th, 2020 at a free-standing children's hospital. Primary outcomes evaluated were postoperative complications, readmissions, reoperations, and mortality within 30 days of operation. Secondary outcomes included hospital resource utilization, hospital length of stay, and postoperative oxygen support. Results: A total of 66 children with preoperative confirmed COVID-19 were evaluated with median age of 9.5 years (interquartile range (IQR) 5-14) with 65% male and 70% Hispanic White. Sixty-five percent of patients had no comorbidities, with abdominal pain identified as the most common preoperative symptom (65%).
Twenty-three percent of patients presented with no COVID-19-related symptoms. Eighty-two percent of patients had no preoperative chest imaging and 98% of patients did not receive preoperative oxygen support. General pediatric surgeons performed the majority of procedures (68%), with appendicitis the most common diagnosis (47%). Forty-one percent of patients were discharged the same day as surgery, with 9% of patients utilizing postoperative intensive care unit resources and only 5% receiving postoperative invasive mechanical ventilation. Postoperative complications (7%), readmission (6%), and reoperation (6%) were infrequent, with no mortality. Conclusion: COVID-19+ children requiring surgery have a favorable postoperative course and short-term outcomes compared to the reported adult experience. Type Of Study: Prognosis Study. Level Of Evidence: Level IV. Answer: The provided abstracts do not directly address the long-term outcomes of pediatric recurrent abdominal pain into adulthood. They focus on various conditions and their presentations and management in pediatric versus adult populations, but none of the abstracts specifically track the progression of pediatric recurrent abdominal pain into adult outcomes. For instance, abstracts discuss differences in the presentation and management of conditions such as Takayasu arteritis (PUBMED:35927524), Henoch-Schönlein purpura (PUBMED:29115055), congenital adhesion band causing small bowel obstruction (PUBMED:27927245), chronic idiopathic constipation (PUBMED:38445940), and intussusception (PUBMED:19784724) between pediatric and adult patients. They also cover the safety of obtaining research biopsies during pediatric colonoscopy (PUBMED:26140101), the care of adult patients in pediatric emergency departments (PUBMED:32745921 and PUBMED:30853574), and the clinical presentations and outcomes of pancreaticobiliary maljunction in different pediatric age groups (PUBMED:37633885).
Additionally, one abstract reports on favorable postoperative outcomes for children with COVID-19 infection undergoing surgical intervention (PUBMED:33581882). However, none of these abstracts provide a longitudinal perspective on the natural history of recurrent abdominal pain from childhood into adulthood. To answer the question of whether adults with a history of pediatric recurrent abdominal pain simply "grow out of it," one would need longitudinal studies that follow individuals with this condition from childhood into adulthood to assess the persistence or resolution of symptoms over time. Such studies would help determine if pediatric recurrent abdominal pain resolves as children age or if it continues to affect individuals into adulthood, potentially leading to chronic abdominal pain or other related health issues.
Instruction: Selective enamel etching reconsidered: better than etch-and-rinse and self-etch? Abstracts: abstract_id: PUBMED:30467810 Demineralized enamel reduces margin integrity of self-etch, but not of etch-and-rinse bonded composite restorations. The aim of this study was to investigate margin integrity of Class V composite restorations in demineralized and sound enamel after bonding with different etch-and-rinse and self-etch adhesive systems. Out of a total of 60 specimens from bovine incisors, 30 specimens were demineralized (21 days, acid buffer, pH 4.95) to create artificial enamel lesions. Circular Class V cavities were prepared in all 60 specimens and treated with either an unfilled etch-and-rinse adhesive (Syntac Classic; Ivoclar Vivadent), a filled etch-and-rinse adhesive (Optibond FL; Kerr), or a self-etch adhesive (iBond Self Etch; Heraeus) (n = 10 per group). The cavities were restored with a nanofilled resin composite and thermocycled (5000×, 5-55 °C). Scanning electron microscopy was used to evaluate margin integrity of the composite restorations, and the percentage of continuous margin was statistically analyzed (α = 0.05). Demineralized enamel led to a significantly lower margin integrity when the self-etch adhesive iBond Self Etch was applied, but did not affect margin integrity when the etch-and-rinse adhesives Optibond FL (filled) or Syntac Classic (unfilled) were used. No significant differences in margin integrity in sound and demineralized enamel were observed between the different adhesives. Demineralized enamel reduces margin integrity of composite restorations when bonded with a self-etch adhesive, but does not affect margin integrity when an etch-and-rinse approach is used. abstract_id: PUBMED:37929222 The effects of dental adhesives total etch; self-etch and selective etch application procedures on microleakage in class II composite restorations. 
Purpose: The aim of this study is to evaluate the amount of microleakage resulting from the application of self-etch, selective etch, and etch-and-rinse adhesive systems in class II cavities. Materials And Methods: Four adhesive systems with etch-and-rinse, selective etch and self-etch methods were used on the extracted teeth. All groups were restored with G-aenial A'CHORD (Nanohybrid) (GC, Tokyo, Japan) A2 composite. After 1000 thermal cycles were applied to the teeth after restoration, the samples were kept in 0.5% basic fuchsin for 24 hours. Microleakage values formed after dye penetration with basic fuchsin were determined quantitatively by a scoring method on sections taken from each sample in the mesiodistal direction. One-way Analysis of Variance (ANOVA) and Tukey test were used for statistical analysis of the data (p<0.05). Results: While there was no statistically significant difference between the etch-and-rinse and selective etch applications of the adhesive systems (G2-Bond Universal, Clearfil Tri-S Bond Universal, Prime&Bond Universal and Tokuyama Bond Force II) (p>0.05), there was a statistically significant difference in self-etch application (p<0.05). As a result of Prime&Bond Universal's self-etch application, it showed statistically more microleakage than the other three adhesive systems (p<0.05). Conclusion: It has been observed that additional etching of enamel and/or dentin with phosphoric acid reduces the amount of microleakage. abstract_id: PUBMED:25908573 Does active application of universal adhesives to enamel in self-etch mode improve their performance? Objectives: To evaluate the effect of adhesion strategy on the enamel microshear bond strengths (μSBS), etching pattern, and in situ degree of conversion (DC) of seven universal adhesives.
Methods: 84 extracted third molars were sectioned in four parts (buccal, lingual, proximal) and divided into 21 groups, according to the combination of the main factors adhesive (AdheSE Universal [ADU], All-Bond Universal [ABU], Clearfil Universal [CFU], Futurabond U [FBU], G-Bond Plus [GBP], Prime&Bond Elect (PBE), and Scotchbond Universal Adhesive [SBU]), and adhesion strategy (etch-and-rinse, active self-etch, and passive self-etch). Specimens were stored in water (37°C/24h) and tested at 1.0mm/min (μSBS). Enamel-resin interfaces were evaluated for DC using micro-Raman spectroscopy. The enamel-etching pattern was evaluated under a field-emission scanning electron microscope (direct and replica techniques). Data were analyzed with two-way ANOVA and Tukey's test (α=0.05). Results: Active self-etch application increased μSBS and DC for five out of the seven universal adhesives when compared to passive application (p<0.001). A deeper enamel-etching pattern was observed for all universal adhesives in the etch-and-rinse strategy. A slight improvement in etching ability was observed in active self-etch application compared to that of passive self-etch application. Replicas of GBP and PBE applied in active self-etch mode displayed morphological features compatible with water droplets. The DC of GBP and PBE were not affected by the application/strategy mode. Conclusions: In light of the improved performance of universal adhesives when applied actively in SE mode, selective enamel etching with phosphoric acid may not be crucial for their adhesion to enamel. Clinical Significance: The active application of universal adhesives in self-etch mode may be a practical alternative to enamel etching in specific clinical situations. abstract_id: PUBMED:31206905 Immediate enamel bond strength of universal adhesives to unground and ground surfaces in different etching modes. 
The purpose of this study was to determine the immediate bond effectiveness of universal adhesives to unground and ground enamel surfaces in different etching modes, through shear bond strength (SBS) tests and scanning electron microscopy observations. Three universal adhesives, a conventional two-step self-etch adhesive, and a conventional single-step self-etch adhesive were compared. Human enamel specimens from lower anterior teeth were divided into four groups and subjected to the following treatments: (i) unground enamel in self-etch mode; (ii) ground enamel in self-etch mode; (iii) unground enamel in etch-&-rinse mode; and (iv) ground enamel in etch-&-rinse mode. Bonded assemblies were subjected to SBS testing. All the adhesives showed significantly higher SBS values in etch-&-rinse mode than in self-etch mode, regardless of whether enamel was unground or ground. The influence of the enamel surface condition on SBS was different in different etching modes. Without pre-etching, all tested materials showed lower SBS values in unground enamel than in ground enamel. In etch-&-rinse mode, no significant differences in SBS values were observed between unground enamel and ground enamel for any of the adhesives tested. Phosphoric acid pre-etching before application of self-etch adhesives to an unground enamel surface is essential to enhance initial enamel bond effectiveness. abstract_id: PUBMED:19058678 Selective enamel etching reconsidered: better than etch-and-rinse and self-etch? Purpose: This study evaluated the marginal quality of differently bonded direct resin composite restorations in enamel and dentin, before and after thermomechanical loading (TML). Special attention was focussed on the performance of selective enamel etching, etch-and-rinse, and self-etching adhesives. Materials And Methods: Eighty MO cavities with proximal margins beneath the cementoenamel junction were prepared in extracted human third molars. 
Direct resin composite restorations (Tetric EvoCeram, n=8) were placed with 4-step selective enamel etching (Syntac SE), 4-step etch-and-rinse (Syntac ER), 2-step etch-and-rinse (XP Bond, Scotchbond 1 XT/Single Bond Plus), 2-step self-etching (AdheSE, Clearfil SE Bond), 2-step self-etching with selective enamel etching (AdheSE SE, Clearfil SE Bond SE), and 2-step self-etching with etch-and-rinse (AdheSE TE, Clearfil SE Bond TE). Marginal gaps were analyzed using epoxy resin replicas under a scanning electron microscope at 200X magnification. Results: Initially, high percentages of gap-free margins were identified for all adhesives. After TML, the results were as follows: (A) Enamel margins: When phosphoric acid was used on enamel, results were consistently higher (approximately 90%) compared with two-step self-etching adhesives (approximately 70%; p < 0.05). (B) Dentin margins: No statistical differences were found when etch-and-rinse and selective etch approaches were compared (59% to 64%; p > 0.05). When self-etching adhesives were used as per manufacturers' directions, dentin margins exhibited the best marginal quality (74% to 82%; p < 0.05). When self-etching adhesives were used under etch-and-rinse conditions, marginal quality in dentin was significantly reduced to 35% to 42% (p < 0.05). Conclusion: Enamel bonding was generally more effective with phosphoric-acid etching. Enamel bonding performance of 2-step self-etching adhesives was improved when phosphoric acid was applied on enamel selectively. abstract_id: PUBMED:22904656 Bond strength of one-step self-etch adhesives and their predecessors to ground versus unground enamel. Objective: The aim of this study was to compare the shear bond strength of several one-step self-etch adhesives with their two-step predecessors to ground and unground enamel. Methods: Seventy-five extracted, non-carious human third molar teeth were selected for this study.
The buccal surfaces of each tooth were mechanically ground to obtain flat enamel surfaces (ground enamel), while the lingual surfaces were left intact (unground enamel). The teeth were randomly divided into five groups according to the adhesive systems (n=15): one-step self-etch adhesive - Clearfil S3 Bond, its two-step predecessor - Clearfil SE Bond, one-step self-etch adhesive - AdheSE One, and its two-step predecessor - AdheSE, and a two-step etch-and-rinse adhesive - Adper Single Bond 2 (control). After application of the adhesives to the buccal and lingual enamel surfaces of each tooth, a cylindrical capsule filled with a hybrid composite resin (TPH) was seated against the surfaces. The specimens were stored in distilled water at 37°C for 24 hours, followed by thermocycling (5°C-55°C/500 cycles). They were subjected to a shear bond strength test in a universal testing machine at a crosshead speed of 1.0 mm/minute. The data were compared using a two-way ANOVA, followed by Bonferroni test at P<.05. Results: All adhesives exhibited statistically similar bond strengths to ground and unground enamel except for the etch-and-rinse adhesive that showed significantly higher bond strengths than the self-etch adhesives (P<.05). No significant differences in bond strength values were observed between ground and unground enamel for any of the adhesives tested (P=.17). Conclusion: Similar bond strengths to ground and unground enamel were achieved with one-step self-etch adhesives and their predecessors. Enamel preparation did not influence the bonding performance of the adhesives tested. abstract_id: PUBMED:28812169 Bond strength of etch-and-rinse and self-etch adhesive systems to enamel and dentin irradiated with a novel CO2 9.3 μm short-pulsed laser for dental restorative procedures. The objective of this study was to evaluate the influence of CO2 9.3 μm short-pulsed laser irradiation on the shear bond strength of composite resin to enamel and dentin.
Two hundred enamel and 210 dentin samples were irradiated with a 9.3 µm carbon dioxide laser (Solea, Convergent Dental, Inc., Natick, MA) with energies which either enhanced caries resistance or were effective for ablation. OptiBond Solo Plus [OptiBondTE] (Kerr Corporation, Orange, CA) and Peak Universal Bond light-cured adhesive [PeakTE] (Ultradent Products, South Jordan, UT) were used. In addition, Scotchbond Universal [ScotchbondSE] (3M ESPE, St. Paul, MN) and Peak SE self-etching primer with Peak Universal Bond light-cured adhesive [PeakSE] (Ultradent Products) were tested. Clearfil APX (Kuraray, New York, NY) was bonded to the samples. After 24 h, a single plane shear bond test was performed. Using the caries preventive setting on enamel resulted in increased shear bond strength for all bonding agents except for self-etch PeakSE. The highest overall bond strength was seen with PeakTE (41.29 ± 6.04 MPa). Etch-and-rinse systems achieved higher bond strength values to ablated enamel than the self-etch systems did. PeakTE showed the highest shear bond strength with 35.22 ± 4.40 MPa. OptiBondTE reached 93.8% of its control value. The self-etch system PeakSE presented significantly lower bond strength. The shear bond strength to dentin ranged between 19.15 ± 3.49 MPa for OptiBondTE and 43.94 ± 6.47 MPa for PeakSE. Etch-and-rinse systems had consistently higher bond strength to CO2 9.3 µm laser-ablated enamel. Using the maximum recommended energy for dentin ablation, the self-etch system PeakSE reached the highest bond strength (43.9 ± 6.5 MPa). abstract_id: PUBMED:34443268 Bonding Performance of Universal Adhesives Applied to Nano-Hydroxyapatite Desensitized Dentin Using Etch-and-Rinse or Self-Etch Mode. The study assessed the bonding performance of three universal adhesives on desensitized dentin with etch-and-rinse mode or self-etch mode after nano-hydroxyapatite (nHAp)-based desensitizers application. 
Simulated sensitive dentin specimens were prepared and separated into four groups: no treatment as the negative control, groups desensitized by Biorepair toothpaste, Dontodent toothpaste, or nHAp paste. Three universal adhesives of All-Bond Universal, Single Bond Universal, and Clearfil Universal Bond with etch-and-rinse or self-etch mode were bonded to the desensitized dentin specimens separately, followed by resin composite build-ups. Micro-tensile bond strength was measured using a micro-tensile tester. The wettability of desensitized dentin was evaluated by the contact angle of the adhesives. Resin infiltration was observed by confocal laser scanning microscopy. Dentin tubular occlusion and nanoleakage were observed by scanning electron microscope. The results showed that the etch-and-rinse or self-etch mode of each adhesive showed similar bond strength when bonding to nHAp-based desensitized dentin. The dentin surface was partially covered by desensitizers after desensitization. Compared with the self-etch mode, stronger demineralization and more reopened dentin tubules were observed in the etch-and-rinse mode after acid etching; longer resin tags and more nanoleakage in the resin-dentin interface were observed when using the etch-and-rinse mode. When bonding to nHAp-based desensitized dentin with universal adhesives, no significant difference in bond strength was found between self-etch mode or etch-and-rinse mode; while the latter produced more nanoleakage in the resin-dentin interfaces. abstract_id: PUBMED:27449836 Longevity of Self-etch Dentin Bonding Adhesives Compared to Etch-and-rinse Dentin Bonding Adhesives: A Systematic Review. Objectives: A systematic review and meta-analysis were performed to compare longevity of Self-Etch Dentin Bonding Adhesives to Etch-and-Rinse Dentin Bonding Adhesives. 
Material And Methods: The following databases were searched: PubMed, MEDLINE, Web of Science, CINAHL, and the Cochrane Library, complemented by a manual search of the Journal of Adhesive Dentistry. The MESH keywords used were: "etch and rinse," "total etch," "self-etch," "dentin bonding agent," "bond durability," and "bond degradation." Included were in-vitro experimental studies performed on human dental tissues of sound tooth structure origin. The examined Self-Etch Bonds were of two subtypes: Two-Step and One-Step Self-Etch Bonds, while Etch-and-Rinse Bonds were of two subtypes: Two-Step and Three-Step. The included studies measured micro-tensile bond strength (μTBS) to evaluate bond strength and possible longevity of both types of dental adhesives at different times. The selected studies depended on water storage as the aging technique. Statistical analysis was performed for outcome measurements compared at 24 h, 3 months, 6 months and 12 months of water storage. Results: After 24 hours (p-value = 0.051), 3 months (p-value = 0.756), 6 months (p-value = 0.267), and 12 months (p-value = 0.785) of water storage, self-etch adhesives showed lower μTBS when compared to the etch-and-rinse adhesives, but the comparisons were statistically insignificant. Conclusion: In this study, longevity of Dentin Bonds was related to the measured μTBS. Although Etch-and-Rinse bonds showed higher values at all times, the meta-analysis found no difference in longevity of the two types of bonds at the examined aging times. abstract_id: PUBMED:20178215 Enamel-resin bond durability of self-etch and etch & rinse adhesives. Purpose: To evaluate the degradation of resin-enamel interfaces bonded with different adhesive systems. Methods: Flat enamel surfaces were ground buccally on bovine incisors.
Two etch & rinse self-priming adhesives (Single Bond, Prime & Bond NT), three two-step self-etch adhesives (Clearfil SE Bond, Resulcin Aquaprime, NRC/Prime & Bond NT) and two all-in-one adhesives (Etch & Prime 3.0, Adper Prompt-L-Pop) were used for bonding. A hybrid resin composite (Tetric Ceram) was selected for coronal build-up. Bonded specimens were stored in water at 37 degrees C for 24 hours, 6 months and 1 year, respectively, and then sectioned into 1 mm2 beams. Each microtensile stick was loaded in tension until failure (crosshead speed: 0.5 mm/minute). Bond strength data were analyzed with two-way ANOVA and Student Newman Keuls tests (P < 0.05). Etching patterns resulting from phosphoric acid etching and self-etch adhesives application on ground enamel were analyzed under SEM. Results: All adhesives attained similar bond strengths after 24 hours. All-in-one adhesives and Resulcin Aqua Prime recorded a significant reduction in bond strengths after 6 months and 1 year of water aging. Answer: The concept of selective enamel etching involves the application of phosphoric acid to enamel only, while using a self-etch adhesive for the rest of the preparation. This technique has been compared to traditional etch-and-rinse and self-etch approaches in various studies to determine its effectiveness in terms of bond strength and marginal integrity. According to the study by PUBMED:19058678, selective enamel etching with phosphoric acid generally resulted in more effective enamel bonding compared to two-step self-etch adhesives. The study found that after thermomechanical loading (TML), enamel margins bonded with phosphoric acid etching showed higher percentages of gap-free margins (approximately 90%) compared to those bonded with two-step self-etch adhesives (approximately 70%). 
For dentin margins, however, the best marginal quality was observed with self-etching adhesives used as per manufacturers' directions, and the use of self-etching adhesives under etch-and-rinse conditions significantly reduced marginal quality in dentin. In another study, PUBMED:30467810 found that demineralized enamel reduced margin integrity of composite restorations when bonded with a self-etch adhesive, but did not affect margin integrity when an etch-and-rinse approach was used. This suggests that selective enamel etching could potentially offer better margin integrity in certain situations. PUBMED:37929222 also observed that additional etching of enamel and/or dentin with phosphoric acid reduces the amount of microleakage in Class II composite restorations, indicating that selective enamel etching could be beneficial in reducing microleakage. However, it is important to note that the performance of selective enamel etching may depend on the specific adhesive system used and the clinical situation. For instance, PUBMED:34443268 found that when bonding to nano-hydroxyapatite desensitized dentin with universal adhesives, no significant difference in bond strength was found between self-etch mode or etch-and-rinse mode, although the latter produced more nanoleakage in the resin-dentin interfaces. In summary, selective enamel etching can be considered as a viable alternative to traditional etch-and-rinse and self-etch approaches, particularly for improving enamel bonding and reducing microleakage. However, the choice of technique should be based on the specific adhesive system and the clinical scenario.
Instruction: Rinne test: does the tuning fork position affect the sound amplitude at the ear? Abstracts: abstract_id: PUBMED:27013057 Rinne test: does the tuning fork position affect the sound amplitude at the ear? Background: Guidelines and text-book descriptions of the Rinne test advise orienting the tuning fork tines in parallel with the longitudinal axis of the external auditory canal (EAC), presumably to maximise the amplitude of the air conducted sound signal at the ear. Whether the orientation of the tuning fork tines affects the amplitude of the sound signal at the ear in clinical practice has not been previously reported. The present study had two goals: determine if (1) there is clinician variability in tuning fork placement when presenting the air-conduction stimulus during the Rinne test; (2) the orientation of the tuning fork tines, parallel versus perpendicular to the EAC, affects the sound amplitude at the ear. Methods: To assess the variability in performing the Rinne test, the Canadian Society of Otolaryngology - Head and Neck Surgery members were surveyed. The amplitudes of the sound delivered to the tympanic membrane with the activated tuning fork tines held in parallel, and perpendicular to, the longitudinal axis of the EAC were measured using a Knowles Electronics Mannequin for Acoustic Research (KEMAR) with the microphone of a sound level meter inserted in the pinna insert. Results: 47.4 and 44.8% of 116 survey responders reported placing the fork parallel and perpendicular to the EAC respectively. The sound intensity (sound-pressure level) recorded at the tympanic membrane with the 512 Hz tuning fork tines in parallel with as opposed to perpendicular to the EAC was louder by 2.5 dB (95% CI: 1.35, 3.65 dB; p < 0.0001) for the fundamental frequency (512 Hz), and by 4.94 dB (95% CI: 3.10, 6.78 dB; p < 0.0001) and 3.70 dB (95% CI: 1.62, 5.78 dB; p = .001) for the two harmonic (non-fundamental) frequencies (1 and 3.15 kHz), respectively. 
The 256 Hz tuning fork held in parallel with the EAC, as opposed to perpendicular to it, was louder by 0.83 dB (95% CI: -0.26, 1.93 dB; p = 0.14) for the fundamental frequency (256 Hz), and by 4.28 dB (95% CI: 2.65, 5.90 dB; p < 0.001) and 1.93 dB (95% CI: 0.26, 3.61 dB; p = .02) for the two harmonic frequencies (500 and 4 kHz) respectively. Conclusions: Clinicians vary in their orientation of the tuning fork tines in relation to the EAC when performing the Rinne test. Placement of the tuning fork tines in parallel as opposed to perpendicular to the EAC results in a higher sound amplitude at the level of the tympanic membrane. abstract_id: PUBMED:33786411 Rinne Test Results: How Badly Can We Be Mistaken? Objective: To establish the extent to which sound amplitudes delivered by a vibrating tuning fork change around its long axis and to evaluate whether such differences in amplitude might change the results of the Rinne test. Study Design: Experimental measurements. Setting: Laboratory setting. Methods: Setup I: a vibrating tuning fork was handheld and manually rotated around its long axis next to a sound recording device (the simulated ear) in order to record sound amplitude data at a full range of angles relative to the device; files were split into segments in which sound amplitude changed: A (from a maximum to a minimum) and B (from a minimum to a maximum). Setup II: a vibrating tuning fork was machine-rotated, and the angle of rotation, along with the sound amplitude, was automatically recorded through a single full rotation. Results: The angles of 0° and 180° (which equate to the established best practice in Rinne testing) were associated with the highest sound amplitudes. All other angles decreased sound amplitude. The greatest decrease in amplitude was recorded at 51° and 130°. This difference ranged from 9.8 to 34.7 dB, depending on the initial amplitude.
Conclusion: The outcome of a Rinne test can be affected if attention is not paid to the precise angle at which the tuning fork is held relative to the ear. The potential of this effect will be greater when high background noise or patient hearing loss requires that the tuning fork be vigorously excited to obtain high sound amplitudes. abstract_id: PUBMED:24085711 Rinne revisited: steel versus aluminum tuning forks. Objective: (1) Determine whether tuning fork material (aluminum vs stainless steel) affects Rinne testing in the clinical assessment of conductive hearing loss (CHL). (2) Determine the relative acoustic and mechanical outputs of 512-Hz tuning forks made of aluminum and stainless steel. Study Design: Prospective, observational. Setting: Outpatient otology clinic. Subjects And Methods: Fifty subjects presenting May 2011 to May 2012 with negative or equivocal Rinne in at least 1 ear and same-day audiometry. Rinne test results using aluminum and steel forks were compared and correlated with the audiometric air-bone gap. Bench top measurements using sound-level meter, microphone, and artificial mastoid. Results: Patients with CHL were more likely to produce a negative Rinne test with a steel fork than with an aluminum fork. Logistic regression revealed that the probability of a negative Rinne reached 50% at a 19 dB air-bone gap for stainless steel versus 27 dB with aluminum. Bench top testing revealed that steel forks demonstrate, in effect, more comparable air and bone conduction efficiencies while aluminum forks have relatively lower bone conduction efficiency. Conclusion: We have found that steel tuning forks can detect a lesser air-bone gap compared to aluminum tuning forks. 
This is substantiated by observations of clear differences in the relative acoustic versus mechanical outputs of steel and aluminum forks, reflecting underlying inevitable differences in acoustic versus mechanical impedances of these devices, and thus efficiency of coupling sound/vibratory energy to the auditory system. These findings have clinical implications for using tuning forks to determine candidacy for stapes surgery. abstract_id: PUBMED:9280421 History of the tuning fork. II: Evolution of the classical experiments by Weber, Rinne and Schwabach Background: Since the 17th century it was known that sounds could be perceived via air conduction and bone conduction and that this provided a means of differentiating between hearing disorders located in the middle ear and those located in the acoustic nerve. For a long time to come, however, there was no need for such a differential diagnosis. After the invention of the tuning fork in 1711 this instrument soon became widely used in music, but it took well over 100 years until it was introduced into physiology and otology. FROM DIRECTIONAL HEARING TO WEBER'S TEST: J. B. Venturi, a physicist in Modena, Italy, in 1802 had shown that the perception of the direction from which a sound is coming is governed by the fact that one ear is hit by the sound more intensely than the other ear. C. T. Tourtual, a physician in Münster, Germany, demonstrated in 1827 that this also holds true for sound conducted via the skull bones. He used a watch as a sound source. He found that occlusion of both ear canals would increase the sensation in both ears equally, but that occlusion of only one ear would increase the sensation only in the occluded ear, thus giving the impression that the sound was coming from that side. He was interested in a comparison between vision and audition, and he concluded that with regard to recognizing the direction of a sensory signal vision was superior to audition. In the same year 1827 C.
Wheatstone, a physicist in London, investigating the mode of vibration of the tympanic membrane and using a tuning fork, found the same phenomena as Tourtual and some more effects. E. H. Weber, an anatomist and physiologist in Leipzig, Germany, described the very same phenomena as Tourtual and Wheatstone once more in 1834. He wanted to prove that airborne sound is perceived by the vestibulum and the semicircular canals, bone conducted sound by the cochlea. None of these investigators was thinking of a clinical use of their findings and made no such suggestion. E. Schmalz, an otologist in Dresden, Germany, in 1845 introduced the tuning fork and the test later named after Weber into otology and explained in great detail all possibilities of a diagnostic evaluation of the test. His grand achievement, however, passed unnoticed in his time. The Rinne Test: A. Rinne, a physician in Göttingen, Germany, in 1855 described the test which later was named after him, in an elaborate treatise on the physiology of the ear. He wanted to demonstrate that in man and animals living in the air, as opposed to those living in water, the conduction of sound via the bones of the skull is just an unavoidable side effect of sound perception. He mentioned a clinical application of his test only in a footnote and obviously never used it himself in a systematic way. His test was made generally known by Lucae in Berlin only after 1880. The value of Weber's and Rinne's tuning fork tests was much disputed even at the turn of the century and only gradually became generally accepted. abstract_id: PUBMED:33454812 Validation of a smartphone-based Rinne test to detect an air-bone gap. Objective: To validate a smartphone-based Rinne test employing the vibration application of mobile telephones. Study Design: Prospective controlled clinical study. Setting: Tertiary referral medical center.
Methods: Twenty consecutive patients hospitalized in the otolaryngology department of a tertiary medical center due to unilateral hearing loss (HL study group), and 30 consecutive inpatients on the same ward who had no otological history (controls) were enrolled. Each participant underwent the traditional 512 Hz tuning fork-based Rinne test, as well as a smartphone-based Rinne test by means of a single uncovered smartphone with a vibration application. The test results were compared to those of formal audiometry. Results: The overall agreement between the traditional Rinne test and the smartphone-based test was 98%. Sensitivity was 85% for both tests, specificity was 90% and 93% for smartphone and tuning fork tests, respectively. The smartphone-based Rinne test could correctly discriminate patients with an air-bone gap ≥ 25 dB at 512 Hz from patients with a lower or no air-bone gap at 512 Hz. The smartphone-based Rinne test could not evaluate two patients with a moderately severe/severe sensorineural hearing loss due to their inability to detect the vibrations. Conclusion: A smartphone-based Rinne test was validated for the detection of an air-bone gap ≥ 25 dB at 512 Hz in the clinical setting. The validity of a patient-operated smartphone-based Rinne test awaits further study. Level Of Evidence: 2B. abstract_id: PUBMED:29661046 Diagnostic Accuracy of Tuning Fork Tests for Hearing Loss: A Systematic Review. Objective (1) To determine the diagnostic accuracy of tuning fork tests (TFTs; Weber and Rinne) for assessment of hearing loss as compared with standard audiometry. (2) To identify the audiometric threshold at which TFTs transition from normal to abnormal, thus indicating the presence of hearing loss. Data Sources PubMed, Ovid Medline, EMBASE, Web of Science, Cochrane, and Scopus and manual bibliographic searches. Review Methods A systematic review of studies reporting TFT accuracy was performed according to a standardized protocol.
Two independent evaluators corroborated the extracted data and assessed risk of bias. Results Seventeen studies with 3158 participants, including adults and children, met inclusion criteria. The sensitivity and specificity of the Rinne test for detecting conductive hearing loss ranged from 43% to 91% and 50% to 100%, respectively, for a 256-Hz fork and from 16% to 87% and 55% to 100% for a 512-Hz fork. The audiometric thresholds at which tests transition from normal to abnormal ranged from 13 to 40 dB of conductive hearing loss for the Rinne test and from 2.5 to 4 dB of asymmetry for the Weber test. Significant heterogeneity in TFT methods and audiometric thresholds to define hearing loss precluded meta-analysis. There is high risk of bias in patient selection for a majority of the studies. Conclusion Variability exists in the reported test accuracy measurements of TFTs for clinical screening, surgical candidacy assessments, and estimation of hearing loss severity. Clinicians should remain mindful of these differences and optimize these techniques in specific clinical applications to improve TFT accuracy. abstract_id: PUBMED:1092293 Accuracy of the Bing and Rinne tuning fork tests. Fifty children seen in an ENT clinic received the Bing and Rinne tuning fork tests, otoscopic examination, and puretone audiometry to determine the accuracy of the tuning fork tests in predicting otoscopic abnormality or conductive hearing loss. Neither tuning fork test showed high accuracy. The Bing test overidentified by yielding negative results (conductive loss) for many ears with normal hearing or sensorineural loss. It correctly identified conductive loss or otoscopic abnormality only slightly better than chance (57% to 66% of the time). The Rinne test identified with high accuracy those ears with a 40 dB or greater air-bone gap. For air-bone gaps of 10-35 dB, only 27% of the ears gave negative results, meaning correct identification.
However, when a negative Rinne occurred, the result was in error only 2% of the time. abstract_id: PUBMED:2754111 Directional sensitivity of sound-pressure levels in the human ear canal. Changes in sound pressures measured in the ear canal are reported for broadband sound sources positioned at various locations about the subject. These location-dependent pressures are one source of acoustical cues for sound localization by human listeners. Sound source locations were tested with horizontal and vertical resolution of 10 degrees. Sound levels were measured with miniature microphones placed inside the two ear canals. Although the measured amplitude spectra varied with the position of the microphone in the ear canal, it is shown that the directional sensitivity at any particular frequency of the broadband stimulus is independent of microphone position anywhere within the ear canal. At any given frequency, the distribution of sound pressures as a function of sound source location formed a characteristic spatial pattern comprising one or two discrete areas from which sound sources produced maximum levels in the ear canal. The locations of these discrete areas varied in horizontal and vertical location according to sound frequency. For example, around 8 kHz, two areas of maximum sensitivity typically were found that were located laterally and were separated from each other vertically, whereas, around 12 kHz, two such areas were found located on the horizontal plane and separated horizontally. The spatial patterns of sound levels were remarkably similar among different subjects, although some frequency scaling was required to accommodate for differences in the subjects' physical sizes. Interaural differences in sound-pressure level (ILDs) at frequencies below about 8 kHz tended to increase monotonically with increasing distance of the sound source from the frontal midline and tended to be relatively constant as a function of vertical source location. 
At higher frequencies, however, ILDs varied both with the horizontal and with the vertical location of the sound source. At some frequencies, asymmetries between the left and right ears in a given subject resulted in substantial ILDs even for midline sound sources. These results indicate the types of horizontal and vertical spatial information that are available from sound level cues over various ranges of frequency and, within a small subject population, indicate the nature of intersubject variability. abstract_id: PUBMED:36200171 The Effect of Stimulation Position and Ear Canal Occlusion on Perception of Bone Conducted Sound. The position of a bone conduction (BC) transducer influences the perception of BC sound, but the relation between the stimulation position and BC sound perception is not entirely clear. In the current study, eleven participants with normal hearing were evaluated for their hearing thresholds and speech intelligibility for three stimulation positions (temple, mastoid, and condyle) and four types of ear canal occlusion produced by headphones. In addition, the sound quality for three types of music was rated with stimulation at the three positions. Stimulation at the condyle gave the best performance while the temple showed the worst performance for hearing thresholds, speech intelligibility, and sound quality. The in-ear headphones gave the highest occlusion effect while fully open headphones gave the least occlusion effect. BC stimulated speech intelligibility improved with greater occlusion, especially for the temple stimulation position. The results suggest that BC stimulation at the condyle is generally superior to the other positions tested in terms of sensitivity, clarity, and intelligibility, and that occlusion with ordinary headphones improves the BC signal. abstract_id: PUBMED:9455950 Clinical utility of the 512-Hz Rinne tuning fork test. 
Objective: This study aimed to examine the reliability of the 512-Hz Rinne tuning fork test to detect conductive hearing losses. The effects of tester experience, the use of masking, and the interpretation of equivocal (+/-) Rinne results on test reliability also were examined. Study Design: Retrospective. Setting: Private otology practice. Patients: 1,000 adult patients (2,000 ears) seen for their initial otologic evaluation. Interventions: Diagnostic. Main Outcome Measure: Sensitivity of the 512-Hz Rinne tuning fork test was assessed by comparing tuning fork results with the pure-tone average air-bone gap. Results: Results showed the 512-Hz Rinne tuning fork test could be very effective at detecting conductive hearing losses when performed by an experienced tester and when masking was used. Sensitivity was lower when masking was not used and lowest when the Rinne was performed by a less-experienced tester. Sensitivity for all groups was improved by interpreting equivocal results as indicating a conductive loss. Conclusions: Despite reports of poor reliability, the 512-Hz Rinne tuning fork test can be an important tool in an otology practice for the detection of conductive hearing losses and for confirming audiometric findings. In primary care settings, the Rinne would be most effective as part of a screening program for conductive hearing losses, but not as the sole indicator for referral. Answer: Yes, the position of the tuning fork does affect the sound amplitude at the ear during the Rinne test. Studies have shown that the orientation of the tuning fork tines relative to the external auditory canal (EAC) can influence the sound amplitude at the tympanic membrane. Specifically, placing the tuning fork tines in parallel with the EAC results in a higher sound amplitude at the level of the tympanic membrane compared to when the tines are held perpendicular to the EAC. 
This has been demonstrated with both 512 Hz and 256 Hz tuning forks, with the parallel position producing louder sound intensities; the effect was statistically significant for the 512 Hz fundamental and for the harmonic frequencies of both forks, though not for the 256 Hz fundamental (PUBMED:27013057). Further research has indicated that the angle at which the tuning fork is held relative to the ear can significantly affect the outcome of the Rinne test. Angles of 0° and 180°, which correspond to the best practice in Rinne testing, are associated with the highest sound amplitudes. Deviations from these angles result in decreased sound amplitude, with the greatest decrease observed at angles of 51° and 130°. The difference in amplitude can range from 9.8 to 34.7 dB depending on the initial amplitude, which suggests that the accuracy of the Rinne test can be compromised if the tuning fork is not held at the correct angle (PUBMED:33786411). These findings underscore the importance of proper technique when performing the Rinne test to ensure accurate assessment of air conduction and to avoid misinterpretation of the results.
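The tuning-fork studies above report mean dB differences together with 95% confidence intervals. As a small illustration of how such an interval relates to the underlying standard error, the sketch below back-calculates the implied standard error from the reported 512 Hz fundamental result (mean 2.5 dB, 95% CI 1.35 to 3.65 dB), assuming a symmetric normal-approximation interval; the papers do not state their exact method, so this is an assumption for illustration.

```python
# Sketch: relation between a mean difference, its standard error, and a 95% CI,
# using the reported 512 Hz values (2.5 dB, CI 1.35-3.65 dB) for illustration.
def ci_95(mean_diff, se):
    """95% CI under the normal approximation (z = 1.96)."""
    half = 1.96 * se
    return (mean_diff - half, mean_diff + half)

# Back-calculate the implied standard error from the reported interval width
se = (3.65 - 1.35) / (2 * 1.96)   # roughly 0.59 dB
lo, hi = ci_95(2.5, se)           # reproduces (1.35, 3.65)
```

Reconstructing the interval this way confirms the reported bounds are internally consistent with a symmetric interval around the mean difference.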
Instruction: Radioactive seed localization breast biopsy and lumpectomy: can specimen radiographs be eliminated? Abstracts: abstract_id: PUBMED:14597442 Radioactive seed localization breast biopsy and lumpectomy: can specimen radiographs be eliminated? Background: Wire localization (WL) is the current standard for surgical diagnosis of nonpalpable breast lesions. Many disadvantages inherent to WL are solved with radioactive seed localization (RSL). This trial investigated the ability of RSL to reduce the need for specimen radiographs and operating room delays associated with WL. Methods: A total of 134 women were entered onto an institutional review board-approved study. RSL was performed by placing a titanium seed containing 0.29 to 20 mCi of iodine-125 to within 1 cm of the suggestive breast lesion. The surgeon used a handheld gamma detector to locate and excise the iodine-125 seed and the lesion. Results: Specimen radiographs were eliminated in 98 (79%) of 124 patients. Surgical seed retrieval was 100% in 124 patients. No seed migration occurred after correct radiographical placement. A total of 26 (21%) of 124 patients required a specimen radiograph; 22 (85%) of these 26 were performed for microcalcifications. Conclusions: After surgical removal, RSL can eliminate specimen radiographs when the radiologist accurately places the seed and the pathologist grossly identifies the lesion. If small microcalcifications are noted before surgery, then specimen radiographs may be necessary. RSL reduced requirements for specimen radiographs, decreased OR time, improved incision placement, and improved resections to clear margins. abstract_id: PUBMED:28554356 Radioactive Seed Localization Program for Patients With Nonpalpable Breast Lesions. Without early diagnosis and treatment, many lives are lost to breast cancer. Increased breast cancer awareness has facilitated research to guide health care providers toward improving patient outcomes.
Research in diagnostic and treatment modalities has expanded to focus on improving the quality of life for patients with breast cancer who are living longer than expected. Providers can offer patients with nonpalpable breast lesions new screening techniques and improved treatment options, including radioactive seed localization lumpectomy. This treatment offers patients the potential for decreased tumor re-excision for positive margins near the surgical site, a lower volume of excised breast tissue, decreased operative time, convenient surgical scheduling, and less pain. Additionally, radioactive seed localization lumpectomy can improve patient and staff member satisfaction. abstract_id: PUBMED:36773530 Preoperative localization of breast lesions: Comparing digital breast tomosynthesis-guided radioactive seed localization versus standard 2D stereotactic radioactive seed localization. Purpose: To compare single seed digital breast tomosynthesis-guided radioseed localization (DBT-L) to standard 2D stereotactic-guided radioseed localization (SGL) of the breast. Methods: A retrospective review of a large tertiary cancer center's database yielded 68 women who underwent preoperative DBT-L from March 2019-December 2019 and a matched cohort of 65 women who underwent SGL during the same period. The electronic medical record and radiology were reviewed for patient characteristics including breast density, exam technique, pre- and post-operative pathology, exam duration, and radiation dose to the patient. To compare margin outcomes between the groups, the chi-square test of independence was used; to compare continuous outcomes such as exam duration and total dose, the Wilcoxon rank sum test was used. Results: DBT-L and SGL localization targets included biopsy marker (62/68, 91% vs 55/65, 85%), distortion (4/68, 6% vs 2/65, <3%), focal asymmetry (1/68 and 1/65, < 2% for both), calcifications (1/68, <2% vs 4/65, 6%), and anatomic landmarks (0% vs 3/65, 5%). 
72% and 71% of localizations were performed for malignant pathology in the DBT-L and SGL groups, respectively. The median duration of DBT-L was 8.3 min vs 10.3 min for SGL, representing statistically significant time savings (p = 0.003). The median total organ dose of DBT-L was 8.6 mGy vs 10.4 mGy for SGL, representing statistically significant dose savings (p = 0.018). The incidence of positive margins at surgery was not statistically different between the groups (p = 0.26). Conclusion: DBT-L demonstrates both time and dose savings for the patient compared to SGL without compromising surgical outcome. abstract_id: PUBMED:31248796 125I radioactive seed localization for non-palpable lesions in breast cancer. Aim: Marking of non-palpable breast lesions with 125I radioactive seeds is an alternative to the use of the surgical wire. The objective of this work is to present the results that we have obtained using radioactive seed localization compared to the reference technique in our center, the wire localization of non-palpable breast lesions. Material And Method: Longitudinal prospective study that includes patients with histological diagnostic of breast cancer, with non-palpable lesions that are candidates to primary surgical treatment by radioactive seed localization (2016-2018) and by wire localization (2015-2016). Histological analysis of the surgical specimen was performed determining the status of surgical margins. The volume of the surgical specimen was calculated. Results: A total of 146 patients were included, 95 who underwent surgery by radioactive seed localization and 51 by wire localization. The mean cube volume of the specimens were 135.67cm3 vs. 190.77cm3 (p=0.017), respectively. Eleven patients who underwent surgery by radioactive seed localization showed affected margins of the specimen (11.6%), versus 7 (13.2%) of wire localization group (p=0.084). 
Reintervention was performed in 9 of the patients marked with seeds and in 7 marked with wires (p=0.49). Conclusion: The use of 125I radioactive seeds is feasible in non-palpable breast lesions, with a low rate of reintervention and volumes of surgical specimens significantly lower than those obtained by wire localization. abstract_id: PUBMED:35992728 Radioactive seed localization of foreign body. Radioactive seed localization is a method widely used within breast cancer treatment. This case represents a 43-year-old male patient with a foreign body in his left axilla. We used radioactive seed localization for marking the foreign body, which made the subsequent surgery possible, quick, and minimal invasive. abstract_id: PUBMED:37225666 Radioactive Seed Localization for Non-Breast Tissue Excision. Lymph nodes are common basins for cancer recurrence; however, during surgery, lymphatic tissue is often indistinguishable from surrounding tissue making local excision difficult. Novel breast surgery techniques have utilized radioactive seed localization (RSL) to preoperatively tag tissue so it can be identified intraoperatively with a gamma probe. Our goal was to assess the use of RSL in non-breast tissues. This was a retrospective case series of non-breast cancer patients undergoing RSL. Overall, 42 patients met inclusion criteria. Pathology results indicated benign findings in 20 patients (47.62%), toxoplasma in 1 patient (2.38%), non-necrotizing granulomatous disease in 2 patients (4.76%), and malignant progression in 19 patients (45.24%). Two patients had non-lymphatic tissue removed: one in the abdominal wall and one in the lower lumbar region. Radioactive seed localization is an effective technique to localize and excise non-palpable lymph nodes and masses identified on imaging, highlighting its wide variety of uses in non-breast cancer cases. 
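The seed-versus-wire comparison above reports positive-margin counts without an effect-size estimate. As a hedged sketch only, the code below computes a relative risk with a 95% CI by the standard log-RR method, using the reported counts (11 of 95 seed patients; 7 of the wire group, taken here as 7 of 51 per the stated group size, although the abstract's 13.2% implies a slightly different denominator); this is an illustration of the method, not an analysis from the paper.

```python
# Sketch: relative risk (RR) of positive margins, seed vs wire localization,
# with a 95% CI via the log-RR (Katz) method. Counts taken from the abstract.
import math

def relative_risk(a, n1, b, n2):
    """RR of the event in group 1 vs group 2, with a 95% CI."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

rr, lo, hi = relative_risk(11, 95, 7, 51)  # positive margins: seed vs wire
```

The resulting interval spans 1, which is consistent with the abstract's report of no statistically significant difference in margin involvement between the two techniques.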
abstract_id: PUBMED:28927740 Comparative Evaluation of Iodine-125 Radioactive Seed Localization and Wire Localization for Resection of Breast Lesions. Purpose: Radioactive seed localization (RSL) uses a titanium seed labeled with iodine-125 energy for surgery of nonpalpable breast lesions. RSL facilitates radiology-surgery scheduling and allows for improved oncoplasty compared with wire localization (WL). The purpose of this work was to compare the 2 techniques. Methods: We performed a retrospective study of all breast lesions operated with RSL between February 2013 and March 2015 at our university institution, and compared with an equivalent number of surgeries performed with a single WL. Imaging and pathology reports were reviewed for information on guidance mode, accuracy of targeting, nature of excised lesion, size and volume of surgical specimen, status of margins, and reinterventions. Results: A total of 254 lesions (247 women) were excised with RSL and compared with 257 lesions (244 women) whose surgery was guided by WL. Both groups were comparable in lesion pathology, guidance mode for RSL or WL positioning, and accuracy of targeting (98% correct). Mean delay between biopsy and surgery was 84 days for RSL versus 103 after WL (P = .04). No differences were noted after RSL or WL for surgical specimen mean weight, largest diameter, and volume excised. For malignancies, the rate of positive margins was comparable (2.8%-3%), with 5 of 10 women in the RSL group who underwent a second surgery displaying residual malignancy compared with 3 of 9 women in the WL group. Conclusions: RSL is safe and accurate, and has comparable surgical endpoints to WL. Because RSL offers flexible scheduling and facilitated oncoplasty, RSL may replace WL for resection of nonpalpable single breast lesions. abstract_id: PUBMED:38396480 Radioactive Seed Localization for Nonpalpable Breast Lesions: Systematic Review and Meta-Analysis. 
Background: This study is a systematic review with meta-analysis comparing radioactive seed localization (RSL) versus radio-guided occult lesion localization (ROLL) and wire-guided localization (WGL) for patients with impalpable breast cancer undergoing breast-conserving surgery and evaluating efficacy, safety, and logistical outcomes. The protocol is registered in PROSPERO with the number CRD42022299726. Methods: A search was conducted in the Embase, Lilacs, Pubmed, Scielo, Web of Science, and clinicaltrials.gov databases, in addition to a manual search in the reference list of relevant articles, for randomized clinical trials and cohort studies. Studies selected were submitted to their own data extraction forms and risk of bias analysis according to the ROB 2 and ROBINS 1 tools. A meta-analysis was performed, considering the random effect model, calculating the relative risk or the mean difference for dichotomous or continuous data, respectively. The quality of the evidence generated was analyzed by outcome according to the GRADE tool. Overall, 46 articles met the inclusion criteria and were included in this systematic review; of these, 4 studies compared RSL and ROLL with a population of 1550 women, and 43 compared RSL and WGL with a population of 19,820 women. Results: The results showed that RSL is a superior method to WGL in terms of surgical efficiency in the impalpable breast lesions' intraoperative localization, and it is at least equivalent to ROLL. Regarding security, RSL obtained results equivalent to the already established technique, the WGL. In addition to presenting promising results, RSL has been proven to be superior to WGL and ROLL technologies. abstract_id: PUBMED:32363081 How to Safely Perform Magnetic Resonance Imaging-guided Radioactive Seed Localizations in the Breast. When a breast lesion is seen on only magnetic resonance imaging (MRI) and needs to be excised, pre-operative MRI-guided radioactive seed localization (RSL) is preferred. 
However, MRI safety and radiation safety issues associated with the inability to recover a potentially lost seed in the MRI scanner room (Zone IV) have precluded this in the past. This manuscript provides a protocol for MRI-guided RSLs that meets MRI safety and radiation safety criteria established by the American College of Radiology and the Nuclear Regulatory Commission. To the authors' best knowledge, this has never been published in the literature. The complete protocol is provided. abstract_id: PUBMED:36447622 Radioactive Seed Localization for Conservative Surgery of Nonpalpable Breast Cancer: Recommendations for Technology Implantation Program. Background: Radioactive seed localization (RSL) is used in conservative surgery for impalpable breast cancer to assist the surgeon in accurately locating and excising the lesion site. This study aims to present recommendations for RSL program implementation in health institutions that perform breast cancer conservative surgery with intraoperative localization. Methods: An extensive literature review was performed. It covers: the committee responsible for implementing the program; the composition of the necessary multidisciplinary team; the role of the radiological safety committee; facility licensing; professional training; the materials and instrumentation associated with the technique; and the seed tracking system. Results: 13 topics are presented. The Program Implementation Committee must be formed by leaders from each department. The committee assumes responsibility for evaluating the necessary processes and presenting the schedule for program implementation. Since the procedure is classified as a nuclear medicine procedure, it requires licensing. Professional team formation, education, and training are a priority, and simulation exercises are necessary. The materials and instrumentation associated with the technique must be well known by the team, and they should practice using radiation detectors.
Each seed must be tracked at all times, from the moment it is received until its disposal; an inventory for tracking seeds is provided. Radiological safety aspects such as the ALARA principle are presented, together with a full description of the radiological procedure for placing the seeds, the surgical removal, and the specimen handling in pathology, focusing on how to locate and retrieve the seed. After removal, the seeds can be placed in storage to await full radioactive decay or be returned to the manufacturer. Conclusions: The procedure has the advantage of extending the interval between seed insertion and surgical removal to up to 2 months. Regular multidisciplinary team meetings during program development are important to create a realistic timeline, with briefing meetings after the first 1-5 RSL cases and annual or biannual follow-up meetings to discuss any issues or incidents. A graphical abstract (created by Macrovector, obtained from Freepik at https://br.freepik.com/fotos-vetores-gratis/oncologia) summarizes the elements needed to implement the RSL technique discussed in this paper. Highlights: This study presents recommendations for RSL program implementation in hospitals. It was performed through an extensive descriptive and qualitative literature review. Topics 1: Implementation Committee, Professional Team Training, Instrumentation. Topics 2: Radiological Safety, Patient Consent, Radiological Process. Topics 3: Surgical Procedure, Pathology, Seeds Disposal. Answer: Radioactive seed localization (RSL) is a technique used for the surgical diagnosis of nonpalpable breast lesions, which has been investigated for its ability to reduce the need for specimen radiographs and operating room delays associated with wire localization (WL).
In a study involving 134 women, RSL was performed by placing a titanium seed containing iodine-125 within 1 cm of the suggestive breast lesion, and the surgeon used a handheld gamma detector to locate and excise the seed and lesion. The results showed that specimen radiographs were eliminated in 79% of patients, and surgical seed retrieval was 100% successful with no seed migration after correct placement. However, 21% of patients required a specimen radiograph, primarily for microcalcifications. The study concluded that RSL can eliminate specimen radiographs when the seed is accurately placed by the radiologist and the lesion is grossly identified by the pathologist. RSL was found to reduce the need for specimen radiographs, decrease operating room time, improve incision placement, and improve resections to clear margins (PUBMED:14597442). In summary, while RSL has the potential to eliminate the need for specimen radiographs in a significant number of cases, it may still be necessary in certain situations, such as when small microcalcifications are present before surgery. The overall effectiveness of RSL in reducing the need for specimen radiographs and improving surgical outcomes has been supported by various studies, making it a valuable technique in the surgical management of nonpalpable breast lesions (PUBMED:28554356, PUBMED:36773530, PUBMED:31248796, PUBMED:35992728, PUBMED:37225666, PUBMED:28927740, PUBMED:38396480, PUBMED:32363081, PUBMED:36447622).
Instruction: Racial disparities in HIV virologic failure: do missed visits matter? Abstracts: abstract_id: PUBMED:19295340 Racial disparities in HIV virologic failure: do missed visits matter? Background: Racial/ethnic health care disparities are well described in people living with HIV/AIDS, although the processes underlying observed disparities are not well elucidated. Methods: A retrospective analysis nested in the University of Alabama at Birmingham 1917 Clinic Cohort observational HIV study evaluated patients between August 2004 and January 2007. Factors associated with appointment nonadherence, a proportion of missed outpatient visits, were evaluated. Next, the role of appointment nonadherence in explaining the relationship between African American race and virologic failure (plasma HIV RNA >50 copies/mL) was examined using a staged multivariable modeling approach. Results: Among 1221 participants, a broad distribution of appointment nonadherence was observed, with 40% of patients missing at least 1 in every 4 scheduled visits. The adjusted odds of appointment nonadherence were 1.85 times higher in African American patients compared with whites [95% confidence interval (CI) = 1.61 to 2.14]. Appointment nonadherence was associated with virologic failure (odds ratio = 1.78, 95% CI = 1.48 to 2.13) and partially mediated the relationship between African American race and virologic failure. African Americans had 1.56 times the adjusted odds of virologic failure (95% CI = 1.19 to 2.05), which declined to 1.30 (95% CI = 0.98 to 1.72) when controlling for appointment nonadherence, a hypothesized mediator. Conclusions: Appointment nonadherence was more common in African American patients, associated with virologic failure, and seemed to explain part of observed racial disparities in virologic failure. abstract_id: PUBMED:30793009 Racial Disparities in Virologic Failure and Tolerability During Firstline HIV Antiretroviral Therapy. 
Background: Racial/ethnic disparities in HIV outcomes have persisted despite effective antiretroviral therapy. In a study of initial regimens, we found viral suppression varied by race/ethnicity. In this exploratory analysis, we use clinical and socioeconomic data to assess factors associated with virologic failure and adverse events within racial/ethnic groups. Methods: Data were from AIDS Clinical Trial Group A5257, a randomized trial of initial regimens with either atazanavir/ritonavir, darunavir/ritonavir, or raltegravir (each combined with tenofovir DF and emtricitabine). We grouped participants by race/ethnicity and then used Cox-proportional hazards regression to examine the impact of demographic, clinical, and socioeconomic factors on the time to virologic suppression and time to adverse event reporting within each racial/ethnic group. Results: We analyzed data from 1762 participants: 757 self-reported as non-Hispanic black (NHB), 615 as non-Hispanic white (NHW), and 390 as Hispanic. The proportion with virologic failure was higher for NHB (22%) and Hispanic (17%) participants compared with NHWs (9%). Factors associated with virologic failure were poor adherence and higher baseline HIV RNA level. Prior clinical AIDS diagnosis was associated with virologic failure for NHBs only, and unstable housing and illicit drug use for NHWs only. Factors associated with adverse events were female sex in all groups and concurrent use of medications for comorbidities in NHB and Hispanic participants only. Conclusions: Clinical and socioeconomic factors that are associated with virologic failure and tolerability of antiretroviral therapy vary between and within racial and ethnic groups. Further research may shed light into mechanisms leading to disparities and targeted strategies to eliminate those disparities. 
abstract_id: PUBMED:27979005 Racial Disparities in HIV Care Extend to Common Comorbidities: Implications for Implementation of Interventions to Reduce Disparities in HIV Care. Background: Prior studies have described racial disparities in the quality of care for persons with HIV infection, but it is unknown if these disparities extend to common comorbid conditions. To inform implementation of interventions to reduce disparities in HIV care, we examined racial variation in a set of quality measures for common comorbid conditions among Veterans in care for HIV in the United States. Method: The cohort included 23,974 Veterans in care for HIV in 2013 (53.4% black; 46.6% white). Measures extracted from electronic health record and administrative data were receipt of combination antiretroviral therapy (cART), HIV viral control (serum RNA < 200 copies/ml among those on cART), hypertension control (blood pressure < 140/90 mm Hg among those with hypertension), diabetes control (hemoglobin A1C < 9% among those with diabetes), lipid monitoring, guideline-concordant antidepressant prescribing, and initiation and engagement in substance use disorder (SUD) treatment. Black persons were less likely than their white counterparts to receive cART (90.2% vs. 93.2%, p<.001), and experience viral control (84.6% vs. 91.3%, p<.001), hypertension control (61.9% vs. 68.3%, p<.001), diabetes control (85.5% vs. 89.5%, p<.001), and lipid monitoring (81.5% vs. 85.2%, p<.001). Initiation and engagement in SUD treatment were similar among blacks and whites. Differences remained after adjusting for age, comorbidity, retention in HIV care, and a measure of neighborhood social disadvantage created from census data. Significance: Implementation of interventions to reduce racial disparities in HIV care should comprehensively address and monitor processes and outcomes of care for key comorbidities. 
abstract_id: PUBMED:34448243 Virtual group visits to reduce psychosocial distress and racial disparities in pediatric vitiligo and alopecia areata patients. We examined the potential for virtual, pediatric group visits to reduce racial disparities in psychosocial isolation and access to care among children of color with pediatric vitiligo and alopecia areata as well as their caregivers. We have conducted 10 visits with 30 pediatric skin of color patients between 5 and 16 years old: 18 patients with alopecia areata, 12 with vitiligo, 59% female, 64% Black, and 23% Latinx, and 33 parents. After the visits, children and parents felt they had more people they knew and could reach out to about their dermatologic condition, the visits exceeded both children's and parents' expectations, and both children and parents reported that they would continue attending future group visits if offered. Group visits in the virtual format may present an opportunity for reducing racial disparities in psychosocial isolation and access to care across geographic locations in pediatric dermatology. abstract_id: PUBMED:28992096 Evaluating the Population Impact on Racial/Ethnic Disparities in HIV in Adulthood of Intervening on Specific Targets: A Conceptual and Methodological Framework. Reducing racial/ethnic disparities in human immunodeficiency virus (HIV) disease is a high priority. Reductions in HIV racial/ethnic disparities can potentially be achieved by intervening on important intermediate factors. The potential population impact of intervening on intermediates can be evaluated using observational data when certain conditions are met. However, using standard stratification-based approaches commonly employed in the observational HIV literature to estimate the potential population impact in this setting may yield results that do not accurately estimate quantities of interest. 
Here we describe a useful conceptual and methodological framework for using observational data to appropriately evaluate the impact of interventions on HIV racial/ethnic disparities. This framework reframes relevant scientific questions in terms of a controlled direct effect and estimates a corresponding proportion eliminated. We review methods and conditions sufficient for accurate estimation within the proposed framework. We use the framework to analyze data on 2,329 participants in the CFAR [Centers for AIDS Research] Network of Integrated Clinical Systems (2008-2014) to evaluate the potential impact of universal prescription of and ≥95% adherence to antiretroviral therapy on racial disparities in HIV virological suppression. We encourage the use of the described framework to appropriately evaluate the potential impact of targeted interventions in addressing HIV racial/ethnic disparities using observational data. abstract_id: PUBMED:27629866 Understanding Determinants of Racial and Ethnic Disparities in Viral Load Suppression. Racial and ethnic disparities in viral load suppression (VLS) have been well documented among people living with HIV (PLWH). The authors hypothesized that a contemporary analytic technique could reveal factors underlying these disparities and provide more explanatory power than broad stereotypes. Classification and regression tree analysis was used to detect factors associated with VLS among 11,419 adult PLWH receiving treatment from 186 New York State HIV clinics in 2013. A total of 8885 (77.8%) patients were virally suppressed. The algorithm identified 8 mutually exclusive subgroups characterized by age, housing stability, drug use, and insurance status but neither race nor ethnicity. Our findings suggest that racial and ethnic disparities in VLS exist but likely reflect underlying social and behavioral determinants of health.
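The classification and regression tree (CART) analysis used in the study above partitions patients into mutually exclusive subgroups by repeatedly choosing the feature split that most reduces outcome impurity. As a rough illustration only — the feature names and toy data below are invented for this sketch, not drawn from the study — a minimal Gini-impurity split selector might look like this:

```python
def gini(labels):
    """Gini impurity of a set of binary outcome labels (1 = virally suppressed)."""
    n = len(labels)
    p = sum(labels) / n
    return 1 - p ** 2 - (1 - p) ** 2

def best_split(rows, labels):
    """Pick the binary feature whose split yields the lowest weighted child impurity."""
    best_feature, best_impurity = None, float("inf")
    for feature in rows[0]:
        left = [y for x, y in zip(rows, labels) if x[feature]]
        right = [y for x, y in zip(rows, labels) if not x[feature]]
        if not left or not right:
            continue  # this feature does not actually split the node
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if weighted < best_impurity:
            best_feature, best_impurity = feature, weighted
    return best_feature, best_impurity

# Hypothetical toy data: housing stability perfectly predicts suppression here.
patients = [
    {"stable_housing": 1, "age_over_50": 1},
    {"stable_housing": 1, "age_over_50": 0},
    {"stable_housing": 1, "age_over_50": 0},
    {"stable_housing": 0, "age_over_50": 1},
    {"stable_housing": 0, "age_over_50": 1},
    {"stable_housing": 0, "age_over_50": 0},
]
suppressed = [1, 1, 1, 0, 0, 0]
feature, impurity = best_split(patients, suppressed)  # chooses "stable_housing"
```

A full CART implementation recurses on each child node until a stopping rule is met; the leaves of that tree are how mutually exclusive subgroups like the study's 8 arise.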
abstract_id: PUBMED:32762633 Exploring Changes in Racial/Ethnic Disparities of HIV Diagnosis Rates Under the "Ending the HIV Epidemic: A Plan for America" Initiative. Objectives: Racial/ethnic disparities in HIV diagnosis rates remain despite the availability of effective treatment and prevention tools in the United States. In 2019, President Trump announced the "Ending the HIV Epidemic: A Plan for America" (EHE) initiative to reduce new HIV infections in the United States at least 75% by 2025 and at least 90% by 2030. The objective of this study was to show the potential effect of the EHE initiative on racial/ethnic disparities in HIV diagnosis rates at the national level. Methods: We used 2017 HIV diagnoses data from the Centers for Disease Control and Prevention National HIV Surveillance System. We developed a counterfactual scenario to determine changes in racial/ethnic disparities if the 2017 HIV diagnosis rates were reduced by 75% in the geographic regions targeted by the EHE initiative. We used 4 measures to calculate results: rate ratio, population-attributable proportion (PAP), Gini coefficient, and Index of Disparity. Results: The relative measures of racial/ethnic disparity decreased by 9%-21% in the EHE scenario compared with the 2017 HIV diagnoses data. The largest decrease was in the Hispanic/Latino:white rate ratio (-20.6%) and in the black:white rate ratio (-18.2%). The PAP measure decreased by 11.5%. The absolute versions of the Index of Disparity (unweighted and weighted) were approximately 50% lower in the EHE scenario than in the 2017 HIV diagnoses data. Conclusions: EHE efforts could reduce but will not eliminate racial/ethnic disparities in HIV diagnosis rates. Efforts to address racial/ethnic disparities should continue, and innovative approaches, specifically those that focus on social and structural factors, should be developed and implemented for populations that are disproportionately affected by HIV in the United States. 
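The disparity measures named in the abstract above (rate ratio, Index of Disparity, and so on) are simple arithmetic on group-specific diagnosis rates. As a hedged sketch — the rates below are made-up numbers, and the Index of Disparity follows one common formulation (mean absolute deviation of the non-reference group rates from a reference rate, expressed as a percentage of that reference) — they can be computed like this:

```python
def rate_ratio(group_rate, reference_rate):
    """Relative disparity of one group versus a reference group."""
    return group_rate / reference_rate

def index_of_disparity(rates, reference):
    """Mean absolute deviation of non-reference group rates from the
    reference rate, as a percentage of the reference rate."""
    deviations = [abs(rate - rates[reference])
                  for group, rate in rates.items() if group != reference]
    return sum(deviations) / len(deviations) / rates[reference] * 100

# Hypothetical HIV diagnosis rates per 100,000 population (illustrative only).
rates = {"group_a": 40.0, "group_b": 15.0, "group_c": 5.0}
rr = rate_ratio(rates["group_a"], rates["group_c"])      # 8.0
idisp = index_of_disparity(rates, reference="group_c")   # 450.0
```

A counterfactual scenario like the one in the abstract would recompute these measures after scaling down the rates in targeted groups and compare them with the baseline values.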
abstract_id: PUBMED:26746424 Persistent Racial Disparities in HIV Infection in the USA: HIV Prevalence Matters. Objectives: Despite increased funding and efforts to prevent and control HIV infections in the black and Hispanic communities, racial disparities persist in the USA. We used a mathematical model to explain the phenomena. Methods: A mathematical model was constructed to project HIV prevalence ratio (PR), incidence rate ratio (IRR), and HIV-specific mortality rate ratio (MRR) among blacks and Hispanics vs. whites in two scenarios: (1) an annual reduction in HIV incidence rate at the 2007-2010 level and (2) an annual reduction in HIV incidence rate at the 2007-2010 level among whites (4.2 %) and twice that of whites among blacks and Hispanics (8.4 %). Results: In scenario no. 1, the PR, IRR, and MRR among blacks would decrease from 7.6 to 5.8, 7.9 to 5.9, and 11.3 to 5.3 and among Hispanics from 2.8 to 1.8, 3.1 to 1.9, and 2.3 to 1.0, respectively. In scenario no. 2, the PR, IRR, and MRR among blacks would decrease from 7.6 to 5.1, 7.9 to 2.5, and 11.3 to 4.7 and among Hispanics from 2.8 to 1.6, 3.1 to 0.8, and 2.3 to 0.9, respectively. Conclusions: Much of the persistent racial disparities in HIV infection in the USA, as measured by PR, IRR, and MRR, can be explained by higher HIV prevalence among blacks and Hispanics. The public health community should continue its efforts to reduce racial disparities, but also need to set realistic goals and measure progress with sensitive indicators. abstract_id: PUBMED:36220686 Changes in Racial and Ethnic Disparities of HIV Diagnoses Among Adolescents and Young Adults Aged 13-24 Years, 2015-2019. Purpose: We examined changes in racial/ethnic disparities in HIV diagnoses among adolescents and young adults aged 13-24 years from 2015 through 2019. 
Methods: We used national surveillance data for 2015-2019 from AtlasPlus to calculate 12 absolute and relative disparity measures for 7 racial/ethnic groups to understand HIV diagnosis trends over time. We calculated four absolute measures (Black-to-White rate difference, Hispanic-to-White rate difference, Absolute Index of Disparity [ID], population-weighted Absolute ID) and eight relative measures (Black-to-White rate ratio, Hispanic-to-White rate ratio, ID, population-weighted ID, population attributable proportion, Gini coefficient, Theil index, and mean log deviation). Results: HIV diagnosis rates decreased by 15.9% across all racial/ethnic groups combined. All the absolute disparity measures we examined indicated substantial reductions (13.5%-18.5%) in absolute disparities. Most of the relative disparity measures (eight of eight population-unadjusted measures and five of eight population-adjusted measures) declined as well, but the change was relatively modest and ranged from a 3.3% decrease to a 2.1% increase across the measures. Discussion: Despite progress, racial/ethnic disparities in HIV diagnoses among adolescents and young adults remain. Programs and services that are culturally relevant and tailored for this population may assist with continued progress toward reducing racial/ethnic disparities. abstract_id: PUBMED:35133528 Age and Racial Disparities in Telehealth Use Among People with HIV During the COVID-19 Pandemic. The COVID-19 pandemic resulted in widespread telehealth expansion. To determine telehealth uptake and potential sociodemographic differences in utilization among people with HIV (PwH), we examined HIV care appointments at the University of Chicago Medicine, an urban tertiary hospital. Visits between March 15th and September 9th for 2019 and 2020 were categorized as in-person, telehealth, and within telehealth, video, and phone. 
Differences in visit types were modeled using logistic regression to examine associations with demographics, insurance type, and HIV risk transmission category. Telehealth appointments were more likely for those aged 46-60 versus those 31-45 [46-60; AOR 1.89 95% CI (1.14, 3.15)]. Black race and participants of other races were less likely to use telehealth compared to whites [Black: AOR 0.33 95% CI (0.16, 0.64), other: AOR 0.10 95% CI (0.02, 0.34)]. Future studies should continue to examine potential disparities in telehealth use among PwH, including age and racial differences. Answer: Yes, missed visits do matter when it comes to racial disparities in HIV virologic failure. A retrospective analysis nested in the University of Alabama at Birmingham 1917 Clinic Cohort observational HIV study evaluated patients and found that appointment nonadherence was more common in African American patients, associated with virologic failure, and seemed to explain part of the observed racial disparities in virologic failure. Specifically, African Americans had higher adjusted odds of virologic failure, which declined when controlling for appointment nonadherence, indicating that missed visits partially mediated the relationship between African American race and virologic failure (PUBMED:19295340). Furthermore, other studies have found that racial/ethnic disparities in HIV outcomes persist despite effective antiretroviral therapy, with factors such as poor adherence, higher baseline HIV RNA level, prior clinical AIDS diagnosis, unstable housing, and illicit drug use being associated with virologic failure in different racial/ethnic groups (PUBMED:30793009). These findings suggest that addressing the underlying causes of missed visits and improving adherence to appointments could be key strategies in reducing racial disparities in HIV virologic failure.
Instruction: Geriatric syndromes as outcome measures of hospital care: can administrative data be used? Abstracts: abstract_id: PUBMED:29119368 Patient reported outcome measures in geriatric care: first experiences Background: There are difficulties in expressing the value of geriatric care in outcome measures such as recovery or mortality rates. Rather, the goal of geriatric care is to maintain quality of life and functionality. As such, patient reported outcome measures (PROMs) may be more effective in measuring the value healthcare creates in geriatric patients. In 2015 the Dutch Geriatrics Society asked its Committee Quality of Care Measurement to select a suitable PROM for the purpose of measuring the outcomes of geriatric hospital care. Methods/results: The goal of this PROM is to measure the outcomes of a hospital admission from the perspective of the elderly patient who was admitted to a geriatric ward. A group of caregivers in geriatric care identified four possible PROMs in the literature, and based on selection criteria the TOPICS-MDS was chosen as most suitable. To increase the feasibility of implementation in daily practice, an item reduction study was performed, resulting in a short form: TOPICS-SF. Two pilot studies took place on the geriatric wards of three hospitals. A response of 62% was observed during the first pilot with TOPICS-MDS and a response of 37% during the second pilot with TOPICS-SF. The Katz-15 score improved during the hospital stay and during the first month at home after discharge. Conclusion: The TOPICS-SF has been selected as the PROM for the older patient receiving geriatric care and is feasible in practice. More research in different settings and with different measurement moments is needed to evaluate the responsiveness of the TOPICS-SF and the conditions for feasible implementation in daily practice.
Objective: To determine how often hospital administrative databases capture the occurrence of two common geriatric syndromes, pressure ulcers and incontinence. Design: Retrospective comparison of a nursing home and hospital database. Setting: Department of Veterans Affairs (VA) hospitals. Participants: All patients between 1992 and 1996 discharged from VA acute medical care and admitted to a VA nursing home. Measurements: The presence of incontinence or a pressure ulcer (stage 2 or larger) on admission to the nursing home was determined. Hospital discharge diagnoses were then reviewed to determine whether these conditions were recorded. The effect of ulcer stage, total number of discharge diagnoses, and temporal trends on the recording of these conditions in discharge diagnoses was also noted. Results: There were 17,004 admissions to nursing homes from acute care in 1996; 12.7% had a pressure ulcer and 43.4% were incontinent. Among these patients with a pressure ulcer, the hospital discharge diagnosis listed an ulcer in 30.8% of cases, and incontinence was included correctly as a discharge diagnosis in 3.4%. While deeper pressure ulcers were more likely to be recorded than superficial ulcers (P < .01), nearly 50% of stage 4 ulcers were not listed among hospital discharge diagnoses. Patients with more discharge diagnoses were more likely to record both conditions correctly. From 1992 to 1996, small but significant (P = .001) improvements were noted in the correct recording of these geriatric syndromes as discharge diagnoses. Conclusions: The occurrence of pressure ulcers and incontinence cannot be determined from hospital administrative databases and should not be used as outcomes when measuring quality of care among hospitalized patients. abstract_id: PUBMED:34083151 Geriatric Assessment and In-Hospital Economic Cost of Elderly Patients With Acute Coronary Syndromes. 
Background: Elderly patients with acute coronary syndromes (ACS) are at higher risk for complications and health care resource expenditure. No previous study has assessed the specific contribution of frailty and other geriatric syndromes to the in-hospital economic cost in this setting. Method: Unselected patients with ACS aged ≥75 years were prospectively included. A comprehensive geriatric assessment was performed during hospitalisation. Hospitalisation-related cost per patient was calculated with an analytic accounting method, including hospital stay-related expenditures, interventions, and consumption of devices. Expenditure was expressed in Euros (2019). The contribution of geriatric syndromes and clinical factors to the economic cost was assessed with a linear regression method. Results: A total of 194 patients (mean age 82.6 years) were included. Mean length of hospital stay was 11.3 days. The admission-related economic cost was €6,892.15 per patient. Most of this cost was attributable to hospital length of stay (77%). The performance of an invasive strategy during the admission was associated with economic cost (p=0.008). Of all the ageing-related variables, comorbidity showed the most significant association with economic cost (p=0.009). Comorbidity, disability, nutritional risk, and frailty were associated with the hospital length of stay-related component of the economic cost. The final predictive model of economic cost included age, previous heart failure, systolic blood pressure, Killip class at admission, left main disease, and Charlson index. Conclusions: Management of ACS in elderly patients is associated with a significant economic cost, mostly due to hospital length of stay. Comorbidity, together with the severity of the coronary event, is the main contributor to in-hospital resource expenditure.
abstract_id: PUBMED:34758915 Benefits, Challenges, and Opportunities of Integrating Patient-Reported Outcome Measures in Geriatric Oncology to Advance Patient Screening for Functional Fitness for Treatment. Objectives: To discuss potential benefits, challenges, and opportunities of integrating patient-reported outcome measures in geriatric oncology, with the goal being to advance patient screening for 'functional fitness for treatment.' Data Sources: This is narrative review of the relevant literature using PubMed. Conclusion: Patient screening in geriatric cancer care can be driven by patient self-reports. Most of the research on patient-reported outcome measures in cancer care to date has involved, in different proportions and to different extents, older patients with cancer (≥70 years of age). However, reports specifically targeting implementation of patient-reported outcome measures in geriatric cancer care are surprisingly fewer. The International Society of Geriatric Oncology Priorities Initiative calls for more research into the use of patient-reported outcome measures for older adults with cancer. More feasibility studies will be required to evaluate patient-reported outcome measures as fit-for-purpose, reporting frequency, patient burden, integration and display of patient-reported information, and workflow impact. Developing, adapting, and validating patient-reported outcome measures for use in geriatric oncology will be key to ensure adequacy of measurement. Groundwork involving consultation of all potential end users of patient-reported information is paramount because there is no one size fits all. When the ground is ready for implementation adequate preparation, training, resources, and ongoing support will be needed. Implications For Nursing Practice: Working within the multidisciplinary team, nurses can promote patient screening that is led by the patient. 
Nursing roles can be developed, supported, expanded, or repurposed to involve greater use of patient-reported outcome measures for several key patient outcomes in geriatric cancer care. abstract_id: PUBMED:30270027 The Course of Geriatric Syndromes in Acutely Hospitalized Older Adults: The Hospital-ADL Study. Objectives: To establish the prevalence and course of geriatric syndromes from hospital admission up to 3 months postdischarge and to determine the probability to retain geriatric syndromes over the period from discharge until 3 months postdischarge, once they are present at admission. Design: Prospective multicenter cohort study conducted between October 2015 and June 2017. Setting And Participants: Acutely hospitalized patients aged 70 years and older recruited from internal, cardiology, and geriatric wards of 6 Dutch hospitals. Measures: Cognitive impairment, depressive symptoms, apathy, pain, malnutrition, incontinence, dizziness, fatigue, mobility impairment, functional impairment, fall risk, and fear of falling were assessed at admission, discharge, and 1, 2, and 3 months postdischarge. Generalized estimating equations analysis were performed to analyze the course of syndromes and to determine the probability to retain syndromes. Results: A total of 401 participants [mean age (standard deviation) 79.7 (6.7)] were included. At admission, a median of 5 geriatric syndromes were present. Most prevalent were fatigue (77.2%), functional impairment (62.3%), apathy (57.5%), mobility impairment (54.6%), and fear of falling (40.6%). At 3 months postdischarge, an average of 3 syndromes were present, of which mobility impairment (52.7%), fatigue (48.1%), and functional impairment (42.5%) were most prevalent. Tracking analysis showed that geriatric syndromes that were present at admission were likely to be retained. 
The following 6 geriatric syndromes were most likely to stay present postdischarge: mobility impairment, incontinence, cognitive impairment, depressive symptoms, functional impairment, and fear of falling. Implications: Acutely hospitalized older adults exhibit a broad spectrum of highly prevalent geriatric syndromes. Moreover, patients are likely to retain symptoms that are present at admission postdischarge. Our study underscores the need to address a wide range of syndromes at admission, the importance of communication on syndromes to the next care provider, and the need for adequate follow-up care and syndrome management postdischarge. abstract_id: PUBMED:24801947 Geriatric syndromes in individuals admitted to vascular and urology surgical units. Objectives: To document the incidence of geriatric syndromes (delirium, functional decline, falls, and pressure ulcers) in two surgical units and to determine the association between the occurrence of geriatric syndromes and admission type (elective vs nonelective), severity of surgery, and surgical subspecialty unit. Design: Retrospective cohort study. Setting: One vascular surgical unit and one urology surgical unit in an Australian tertiary teaching hospital. Participants: Individuals aged 65 and older admitted to a study unit for 3 days or more (N = 112). Measurements: Delirium was identified using a validated chart extraction tool. Functional decline from admission to discharge was identified from nursing documentation. Falls were identified according to documentation in the medical record cross-checked with the hospital incident reporting system. Pressure ulcers were identified according to documentation in the medical record. Results: Geriatric syndromes were present in 32% of participants. Delirium was identified in 21%, functional decline in 14%, falls in 8%, and pressure ulcers in 5%. 
Individuals admitted directly from the emergency or outpatient department and interhospital transfers (nonelective) were significantly more likely to develop any geriatric syndrome than those on an elective surgery list before admission to the hospital (41% vs 18%, P = .01). In multivariable analysis, nonelective admission (odds ratio (OR) = 3.3, 95% confidence interval (CI) = 1.6-4.7, P = .005), major surgery (OR = 3.1, 95% CI = 1.7-3.7, P = .004) and preexisting impairment in activities of daily living (OR 2.9, 95% CI 1.5-3.6, P = .007) increased the likelihood of geriatric syndromes. Conclusion: Geriatric syndromes are common in older adults undergoing surgery, and nonelective admission and major surgery increase the likelihood of geriatric syndromes occurring during hospitalization. Baseline dependency in ADLs is an important risk factor for the occurrence of these conditions. abstract_id: PUBMED:36629085 Risk of Four Geriatric Syndromes: A Comparison of Mental Health Care and General Hospital Inpatients. An observational, cross-sectional study is conducted to compare elevated risk scores of four geriatric syndromes (falls, malnutrition, physical impairment, delirium) in older hospitalized psychiatric patients (n=178) with patients hospitalized in a general hospital (n=687). The median age of all patients was 78 years (IQR 73.3-83.3), 53% were female. After correction for age and gender, we found significantly more often an elevated risk in the mental health care group, compared to the general hospital group of falls (Odds Ratio (OR) = 1.75; 95% Confidence Interval (CI) 1.18-2.57), malnutrition (OR = 4.12; 95% CI 2.67-6.36) and delirium (OR = 6.45; 95% CI 4.23-9.85). The risk on physical impairment was not statistically significantly different in both groups (OR = 1.36; 95% CI .90-2.07). 
Older mental health care patients have a higher risk of developing geriatric syndromes than general hospital patients of the same age and gender, which might be explained by a higher level of frailty. abstract_id: PUBMED:35241997 Geriatric syndromes and subsequent health-care utilization among older community dwellers in Stockholm. Little is known about the long-term effect of geriatric syndromes on health-care utilization. This study aims to determine the association between geriatric syndromes and health-care utilization during a four-year period among older community dwellers. Based on the Stockholm Public Health Cohort study, a total of 6700 community dwellers aged ≥65 years were included. From a baseline survey in 2006, geriatric syndromes were defined as having at least one of the following: insomnia, functional decline, urinary incontinence, depressive symptoms and vision impairment. Health-care utilization was identified by linkages at the individual level with register data with a four-year follow-up. Cox regression was performed to estimate the associations. Compared to those without geriatric syndromes, participants with any geriatric syndromes had a higher prevalence of frequent hospitalizations, long hospital stays, frequent outpatient visits and polypharmacy in each of the follow-up years. After controlling for covariates, having any geriatric syndromes was associated with higher levels of utilization of inpatient and outpatient care as well as polypharmacy. The association was stable over time, and the fully adjusted hazard ratio (95% confidence interval) remained stable in frequent hospitalizations (from 1.89 [1.31, 2.73] in year 1 to 1.70 [1.23, 2.35] in year 4), long hospital stay (from 1.75 [1.41, 2.16] to 1.49 [1.24, 1.78]), frequent outpatient visits (from 1.40 [1.26, 1.54] to 1.33 [1.22, 1.46]) and polypharmacy (from 1.63 [1.46, 1.83] to 1.53 [1.37, 1.71]).
Having any geriatric syndromes is associated with higher levels of health-care utilization among older community dwellers, and the impact of geriatric syndromes is stable over a four-year period. Supplementary Information: The online version contains supplementary material available at 10.1007/s10433-021-00600-2. abstract_id: PUBMED:17542997 Geriatric syndromes as predictors of adverse outcomes of hospitalization. Aims: The aim of this study was to determine the association between the common geriatric syndromes and predefined adverse outcomes of hospitalization and to identify the most important independent predictors of adverse outcomes using information gained within 24 h of admission in older general medical patients. Methods: A prospective longitudinal cohort study of patients aged ≥75 years admitted to the rapid assessment medical unit in a teaching hospital was carried out. The role of geriatric syndromes in predicting outcomes was examined in univariate and multivariate models. The outcome measures were (i) length of hospital stay (LOS) of 28 days or more, (ii) institutionalization or change in residential care status to a more dependent category at discharge or during 3 months post-discharge, (iii) unplanned readmissions during 3 months and (iv) mortality in hospital or 3 months post-discharge. Results: The presence of geriatric syndromes was significantly associated with increased LOS and institutionalization or change in residential care status to a more dependent category. The factors most predictive of these outcomes were impaired pre-admission functional status in activities of daily living, recurrent falls, urinary incontinence and supported living arrangements. The geriatric syndromes appeared less important in predicting unplanned readmission and death.
Conclusion: The presence of geriatric syndromes in older general medical patients is an important determinant of adverse outcomes of hospitalization, particularly of LOS and admission to residential care. The predictors most useful for screening patients for these outcomes, within 24 h of admission, appear to be the presence of certain pre-existing geriatric syndromes before admission. abstract_id: PUBMED:37743682 Frailty Screening and Detection of Geriatric Syndromes in Acute Inpatient Care: Impact on Hospital Length of Stay and 30-Day Readmissions. Background: Frailty is prevalent in acute care and is associated with negative outcomes. While a comprehensive geriatric assessment to identify geriatric syndromes is recommended after identifying frailty, more evidence is needed to support this approach in the inpatient setting. This study examined the association between frailty and geriatric syndromes and their impact on outcomes in acutely admitted older adults. Methods: A total of 733 individuals aged ≥65 years admitted to the General Surgery Service of a tertiary hospital were assessed for frailty using the Clinical Frailty Scale (CFS) and for geriatric syndromes using routine nursing admission assessments, including cognitive impairment, falls, incontinence, malnutrition, and poor oral health. Multinomial logistic regression and Cox regression were used to evaluate the associations between frailty and geriatric syndromes and their concomitant impact on hospital length of stay (LOS) and 30-day readmissions. Results: Greater frailty severity was associated with an increased likelihood of geriatric syndromes. Individuals categorized as CFS 4-6 and CFS 7-8 with concomitant geriatric syndromes had 29% and 35% increased risks of a longer LOS, respectively. 
CFS 4-6 was significantly associated with functional decline (relative risk ratio = 1.46; 95% confidence interval [CI], 1.03-2.07) and 30-day readmission (hazard ratio = 1.78; 95% CI, 1.04-3.04), whereas these associations were not significant for CFS 7-8. Conclusion: Geriatric syndromes in frail individuals can be identified from routine nursing assessments and represent a potential approach for targeted interventions following frailty identification. Tailored interventions may be necessary to achieve optimal outcomes at different stages of frailty. Further research is required to evaluate interventions for older adults with frailty in a wider hospital context. Answer: Geriatric syndromes, such as pressure ulcers and incontinence, are common issues faced by older adults during hospitalization. However, the use of hospital administrative databases to capture the occurrence of these geriatric syndromes is not reliable. A study that compared a nursing home and hospital database found that among patients with a pressure ulcer, the hospital discharge diagnosis listed an ulcer in only 30.8% of cases, and incontinence was included correctly as a discharge diagnosis in just 3.4%. The recording of these conditions was influenced by factors such as ulcer stage and the total number of discharge diagnoses, but even with these considerations, the data were not consistently accurate. Over time, there were small improvements in the correct recording of these syndromes, but the overall conclusion was that hospital administrative databases should not be used to determine the occurrence of pressure ulcers and incontinence when measuring the quality of care among hospitalized patients (PUBMED:10366168). Therefore, while geriatric syndromes are important considerations in the care of older adults, administrative data alone is insufficient for accurately measuring these outcomes.
Alternative methods, such as direct assessments or patient-reported outcome measures (PROMs), may be necessary to effectively evaluate the impact of hospital care on geriatric syndromes.
Instruction: Tracheostomy After Cardiac Surgery With Median Sternotomy and Risk of Deep Sternal Wound Infections: Is It a Matter of Timing? Abstracts: abstract_id: PUBMED:26146136 Tracheostomy After Cardiac Surgery With Median Sternotomy and Risk of Deep Sternal Wound Infections: Is It a Matter of Timing? Objective: To assess the impact of timing of percutaneous dilatational tracheotomy (PDT) on incidence of deep sternal wound infections (DSWI) after cardiac surgery with median sternotomy. Design: Retrospective study between 2003 and 2013. Setting: Single-center university hospital. Participants: Eight hundred seventy-nine patients after cardiac surgery with extracorporeal circulation and median sternotomy. Interventions: PDT using the Ciaglia-technique with direct bronchoscopic guidance. Measurement And Main Results: Mean time from surgery and (re)intubation to PDT was 6.7±9.9 and 3.8±3.3 days, respectively. Incidence of DSWI was 3.9% (34/879). The incidence of DSWI was comparable between patients with PDT performed before postoperative day (POD) 10 and those with PDT after POD 10 (29/755 [3.8%] v 5/124 [4.0%], p = n.s.). However, the authors observed an association of timing of PDT and DSWI: The incidence of DSWI was significantly higher in patients with PDT performed≤POD 1 compared to those with PDT after POD 2 (12/184 [6.52%] v 22/695 [3.16%], p = 0.046). In multivariate analysis, obesity, use of bilateral internal mammary arteries, ICU stay>30 days and PDT<48 hours after surgery (OR 3.519, 95% CI 1.242-9.976, p = 0.0018) were independent predictors of DSWI. In 15/34 patients (44.1%), similarity of microorganisms between sternotomy site and tracheal cultures was observed, indicating a possible cross-contamination. Conclusions: PDT within the first 10 postoperative days after cardiac surgery with median sternotomy can be performed safely without an increased risk of DSWI. 
In contrast, very early PDT within 48 hours after surgery is associated with an increased risk of mediastinitis and should, therefore, be avoided. abstract_id: PUBMED:20845690 Application of partial median sternotomy in cardiac surgery in patients with tracheostoma Median full-sternotomy carries a risk of sternal infection and lethal mediastinitis in cardiac surgery. We performed open-heart surgery through partial median sternotomy in 5 patients with tracheostomy. Coronary artery bypass grafting (CABG) was performed in 3 patients, aortic valve replacement in 1, and mitral valve replacement in 1. No operative deaths or complications related to wound infection occurred. Partial sternotomy represents a safe alternative in cardiac surgery in patients with tracheostoma. abstract_id: PUBMED:20078708 Is early tracheostomy a risk factor for mediastinitis after median sternotomy? Early tracheostomy may increase the risk of mediastinitis after median sternotomy. Patients who had postoperative tracheostomy after cardiac surgery in the period 2000-2005 were retrospectively analyzed (total: 5095 patients) to evaluate the incidence of mediastinitis and sternal wound infections. Fifty-seven cases (1.1% of all operated patients) had postoperative tracheostomy at an average 5.6 +/- 0.7 days postoperatively. None of these patients had mediastinitis. Eleven cases of aseptic sternal instability and ten cases of mild-to-moderate infection limited to subcutaneous planes were observed. There was no correlation between the time to performance of tracheostomy and the isolation of bacteria from the thoracic wounds (p = 0.61). The bacterial strains isolated from subcutaneous infection were qualitatively and quantitatively different from those isolated from bronchial secretions. We conclude that in this study there is no demonstrable link between early tracheostomy after sternotomy and mediastinitis. 
Early tracheostomy should not be denied due to concerns of increasing the risk of mediastinitis. abstract_id: PUBMED:21881380 Usefulness of a partial median sternotomy for acute infectious endocarditis in patients with tracheostoma. Infectious endocarditis patients occasionally need emergency cardiac surgery even if they have a tracheostoma. However, a median full-sternotomy approach carries increased risk for sternal infection and lethal mediastinitis in cardiac surgery for patients with tracheostomas. We successfully performed valve replacement procedures using a lower partial median sternotomy approach in 6 infectious endocarditis patients with tracheostomas. There were neither operative deaths nor complications related to wound infection in these cases. The partial sternotomy approach represents a safe alternative in cardiac surgery for acute infectious endocarditis patients with tracheostomas who need emergent surgery. abstract_id: PUBMED:38410052 Use of temporary tracheostomy occlusion to reduce the risk of sternal wound infection after sternotomy in congenital cardiac surgery. Objective: To describe a method of reducing the risk of sternal wound infection after sternotomy in children with a pre-existing tracheostomy. To report our outcomes using this method from 1 January, 2013 to 31 August, 2023. Methods: We describe a method for temporarily occluding the tracheal stoma with a removable implant with the primary goal of reducing the risk of sternotomy wound infection by preventing soilage due to tracheostomal secretions. We then performed a retrospective review of all children who underwent temporary tracheostomal occlusion between 1 January, 2013 and 31 August, 2023 at our quaternary care children's hospital. Clinical variables were extracted from the hospital medical records. The rates of antibiotic use and minor and major complications during the period when the stoma plug was in place were recorded. 
Results: In total, 19 patients underwent tracheal stoma plugging prior to sternotomy and were included in our analysis. There were two cases of sternal wound infection; one case occurred while the stoma plug was in place, and one developed four days following plug removal. There was one minor complication, with one patient requiring stoma revision via serial dilation at bedside at the time of recannulation. There were no deaths. Conclusion: Temporary occlusion of the tracheal stoma with an impermeable plug is a viable option for reducing the risk of sternal wound infection in children with a pre-existing tracheostomy who are undergoing sternotomy. abstract_id: PUBMED:12735588 Median sternotomy for pneumonectomy in patients with pulmonary complications of tuberculosis. Background: Traditionally, a thoracotomy incision is used for pulmonary complications of tuberculosis. The use of midline sternotomy in such patients presents an attractive alternative, which is the aim of this study. Methods: Five patients (four male, one female) with pulmonary complications of tuberculosis requiring surgical therapy in 1993 and 1994 were treated through a median sternotomy. The median patient age at time of surgery was 40.2 years and the median patient follow-up was 4.0 years (range 1.0 to 5.0 years) in this retrospective review. Results: All patients had uncomplicated operative courses and were discharged from the hospital. One patient's in-hospital postoperative course was complicated by prolonged ventilator dependency requiring temporary tracheostomy; he died 1 year postoperatively after hospital discharge due to recurrent multidrug-resistant tuberculosis. Sternal wound infections and bronchopleural fistulas were not observed in any patients. Conclusions: Surgical treatment of pulmonary complications of tuberculosis was traditionally performed through a thoracotomy approach.
Many patients with tuberculous lungs have pulmonary adhesions or intrathoracic scarring from previous surgery, which would require extrapleural resection. Bleeding was a frequent complication of this procedure. Sternotomy offers excellent exposure of the intrapericardial vessels, and reduced postoperative disability compared to the standard thoracotomy, which may be an advantage given that the majority of patients in this population have poor pulmonary function. We recommend median sternotomy as an alternative operative approach in selected patients with pulmonary complications of tuberculosis. abstract_id: PUBMED:34993904 Total arch replacement with both frozen elephant trunk and transthoracic aorto-axillary extra-anatomical bypass through partial sternotomy for acute aortic dissection in a tracheostomy patient. Median sternotomy near an existing tracheostoma risks deep sternal wound infection after cardiac surgery. We present herein a case of acute type A aortic dissection in a patient with a permanent tracheostoma after laryngectomy. Total arch replacement with both frozen elephant trunk and extra-anatomical bypass for supra-aortic trunks was performed through T-shaped partial sternotomy, resulting in recovery without deep sternal wound infection. abstract_id: PUBMED:16846421 Modified sternotomy to minimize infection risk in patients with prior laryngectomy and permanent tracheostomy. Patients with prior laryngectomy and permanent tracheostomy undergoing complete sternotomy historically are at increased risk for wound infection, osteomyelitis, mediastinitis, bleeding, tracheal injury, and poor wound healing. We describe three patients who underwent cardiac surgery via low midline incision with transverse flap, providing the exposure of complete sternotomy and decreased infectious risk. Patient selection, technique, and management principles are discussed. 
abstract_id: PUBMED:18328721 Is post-sternotomy percutaneous dilatational tracheostomy a predictor for sternal wound infections? Objective: Early post-sternotomy tracheostomy is not infrequently considered in this era of percutaneous tracheostomy. There is, however, some controversy about its association with sternal wound infections. Methods: Consecutive patients who had percutaneous tracheostomy following median sternotomy for cardiac operation at our institution from March 1998 through January 2007 were studied, and compared to contemporaneous patients. We identified risk factors for tracheostomy, and investigated the association between percutaneous tracheostomy and deep sternal wound infection (mediastinitis) by multivariate analysis. Results: Of 7002 patients, 100 (1.4%) had percutaneous tracheostomy. The procedure-specific rates were: 8.6% for aortic surgery, 2.7% for mitral valve repair/replacement (MVR), 1.1% for aortic valve replacement (AVR), and 0.9% for coronary artery bypass grafting (CABG). Tracheostomy patients differed vastly from other patients on account of older age, severe symptoms, preoperative support, lower ejection fraction, more comorbidities, more non-elective and complex operations and higher EuroScore. Risk factors for tracheostomy were New York Heart Association class III/IV (OR 6.01, 95% CI 2.28-16.23, p<0.0001), chronic obstructive pulmonary disease (OR 1.84, 95% CI 1.01-3.37, p=0.05), preoperative renal failure (OR 3.57, 95% CI 1.41-9.01, p=0.007), prior stroke (OR 3.08, 95% CI 1.75-5.42, p<0.0001), ejection fraction<0.30% (OR 2.73, 95% CI 1.23-6.07, p=0.01), and bypass time (OR 1.008, 95% CI 1.004-1.012, p<0.0001). The incidences of deep (9% vs 0.7%, p<0.0001) and superficial sternal infections (31% vs 6.5%, p<0.0001) were significantly higher among tracheostomy patients. Multivariate analysis identified percutaneous tracheostomy as a predictor for deep sternal wound infection (OR 3.22, 95% CI 1.14-9.31, p<0.0001). 
Conclusions: Tracheostomy, often performed in high-risk patients, may further complicate recovery with sternal wound infections, including mediastinitis; therefore, patients and timing should be carefully selected for post-sternotomy tracheostomy. abstract_id: PUBMED:30083867 Off-pump coronary artery bypass grafting in a tracheostomy patient. In patients who have undergone laryngectomy and have a tracheal stoma, a full median sternotomy substantially increases the risk of wound infection, osteomyelitis, mediastinitis, bleeding, tracheal injury, and poor wound healing. Several reports have been published on sternotomies and skin incisions in tracheostoma patients. Transverse bilateral thoracosternotomy, T-shaped partial sternotomy (manubrium-sparing sternotomy) with transverse skin flaps and anterolateral thoracotomy with partial sternotomy are described as successful approaches to the mediastinum for cardiac surgery. We present a successful case in which off-pump coronary artery bypass grafting (CABG) was performed in a tracheostoma patient using a low T-shaped partial sternotomy and the PAS-Port system. Good long-term results were achieved. Answer: The timing of tracheostomy after cardiac surgery with median sternotomy does appear to be a significant factor in the risk of developing deep sternal wound infections (DSWI). A retrospective study found that performing percutaneous dilatational tracheotomy (PDT) within the first 10 postoperative days did not increase the risk of DSWI. However, PDT performed within 48 hours after surgery was associated with an increased risk of mediastinitis and should be avoided (PUBMED:26146136). This study also identified obesity, use of bilateral internal mammary arteries, ICU stay longer than 30 days, and PDT less than 48 hours after surgery as independent predictors of DSWI.
In contrast, another study reported that early tracheostomy did not increase the risk of mediastinitis after median sternotomy, as none of the patients who had postoperative tracheostomy developed mediastinitis (PUBMED:20078708). This suggests that early tracheostomy may not necessarily be a risk factor for mediastinitis, although the timing and patient selection should be carefully considered. Alternative surgical approaches, such as partial median sternotomy or modified sternotomy techniques, have been described as safe options for patients with tracheostomies undergoing cardiac surgery, with no operative deaths or complications related to wound infection reported (PUBMED:20845690, PUBMED:21881380, PUBMED:34993904, PUBMED:16846421, PUBMED:30083867). These techniques may minimize the risk of infection by avoiding full sternotomy near the tracheostoma. Furthermore, temporary occlusion of the tracheal stoma with an impermeable plug has been used as a method to reduce the risk of sternal wound infection in children with pre-existing tracheostomies undergoing sternotomy, with only two cases of sternal wound infection reported (PUBMED:38410052). In summary, while the timing of tracheostomy after cardiac surgery with median sternotomy is an important consideration for the risk of DSWI, the evidence suggests that early tracheostomy is not universally a risk factor for mediastinitis. Careful patient selection and the use of alternative surgical approaches or temporary occlusion techniques may help to mitigate the risk of infection.
Instruction: Do mammographic tumor features in breast cancer relate to breast density and invasiveness, tumor size, and axillary lymph node involvement? Abstracts: abstract_id: PUBMED:24814360 Do mammographic tumor features in breast cancer relate to breast density and invasiveness, tumor size, and axillary lymph node involvement? Background: Breast density and mammographic tumor features of breast cancer may carry prognostic information. The potential benefit of using the combined information obtained from breast density, mammographic tumor features, and pathological tumor characteristics has not been extensively studied. Purpose: To investigate how mammographic tumor features relate to breast density and pathological tumor characteristics. Material And Methods: This retrospective study was carried out within the Malmö Diet and Cancer Study: a population-based cohort study recruiting 17,035 women during 1991-1996. A total of 826 incident breast cancers were identified during follow-up. Mammography images were collected and analyzed according to breast density and tumor features at diagnosis. Pathological data were retrieved from medical reports. Mammographic tumor features in relation to invasiveness, tumor size, and axillary lymph node involvement were analyzed using logistic regression yielding odds ratios (OR) with 95% confidence intervals (CI) and adjusted for age at diagnosis, mode of detection, and breast density. Results: Tumors presenting as an ill-defined mass or calcifications were more common in dense breasts than tumors presenting as a distinct mass or with spiculated appearance. Invasive cancer was more common in tumors with spiculated appearance than tumors presenting as a distinct mass (adjusted OR, 5.68 [1.81-17.84]). Among invasive tumors, an ill-defined mass was more often large (>20 mm) compared with a distinct mass, (adjusted OR, 3.16 [1.80-5.55]). 
Conclusion: Tumors presenting as an ill-defined mass or calcifications were more common in dense breasts. Spiculated appearance was related to invasiveness, and ill-defined mass to larger tumor size, regardless of mode of detection and breast density. The potential role of mammographic tumor features in clinical decision-making warrants further investigation. abstract_id: PUBMED:32256110 Establishment of Simple Nomograms for Predicting Axillary Lymph Node Involvement in Early Breast Cancer. Purpose: Axillary lymph node (ALN) involvement is an important prognostic factor of early invasive breast cancer. The objective of this study was to establish simple nomograms for predicting ALN involvement based on ultrasound (US) characteristics and evaluate the predictive value of US in the detection of ALN involvement. Patients And Methods: A total of 1328 patients with cT1-2N0 breast cancer by physical exam were retrospectively analyzed. Univariate analysis was used for the comparison of variables, and multivariate analysis was performed by binary logistic regression analysis. The R software was used to establish simple nomograms based on the US characteristics alone. The receiver operating characteristic (ROC) curves of the prediction model and the verification group were drawn, and the area under the curve (AUC) was calculated to evaluate the discrimination of the prediction model. A calibration curve was plotted to assess the nomogram predictions vs the actual observations of the ALN metastasis rate and axillary tumor burden rate. Results: The ALN metastasis rates of the training group and the validation group were 35.1% and 34.1%, respectively. Multivariate analysis showed that molecular subtype, lymphovascular invasion, mass descriptors (size, margin, microcalcification and blood flow signal) and LN descriptors (shape, cortical thickness and long-to-short ratio) were independent impact factors in early breast cancer. 
For the prediction model based on US features, the AUC for the ALN metastasis rate was 0.802 and the AUC for the high tumor burden rate was 0.873; in the external validation group, the corresponding AUCs were 0.731 and 0.802, respectively. The calibration curve of the nomogram showed that the nomogram predictions are consistent with the actual metastasis rate and the high tumor burden rate. The results showed that preoperative US had a sensitivity of 59.4% and a specificity of 88.9% for predicting the ALN metastasis rate. Conclusion: The nomograms established from US characteristics can provide individualized prediction of the ALN metastasis rate and the high axillary tumor burden rate in early breast cancer. Compared with other nomogram predictions, they are more intuitive and can help clinical decision-making; thus, they should be promoted. However, at this time US features alone are insufficient to replace sentinel lymph node biopsy. abstract_id: PUBMED:24647779 Axillary lymph node status in multicentric breast tumors and breast tumors with nipple involvement. Background: Axillary lymph node dissection plays an important role in breast cancer management in terms of staging, prediction of prognosis, determination of adjuvant therapy, and local control of the primary tumor. The objective of this study was to evaluate axillary lymph node involvement in multicentric breast tumors and breast tumors with nipple involvement in comparison with unifocal tumors. Patients And Methods: We reviewed the records of 267 patients with stage I or IIA disease. The rates of axillary lymph node metastasis (ALNM) in patients with unifocal tumors, multicentric tumors, or nipple involvement were compared. Results: 209 (78%) patients had unifocal tumors, 24 (8%) had multicentric tumors, and 34 (12%) had nipple involvement. The incidence of ALNM was 9.76% in patients with unifocal tumors, 24.84% in patients with multicentric tumors, and 36.71% in patients with nipple involvement.
Hence, the incidence of ALNM was significantly higher in patients with nipple involvement or multicentric tumors than in patients with unifocal tumors. Conclusion: Our data suggest that compared to unifocal tumors, breast tumors with nipple involvement or multiple foci show a significantly higher incidence of ALNM, which is a predictor of a poor prognosis. abstract_id: PUBMED:28629690 Axillary Lymph Node Sonographic Features and Breast Tumor Characteristics as Predictors of Malignancy: A Nomogram to Predict Risk. The purpose of this study was to build a mathematical model to predict the probability of axillary lymph node metastasis based on the ultrasonographic features of axillary lymph nodes and the tumor characteristics. We included 74 patients (75 axillae) with invasive breast cancer who underwent axillary ultrasonography ipsilateral to the tumor and fine-needle aspiration of one selected lymph node. Lymph node pathology results from sentinel lymph node biopsy or surgical dissection were correlated with lymph node ultrasonographic data and with the cytologic findings of fine-needle aspiration. Our mathematical model of the risk of lymph node metastasis included only pre-surgical data, identified by logistic regression analysis: lymph node cortical thickness (p = 0.005), pre-surgical tumor size (p = 0.030), menopausal status (p = 0.017), histologic type (p = 0.034) and tumor location (p = 0.011). The area under the receiver operating characteristic curve of the model was 0.848, reflecting excellent discrimination. This nomogram may assist in the choice of the optimal axillary approach. abstract_id: PUBMED:29936787 Frequency and Predictors of Axillary Lymph Node Metastases in Iranian Women with Early Breast Cancer Background: Axillary lymph node metastasis is the most important predictive factor for recurrence risk and survival in patients with invasive breast carcinoma.
The aim of this study was to determine factors associated with metastatic involvement of axillary lymph nodes in Iranian women with early breast cancer. Methods: This article reports a retrospective study of 774 patients with T1-T2 breast cancer who underwent resection of the primary tumor and axillary staging by SLNB and/or ALND between 2005 and 2015 at our institution. Results: Of the 774 patients included in this study, 35.5% (275 cases) had axillary lymph node involvement at the time of diagnosis. Factors associated with nodal involvement in univariate analyses were tumor size, lymphovascular invasion (LVI), tumor grade, ER/PR status and HER2 expression. All factors identified with univariate analyses were entered into a multivariate logistic regression model, and tumor size (OR = 3.01, CI 2.01–4.49, P <0.001), ER/PR positivity (OR = 1.74, CI 1.16–2.62, P = 0.007) and presence of LVI (OR = 3.38, CI 2.31–4.95, P <0.001) remained as independent predictors of axillary lymph node involvement. Conclusions: In conclusion, the results of this study suggest that positive hormonal receptor status, LVI and tumor size are predictive factors for ALNM in Iranian women with early breast cancer.
Results: The mean age of all participating patients was 52.24±10 years. The most common tumour location was the upper outer quadrant, with a mean primary tumour size of 3.31±1.80 cm. On pathology, the number of lymph nodes examined ranged from 10 to 24, and 72% of patients had disease present in the axilla. Linear regression showed a significant positive correlation (p<0.013; r(2)=0.026) between tumour size and axillary lymph node involvement. An indicative correlation was also found between tumour size, tumour grade and axillary lymph node status and survival from the disease. Conclusion: The present study highlights that the size of the primary tumour and the number of positive lymph nodes have an inverse linear relationship with prognosis. Despite advances in diagnostic modalities and the evolution of newer markers and genetic typing, both tumour size (T) and axillary lymphadenopathy (N) form an integral part of TNM staging and are of paramount importance for their role in treatment decisions and in illustrating prognosis in patients with invasive breast cancer.
Results: The age distribution showed two peaks at 41-50 years and 51-60 years with 42 and 24 patients, respectively, in both the age groups. Preoperative lymph node positivity by ultrasonography matched with the postoperative histopathological examination (HPE) report. Preoperative ultrasonographically determined tumor size was similar to the final histopathological T stage. Conclusion: As the size of the tumor increases, there is an increase in ALNI, which suggests that nodal metastasis is indicative of tumor chronology. Ultrasonography is a good tool to objectively measure tumor size and lymph node involvement preoperatively. Quadrant of involvement can emerge as a clinically useful prognostic factor in breast cancer, as there is a higher incidence of lymph node positivity with increasing size of the breast tumor and for tumors located at the upper outer quadrant of the breast.
We believe that our results are generally applicable because our patient survival, when stratified by lymph node status, was similar to that of the 24,000 breast cancer patients in the National Cancer Institute's Surveillance, Epidemiology, and End Results program. In multivariate analysis, the strongest prognostic factor was the largest nuclear area, followed by tumor size and the extent of axillary lymph node involvement. The mean area of the 3 largest nuclei when combined with tumor size identified 30% of all breast cancer patients who had an 87% 15-year breast cancer-specific survival. Inclusion of lymph node status added little to this 2-factor model. Routine axillary lymph node surgery for prognostic purposes may become unnecessary, because nuclear features may provide sufficient information. abstract_id: PUBMED:23961380 Is the sentinel lymph node biopsy more sensitive for the identification of positive lymph nodes in breast cancer than the axillary lymph node dissection? Since the routine clinical use of the sentinel lymph node (SLN) procedure, questions have been raised concerning an increase in the overall percentage of node-positive patients. The goal of our study was to compare the sensitivity of the SLN procedure and the axillary lymph node dissection (ALND) for the identification of positive lymph nodes in breast cancer. The incidence of axillary node metastasis in SLNB and ALND specimens from patients undergoing operative treatment of a primary breast carcinoma was compared retrospectively. Logistic regression models were used to analyze the effect of various predictors on the presence of positive lymph nodes. We constructed a multivariate model including the procedure and these predictors that have shown to be related to lymph node involvement in univariate analysis. The probability of finding positive lymph nodes was thus calculated in both groups correcting for relevant predictors of lymph node involvement. 
The SLNB group included 830 patients, the ALND group 320. In a multivariate analysis, adjusting for the number of foci, tumor location in the breast, tumor size, LVI, ER, PR, tumor grade and histological subtype, the probability of finding positive lymph nodes was higher with SLNB procedure than with an ALND. However, this difference was not statistically significant (OR 0.7635; CI 0.5334-1.0930, p 0.1404). For comparable tumors, SLNB procedure is at least as sensitive as ALND for detecting positive lymph nodes. abstract_id: PUBMED:10394511 Microinvasive ductal carcinoma of the breast. Role of axillary lymph node dissection The role of axillary lymph node dissection for microinvasive ductal carcinoma in situ of the breast was analyzed in a series of 60 consecutive cases. Forty-four cases were subclinical mammographically-detected carcinomas revealed by the clusters of microcalcifications. Although pathologists differ in their criteria for microinvasion, the maximal size considered in this retrospective study was 2 mm. Axillary lymph node involvement was found in 3 cases (i.e. 5%) which harbored poor histologic features: comedocarcinoma subtype, high nuclear grade, and size of the ductal carcinoma in situ greater than 3 cm, requiring total mastectomy. While there is no need for axillary dissection in women with pure ductal carcinoma in situ, the management is quite different in proven microinvasion. Owing to the weakness of prognostic information given by cellular, biochemical and molecular features, instead of lymph node status, axillary dissection is still recommended in microinvasive ductal carcinoma in situ. Answer: Mammographic tumor features in breast cancer do relate to breast density, invasiveness, tumor size, and axillary lymph node involvement. 
A study within the Malmö Diet and Cancer Study found that tumors presenting as an ill-defined mass or calcifications were more common in dense breasts than tumors presenting as a distinct mass or with spiculated appearance. Moreover, invasive cancer was more common in tumors with spiculated appearance than in those presenting as a distinct mass. Among invasive tumors, an ill-defined mass was more often large (>20 mm) compared with a distinct mass (PUBMED:24814360). Additionally, axillary lymph node (ALN) involvement, which is an important prognostic factor in early invasive breast cancer, can be predicted using simple nomograms based on ultrasound characteristics. These nomograms consider factors such as molecular subtype, lymphovascular invasion, and mass descriptors (size, margin, microcalcification, and blood flow signal) (PUBMED:32256110). Furthermore, multicentric breast tumors and breast tumors with nipple involvement have been shown to have a significantly higher incidence of axillary lymph node metastasis (ALNM) compared to unifocal tumors, indicating a poorer prognosis (PUBMED:24647779). Other studies have also developed nomograms to predict the risk of axillary lymph node metastasis based on ultrasonographic features of axillary lymph nodes and tumor characteristics, including lymph node cortical thickness, pre-surgical tumor size, menopausal status, histologic type, and tumor location (PUBMED:28629690). In Iranian women with early breast cancer, factors such as positive hormonal receptor status, lymphovascular invasion (LVI), and tumor size were identified as predictive factors for ALNM (PUBMED:29936787). Similarly, a study from rural India found a significant positive correlation between tumor size and axillary lymph node involvement, with larger tumors and a higher number of positive lymph nodes being associated with a worse prognosis (PUBMED:27504343). 
The correlation between tumor site and size with the level of axillary lymph node involvement (ALNI) was also observed, with an increase in ALNI as tumor size increases, and a higher incidence of lymph node positivity for tumors located at the upper outer quadrant of the breast (PUBMED:32165830).
Instruction: Does anaesthetic management affect early outcomes after lung transplant? Abstracts: abstract_id: PUBMED:19224927 Does anaesthetic management affect early outcomes after lung transplant? An exploratory analysis. Background: Primary graft dysfunction (PGD) is a predominant cause of early morbidity and mortality after lung transplantation. Although substantial work has been done to understand risk factors for PGD in terms of donor, recipient, and surgical factors, little is understood regarding the potential role of anaesthetic management variables in its development. Methods: We conducted a retrospective exploratory analysis of 107 consecutive lung transplants to determine if anaesthesia factors were associated with early graft function quantified by Pa(O(2))/Fi(O(2)). Multivariate regression techniques were used to explore the association between anaesthetic management variables and Pa(O(2))/Fi(O(2)) ratio 12 h after operation. The relationship between these variables and both time to tracheal extubation and intensive care unit (ICU) length of stay was further examined using the Cox proportional hazards. Results: On multivariate analysis, increasing volume of intraoperative colloid, comprising predominantly Gelofusine (succinylated gelatin), was independently associated with a lower Pa(O(2))/Fi(O(2)) 12 h post-transplantation [beta coefficient -42 mm Hg, 95% confidence interval (CI) -7 to -77 mm Hg, P=0.02] and reduced rate of extubation [hazard ratio (HR) 0.65, 95% CI 0.49-0.84, P=0.001]. There was a trend for intraoperative colloid to be associated with a reduced rate of ICU discharge (HR 0.79, 95% CI 0.31-1.02, P=0.07). Conclusions: We observed an inverse relationship between volume of intraoperative colloid and early lung allograft function. The association persists, despite detailed sensitivity analyses and adjustment for potential confounding variables. 
Further studies are required to confirm these findings and explore potential mechanisms through which these associations may act. abstract_id: PUBMED:32736384 Perioperative Anaesthetic Management of Patients after Heart and Lung Transplantation The outcome after heart and lung transplantation has improved significantly. Consequently, many patients are admitted to the hospital for routine surgical interventions that are initially non-transplant-specific. Some disorders lead to hospital admissions that affect other organ systems due to late consequences of the underlying disease or can be seen as early and late complications of the transplantation itself. Many of these surgical interventions are certainly carried out in the responsible transplant centre. Some surgeries are also performed in hospitals that do not primarily transplant and do not regularly care for heart and lung transplant patients. In these situations, the understanding of the physiology of the transplanted heart and lung, the consequences of the underlying disease and the post-transplant treatment with its peculiarities and risks is paramount. The anaesthetic management of these patients requires preoperative risk stratification and perioperative anaesthetic planning, but also responsibility for suitable post-operative monitoring. This review article deals with the special anaesthetic considerations in patients after heart and lung transplantation. abstract_id: PUBMED:33938838 Anaesthetic challenges in pulmonary alveolar microlithiasis - a rare disease treated with bilateral lung transplantation, first case from India. Pulmonary Alveolar Microlithiasis (PAM) is a rare autosomal recessive end stage lung condition characterized by widespread alveolar deposition of calcium microliths. It is worth emphasizing that India has reported 80 of the 1022 cases worldwide, but there is no report on lung transplantation and anaesthetic management in this category of patient.
This report presents the anaesthetic challenges in the management of the first Indian patient diagnosed with PAM who underwent bilateral lung transplantation. Bilateral lung transplantation is one of the most challenging surgeries, requiring the care of a cardiothoracic anesthesiologist. Utilization of extracorporeal circulation has allowed safer performance of this procedure in patients with severe cardiopulmonary compromise. Intraoperative management is a pivotal part of the patient's care, as it contributes to the patient's overall outcome. abstract_id: PUBMED:31838950 Impact of Affect on Lung Transplant Candidate Outcomes. Background: We examined the association of adult lung transplant candidates' self-reported affect with transplant-related outcomes, evaluating whether a positive (vs negative) frame of mind might be protective. Method: Consenting waitlisted candidates from 6 centers completed the questionnaires including the Positive and Negative Affect Schedule annually and posttransplant. Univariate logistic regression analysis was performed to determine the association of baseline affect with outcomes of death or delisting. Models were subsequently adjusted for age, marital status, and education. Results: Questionnaires were completed by 169 candidates (77.9% participation). Mean positive affect, negative affect, and positive-to-negative affect ratio (positivity ratio) were similar to expected norms. The scores of the questionnaire did not change significantly over time. Fifteen (8.9%) waitlisted participants died. Candidates who died while waiting had lower positivity ratios compared to those who survived (1.82 vs 2.45; P = .02). A more negative affect was associated with increased death on the waiting list (adjusted odds ratio [OR] 1.10; P = .021). Conversely, a higher positivity ratio was associated with decreased death while waiting (adjusted OR: 0.45; P = .027). Conclusion: Negative affect may represent a novel risk factor for death on the waitlist.
Enhancing positive affect may represent a useful target for psychological optimization in lung transplant candidates. abstract_id: PUBMED:37035359 Perioperative fluid management for lung transplantation is challenging. Lung transplantation is the definitive end-stage treatment for many lung diseases, and postoperative pulmonary oedema severely affects survival after lung transplantation. Optimizing perioperative fluid management can reduce the incidence of postoperative pulmonary oedema and improve the prognosis of lung transplant patients by removing the influence of patient, donor's lung and ECMO factors. Therefore, this article reviews seven aspects of lung transplant patients' pathophysiological characteristics, physiological characteristics of fluids, the influence of the donor lung on pulmonary oedema as well as current fluid rehydration concepts, advantages or disadvantages of intraoperative monitoring tools or types of fluids on postoperative pulmonary oedema, while showing the existing challenges in section 7. The aim is to show the specificity of perioperative fluid management in lung transplant patients and to provide new ideas for individualised fluid management in lung transplantation. abstract_id: PUBMED:31378136 Early Graft Dysfunction Following Heart Transplant: Prevention and Management. Heart transplant can be considered as the "gold standard" treatment for end-stage heart failure, with nearly 5.7 million adults in the United States carrying a diagnosis of heart failure. According to the International Society for Heart and Lung Transplantation registry, nearly 3300 orthotopic heart transplants were performed in 2016 in North America. In spite of significant improvements in overall perioperative care of heart transplant recipients for the past few decades, the risk of 30-day mortality remains 5% to 10%, primarily related to early failure of the allograft. 
Early graft dysfunction (EGD) occurs within 24 hours after transplant, manifesting as left ventricular dysfunction, right ventricular dysfunction, or biventricular dysfunction. EGD is further classified into primary and secondary graft dysfunction. This review focuses on describing the overall incidence of EGD, potential risk factors associated with EGD, perioperative preventive measures, and various management options. abstract_id: PUBMED:25447584 Contemporary analysis of early outcomes after lung transplantation in the elderly using a national registry. Background: With an increasing number of potential recipients and a comparatively static number of donors, lung transplantation (LT) in the elderly has come under significant scrutiny. Previous studies have been limited by single-center experiences with small population sizes and often mixed results. Using a national registry, we sought to evaluate the following: (1) differences in survival outcomes in septuagenarians compared with sexagenarians; and (2) the effect of temporal trends on the development of other comorbidities in this population. Methods: We analyzed the Scientific Registry of Transplant Recipients (SRTR) data files from the United Network for Organ Sharing (UNOS) database to identify recipients who underwent LT between the years 2000 and 2013. The study period was divided into two equal eras. Using Kaplan-Meier analysis, we compared the 30-day, 3-month, 1-year, 3-year and 5-year patient survival between septuagenarians and sexagenarians in both eras. Separate multivariate analyses were performed to estimate the risk of renal failure, risk of rejection and length of hospital stay (LOS) post-LT in each of these time periods. Results: A total of 6,596 patients were identified comprising 1,726 (26.2%) during 2000 to 2005 and 4,870 (73.8%) during 2006 to 2012.
In the "early era," 32 (1.9%) septuagenarians and 1,694 (98.1%) sexagenarians underwent LT, whereas 543 (11.1%) septuagenarians and 4,327 (88.9%) sexagenarians underwent transplantation in the "latter era." A comparison of patient survival between the two groups in the early era revealed no difference at 30 days (95.7% vs 93.8%, p = 0.65). However, 3-month (91.2% vs 75%, p = 0.04) and 1-year patient survival (79.5% vs 62.5%, p = 0.048) were both lower in the septuagenarian group. In the later era, however, there were no differences in 30-day (96.2% vs 96.8, p = 0.5), 3-month (92.7% vs 91.9%, p = 0.56) or 1-year (81.7% vs 78.6%, p = 0.12) patient survival between the two age groups. Survival rates at 3 years (63.7% vs 49.3%, p < 0.001) and 5 years (47.5% vs 28.2%, p < 0.001) were each significantly lower in the septuagenarian group. Conclusion: Overall, LT outcomes for the elderly have improved significantly over time and early outcomes in the modern era rival those found in younger recipients. abstract_id: PUBMED:9067048 Perioperative anaesthetic considerations for patients undergoing lung transplantation. Purpose: Five thousand, two hundred and eight lung transplants were performed worldwide before April, 1996. This review will discuss lung transplantation from an historical perspective, its indications, donor and recipient selection criteria, donor lung preparation, surgical considerations, perioperative anaesthetic management, and associated morbidity and mortality. Source: Recent literature on perioperative anaesthetic management of lung transplantation and experience from international centres including the Toronto Lung Transplant Group and the St. Louis Lung Transplant Group. Principal Findings: Lung transplantation comprises of a family of operations, including single lung transplant, bilateral single lung transplant, lobar transplant and block heart-lung transplant. 
Improved donor lung preservation techniques have increased the duration of cold ischaemic time. The advent of bilateral single lung transplant has decreased the requirement for cardiopulmonary bypass, and airway complications have been reduced by adoption of the telescoping bronchial anastomoses. Advances in perioperative monitoring (including transoesophageal echocardiography), pulmonary vasodilators (e.g., nitric oxide and prostaglandin E1), cardiopulmonary bypass and ventilatory management, and a better understanding of the pathophysiological processes during the procedure have improved perioperative anaesthetic management. Also, advances in broad spectrum antibiotics and immunosuppressant drugs have improved the outcome by better management of the complications of infection and rejection. Conclusion: Lung transplantation improves the quality of life with marginal improvement in life expectancy of the recipients. It is an expensive procedure requiring continued resources for long term management of these patients. abstract_id: PUBMED:23856213 Impact of nutritional state on lung transplant outcomes: the weight of the evidence. Despite advances in perioperative and post-operative management, lung transplant recipients with select pre transplant risk factors have been shown to experience worse post-transplant outcomes in comparison to those without such risk factors. Among these variables, previous studies have shown that select markers of poor nutritional status prior to transplant, such as low body mass index (BMI) and hypoalbuminemia, have been associated with increased post-transplant mortality. In a past issue of the journal, Chamogeorgakis et al. examine a comprehensive battery of markers previously associated with malnutrition to determine their impact on outcomes after lung transplantation. The authors find that hypoalbuminemia is associated with worse survival, but does not appear to affect the risk of post-transplant infections.
This article reviews the study presented by Chamogeorgakis et al. to discuss how it furthers our understanding of the impact of nutritional status on transplant-related outcomes and consider areas for future investigation. abstract_id: PUBMED:24518214 Early postoperative management of lung transplantation. Lung transplantation is a widely accepted therapeutic option for patients with end-stage lung disease. However, despite the significant medical progress achieved since the first human lung transplant was performed in 1963, perioperative and long-term patient outcomes are still jeopardised by a variety of complications. Infection and rejection are among the most feared complications in the early post-transplantation period because they are difficult to treat and can have a long-lasting impact on the quality of life and overall life expectancy. Multidisciplinary management of lung transplant recipients focusing on the prevention or early detection of complications, particularly during the early postoperative phase, may improve the short and long-term outcomes of those patients benefiting from this life-saving intervention. However, in the absence of internationally recognised guidelines, the optimal strategies for managing lung transplant recipients remain unclear. This article presents a practical approach to the management of the early post-transplantation period aiming at standardising clinical care and improving patient outcome. Answer: Yes, anaesthetic management does affect early outcomes after lung transplant. An exploratory analysis of 107 consecutive lung transplants found that increasing volumes of intraoperative colloid, predominantly Gelofusine (succinylated gelatin), were independently associated with a lower Pa(O(2))/Fi(O(2)) ratio 12 hours post-transplantation and reduced rates of extubation. 
There was also a trend for intraoperative colloid to be associated with a reduced rate of ICU discharge, suggesting that anaesthetic management variables, particularly fluid management, can influence early lung allograft function (PUBMED:19224927). Perioperative fluid management is highlighted as a challenge in lung transplantation, with postoperative pulmonary oedema severely affecting survival. Optimizing perioperative fluid management can reduce the incidence of postoperative pulmonary oedema and improve prognosis by removing the influence of patient, donor's lung, and ECMO factors (PUBMED:37035359). Furthermore, the perioperative anaesthetic management of patients after heart and lung transplantation requires understanding the physiology of the transplanted organs, the consequences of the underlying disease, and the post-transplant treatment with its peculiarities and risks. Preoperative risk stratification, perioperative anaesthetic planning, and suitable post-operative monitoring are crucial for these patients (PUBMED:32736384). In addition, the perioperative anaesthetic considerations for lung transplantation have evolved over time with improvements in donor lung preservation techniques, perioperative monitoring, pulmonary vasodilators, cardiopulmonary bypass and ventilatory management, and management of complications of infection and rejection, all contributing to improved perioperative anaesthetic management and outcomes (PUBMED:9067048). Overall, these findings indicate that anaesthetic management, particularly fluid management, plays a significant role in the early outcomes of lung transplant patients.
Instruction: Is prenatal alcohol exposure related to inattention and hyperactivity symptoms in children? Abstracts: abstract_id: PUBMED:19298478 Is prenatal alcohol exposure related to inattention and hyperactivity symptoms in children? Disentangling the effects of social adversity. Background: Studies concerning whether exposure to low levels of maternal alcohol consumption during fetal development is related to child inattention and hyperactivity symptoms have shown conflicting results. We examine the contribution of covariates related to social adversity to resolve some inconsistencies in the extant research by conducting parallel analyses of three cohorts with varying alcohol consumption and attitudes towards alcohol use. Methods: We compare three population-based pregnancy-offspring cohorts within the Nordic Network on ADHD from Denmark and Finland. Prenatal data were gathered via self-report during pregnancy and birth outcomes were abstracted from medical charts. A total of 21,678 reports concerning inattention and hyperactivity symptoms in children were available from the Strengths and Difficulties Questionnaire or the Rutter Scale completed by parents and/or teachers. Results: Drinking patterns differed cross-nationally. Women who had at least some social adversity (young, low education, or being single) were more likely to drink than those better off in the Finnish cohort, but the opposite was true for the Danish cohorts. Prenatal alcohol exposure was not related to risk for a high inattention-hyperactivity symptom score in children across cohorts after adjustment for covariates. In contrast, maternal smoking and social adversity during pregnancy were independently and consistently associated with an increase in risk of child symptoms. Conclusions: Low doses of alcohol consumption during pregnancy were not related to child inattention/hyperactivity symptoms once social adversity and smoking were taken into account. 
abstract_id: PUBMED:24512965 Correspondence of parent report and laboratory measures of inattention and hyperactivity in children with heavy prenatal alcohol exposure. Clinical research and practice support a multi-method approach to validating behavioral problems in children. We examined whether parent-reported symptoms of hyperactivity and inattention (using the Disruptive Behavior Disorder Rating Scale) were substantiated by objective laboratory measures [hyperactivity measured by wrist-worn actigraphy (ACT) and inattention assessed using a 20-minute continuous performance task (CPT)] in three age- and demographically-matched groups of school-age children: children with prenatal alcohol exposure (AE), non-exposed children with idiopathic ADHD (ADHD), and controls (CON). Results indicated that the clinical groups (AE, ADHD) had significantly higher parent-reported levels for both domains compared to the CON group, and did not differ from each other. On the laboratory measures, the clinical groups were more inattentive than controls on the CPT, but did not differ from each other. In contrast, the ADHD group had higher objective activity on the ACT than AE and CON, which did not differ from each other. Thus, laboratory measures differentially validated parent reports in a group-dependent manner. Actigraphy substantiated parent-reported hyperactivity for children in the ADHD group but not for children in the AE group, while the CPT validated parent-reported inattention for both clinical groups. Although the majority of children in the AE group met the criteria for ADHD, objective activity levels were not different from controls, indicating that hyperactivity may be a less prominent feature in the AE group. Thus, while there is considerable overlap between the effects of prenatal alcohol exposure and ADHD, differences in behavioral profiles may be clinically useful in differential diagnosis. 
Further, these data indicate that objective measures should be used to validate parent reports. abstract_id: PUBMED:22821287 Explanation of social inequalities in hyperactivity/inattention in children with prenatal alcohol exposure. Background: Hyperactivity and inattention are major effects of prenatal alcohol exposure (PAE). Although predominantly women from the high social class consume alcohol during pregnancy, children from the low social class are particularly affected by the adverse effects of PAE. This study aimed to test the hypothesis of a social gradient in hyperactivity/inattention in children with PAE. Methods: Children with PAE (N=996) enrolled in the German Health Interview and Examination Survey for Children and Adolescents (KiGGS) were studied. KiGGS was designed and conducted by the Robert Koch Institute (RKI) as a nationwide representative survey on the health of German children and adolescents aged 0-17 years. The data include information given by parents and adolescents on physical and mental health, sociodemographic features, life circumstances and conditions. Results: PAE children with a middle and low parental socioeconomic status (SES) are at a higher risk of developing hyperactivity/inattention compared to those with high parental SES. Cultural-behavioral factors had the strongest effect in the explanation of social inequalities in hyperactivity/inattention among children with PAE. Conclusions: Cultural-behavioral factors, particularly health-related behaviors, need significant improvement in children from the low and middle social class. To reduce social inequalities in hyperactivity in children with PAE, interventions have to focus on the dietary and television habits of the child by reaching parents from the low and middle social class. abstract_id: PUBMED:38455925 Prenatal exposure to polybrominated diphenyl ethers and inattention/hyperactivity symptoms in mid to late adolescents.
Introduction: Prenatal exposure to polybrominated diphenyl ethers (PBDEs) has been associated with increased symptoms of attention deficit/hyperactivity disorder (ADHD) in early to middle childhood, as well as early adolescence. However, data are limited for the long-lasting impact of exposure on outcomes assessed across the entire adolescent period and the sex-specificity of such associations. Methods: We investigated the association between continuous natural-log-transformed cord plasma PBDE concentrations and ADHD rating scale 4th edition (ADHD-RS-IV) score from mid adolescence (approximately 11 years old) to late adolescence (approximately 17 years old). The study sample includes a subset (n = 219) of the African American and Dominican children enrolled in the Columbia Center for Children's Environmental Health Mothers and Newborns birth cohort. We used generalized estimating equations to account for the repeated measure of ADHD-RS scores. We examined interactions between exposure to PBDE and sex using cross-product terms and sex-stratified models. In addition, we used linear regression using an age-stratified sample as a sensitivity analysis. Results And Discussion: Associations between prenatal exposure and parents' reports of ADHD symptoms varied by sex (p-interaction <0.20), with positive relationships observed among girls but not boys from sex-stratified models. Our finding suggests prenatal exposure to PBDE may affect ADHD symptoms assessed during middle to late adolescence and the sex-specificity of such impact. Our results can be confirmed by future studies with larger and more diverse samples. abstract_id: PUBMED:25447751 Objective assessment of ADHD core symptoms in children with heavy prenatal alcohol exposure. Attention deficits are often observed in children with prenatal alcohol exposure and attention-deficit/hyperactivity disorder (ADHD) is commonly diagnosed in this population. 
This study used an objective assessment tool to examine differences between alcohol-exposed and non-exposed children on core symptoms of ADHD: inattention, impulsivity, and hyperactivity. Two groups of individuals, aged 7-14 years, participated in the study: alcohol-exposed children (AE, n=43), and non-exposed children (CON, n=54). Subjects were evaluated with the Quotient ADHD System, which provides objective data on ADHD core symptoms by combining an infrared motion tracking system and a computerized continuous performance task. Twelve separate ANCOVAs, controlling for the effects of age and sex, were conducted on attention and motion variables. Results revealed that in comparison to the CON group, the AE group was significantly (p's<.05) less accurate, made an increased number of omission errors, had longer response latencies, and increased variability in response time. Moreover, the AE group spent less time staying still, and made an increased number of head movements, which traveled a larger distance, covered a greater area, and demonstrated a less complex movement pattern. No significant group differences were observed on the number of commission errors and temporal scaling. Our findings provide further support for the notion that inattention is a core deficit in children prenatally exposed to alcohol. Results from this study are also consistent with parent reports of increased hyperactivity. The Quotient ADHD System may be a useful objective measure of ADHD symptomatology in children with FASD. abstract_id: PUBMED:27155839 Inattention and impulsivity associated with prenatal alcohol exposure in a prospective cohort study with 11-year-old Brazilian children. This paper aimed to examine prenatal alcohol exposure and neuropsychological parameters and their relationship to impulsivity and inattention.
Longitudinal prospective case-control cohort study starting with the risk drinking assessment of 449 third-trimester pregnant women, and a follow-up phase with 56 mother-child pairs (28 alcohol-exposed versus 28 non-exposed), with 11- to 12-year-old children. The cohort study was followed up for 11 years. Quantity-frequency structured questions as well as AUDIT and T-ACE questionnaires were used to assess maternal alcohol consumption. A comprehensive set of neuropsychological testing instruments was used, including d2 Test, RCFT, RAVLT, WISC-III, among others. To control for low IQ effects and intellectual disability diagnoses, as well as differences in school skills biasing the neuropsychological comparison assessment, children with IQ <70 or learning disabilities were excluded from the sample. The two groups proved to be very comparable regarding sex, age, schooling, global IQ, laterality and maternal and social risk factors. Significant statistical differences were found for higher speed processing, total errors, and number of omission errors in the d2 Test. Likewise, there were differences found on the RCFT (lower scores for copy, immediate and delayed recall), and on semantic verbal fluency tests with a lower score. Prenatal alcohol-exposed children seem to be more inattentive and impulsive; they have poorer skills in verbal fluency, visuospatial working memory, and executive processing when compared to non-exposed children who were part of the same cohort sample. abstract_id: PUBMED:22817778 Prenatal alcohol exposure, attention-deficit/hyperactivity disorder, and sluggish cognitive tempo. Background: Children with heavy prenatal alcohol exposure often meet criteria for attention-deficit/hyperactivity disorder (ADHD). ADHD research has examined subtype differences in symptomatology, including sluggish cognitive tempo (SCT). This construct is defined by behavioral symptoms including hypoactivity and daydreaming and has been linked to increased internalizing behaviors.
The current study examined whether similar findings are displayed in children with prenatal alcohol exposure. Methods: As part of a multisite study, caregivers of 272 children (8 to 16 years) completed the SCT Scale and Child Behavior Checklist (CBCL). Four groups were included: alcohol-exposed children with ADHD (ALC+; n = 75), alcohol-exposed children without ADHD (ALC-; n = 35), nonexposed children with ADHD (ADHD; n = 60), and nonexposed children without ADHD (CON; n = 102). SCT and CBCL scores were analyzed using 2 (exposure) × 2 (ADHD) analyses of variance. Pearson's correlations measured the relationships between SCT, CBCL, and Full Scale IQ (FSIQ). Discriminant function analysis examined whether SCT items could accurately classify groups. Results: Analyses revealed significant main effects of exposure and ADHD on SCT and internalizing and externalizing scores and significant interaction effects on SCT and internalizing scores. SCT significantly correlated with internalizing, externalizing, and attention ratings in all groups and with FSIQ in ALC+. Discriminant function analysis indicated that specific SCT items could distinguish ALC- from CON. Conclusions: Alcohol-exposed children exhibited elevated SCT scores. Elevations were related to increased parent ratings of internalizing and externalizing behaviors and attention. These findings are observed in alcohol-exposed children regardless of ADHD symptoms and specific SCT items proved useful in distinguishing exposed children, suggesting clinical utility for this measure in further defining the neurobehavioral profile related to prenatal alcohol exposure. abstract_id: PUBMED:31161393 Low-moderate prenatal alcohol exposure and offspring attention-deficit hyperactivity disorder (ADHD): systematic review and meta-analysis. 
Purpose: To evaluate the available evidence on the association between low-to-moderate prenatal alcohol exposure (PAE) and the development of attention-deficit hyperactivity disorder (ADHD) symptoms in the offspring. Methods: We systematically reviewed and meta-analysed studies reporting an association between low and/or moderate PAE and offspring ADHD symptoms (attention and/or hyperactivity). Systematic searches were performed in EMBASE, PubMed, Medline, and PsycINFO and reviewed from selected references. Random effects modelling was conducted to pool adjusted odds ratios (OR) at different alcohol consumption levels (≤ 20 g/week, ≤ 50 g/week, and ≤ 70 g/week). Stratified analysis by sex per alcohol level was conducted to investigate the difference in OR and the magnitude of between-study heterogeneity. Results: Ten studies were included in the systematic review and six in the meta-analysis. Eight studies found no association and two studies suggested an apparent protective effect of low PAE on hyperactivity/inattention symptoms in boys. These results were confirmed by the meta-analysis showing no association between ≤ 20 g/week [OR 1.01 (0.68-1.49)], ≤ 50 g/week [OR 0.94 (0.85-1.03)] and ≤ 70 g/week [OR 0.94 (0.86-1.02)] and ADHD symptoms, with no evidence of publication bias. Stratified analysis by sex for a PAE ≤ 50 g/week showed a lower risk of ADHD symptoms in boys compared with girls [OR 0.89 (0.83-0.96)]. Conclusions: We found no increased risk of ADHD symptoms in offspring born to mothers who drank alcohol up to 70 g/week. abstract_id: PUBMED:30729552 Predicting children's school grades: Unique and interactive effects of parental beliefs and child inattention/hyperactivity symptoms. Parental beliefs about school involvement are key in predicting individual differences in children's academic success.
The current study examined unique and interactive relations between parental beliefs and child inattention/hyperactivity symptoms in predicting children's achievement. Participants (N = 348) were caregivers of children aged 8-12. Caregivers completed questionnaires regarding their beliefs and their child's inattention/hyperactivity and achievement. Hierarchical regression analyses indicated lower child inattention/hyperactivity and greater parental confidence in their ability to help their child academically predicted better achievement. Parent/child interactions probed with simple slopes suggested an achievement gap for children with higher inattention/hyperactivity only when their parents felt less efficacious or more responsible for their child's academic success. This suggests parent self-efficacy may buffer the negative relation between children's inattention/hyperactivity symptoms and underachievement, and parents of children with higher inattention/hyperactivity may increasingly assume responsibility for their success due to feedback from the school. Statement of contribution What is already known on this subject? Academic achievement predicts several short- and long-term outcomes for children. Parental involvement beliefs are multi-faceted and predict children's academic success. Child inattention/hyperactivity symptoms are related to lower academic achievement. What does this study add? It provides specificity of previous relations for children with a range of inattention/hyperactivity symptoms. It identifies parental self-efficacy as a promising moderator of the relation between child behaviour and academics. It provides a preliminary evidence base for future work on the role of parental beliefs in child academic outcomes. abstract_id: PUBMED:23616727 Short sleep duration is associated with teacher-reported inattention and cognitive problems in healthy school-aged children. 
Purpose: Pediatric, clinical, and research data suggest that insufficient sleep causes tiredness and daytime difficulties in terms of attention-focusing, learning, and impulse modulation in children with attention deficit hyperactivity disorder (ADHD) or in those with ADHD and primary sleep disorders. The aim of the present study was to examine whether sleep duration was associated with ADHD-like symptoms in healthy, well-developing school-aged children. Patients And Methods: Thirty-five healthy children (20 boys, 15 girls), aged 7-11 years participated in the present study. Each child wore an actigraphic device on their nondominant wrist for two nights prior to use of polysomnography to assess their typical sleep periods. On the third night, sleep was recorded via ambulatory assessment of sleep architecture in the child's natural sleep environment employing portable polysomnography equipment. Teachers were asked to report symptoms of inattention and hyperactivity/impulsivity on the revised Conners Teacher Rating Scale. Results: Shorter sleep duration was associated with higher levels of teacher-reported ADHD-like symptoms in the domains of cognitive problems and inattention. No significant association between sleep duration and hyperactivity symptoms was evident. Conclusion: Short sleep duration was found to be related to teacher-derived reports of ADHD-like symptoms of inattention and cognitive functioning in healthy children. Answer: The relationship between prenatal alcohol exposure (PAE) and inattention and hyperactivity symptoms in children has been explored in various studies, with mixed results. Some studies have found that low doses of alcohol consumption during pregnancy were not related to child inattention/hyperactivity symptoms once social adversity and smoking were taken into account (PUBMED:19298478). 
However, other research has indicated that children with heavy prenatal alcohol exposure may exhibit symptoms of hyperactivity and inattention, as reported by parents and substantiated by objective laboratory measures such as actigraphy and continuous performance tasks (PUBMED:24512965). Additionally, social inequalities have been observed in the effects of PAE, with children from lower socioeconomic statuses being at higher risk of developing hyperactivity/inattention compared to those from higher socioeconomic statuses (PUBMED:22821287). Furthermore, prenatal exposure to other environmental toxins, such as polybrominated diphenyl ethers (PBDEs), has been associated with increased ADHD symptoms in adolescence, with sex-specific differences observed (PUBMED:38455925). Objective assessments have also supported the notion that inattention is a core deficit in children prenatally exposed to alcohol, with alcohol-exposed children showing less accuracy, more omission errors, longer response latencies, and increased variability in response time on ADHD symptomatology measures (PUBMED:25447751). A prospective cohort study with Brazilian children also found that prenatal alcohol-exposed children were more inattentive and impulsive and had poorer skills in verbal fluency, visuospatial working memory, and executive processing compared to non-exposed children (PUBMED:27155839). In contrast, a study examining sluggish cognitive tempo (SCT) found that alcohol-exposed children exhibited elevated SCT scores, which were related to increased parent ratings of internalizing and externalizing behaviors and attention, regardless of ADHD symptoms (PUBMED:22817778). A systematic review and meta-analysis found no increased risk of ADHD symptoms in offspring born to mothers who drank alcohol up to 70 g/week, suggesting that low-to-moderate PAE may not be associated with ADHD symptoms (PUBMED:31161393). 
Lastly, research has shown that short sleep duration is associated with teacher-reported inattention and cognitive problems in healthy school-aged children, indicating that factors other than PAE, such as sleep duration, can also contribute to these symptoms (PUBMED:23616727).
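The meta-analysis cited above (PUBMED:31161393) pooled adjusted odds ratios on the log scale. As a minimal sketch of that arithmetic, the code below recovers each study's standard error from its reported 95% CI and combines the log-ORs by inverse-variance weighting. The study-level numbers are hypothetical (not taken from the abstract), and a simple fixed-effect combination stands in for the review's random-effects model.

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Recover the standard error of a log odds ratio from a reported
    95% confidence interval (assumes symmetry on the log scale)."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

def pool_fixed_effect(ors_with_cis, z=1.96):
    """Inverse-variance (fixed-effect) pooling of odds ratios.
    Each input is (OR, ci_lower, ci_upper); returns pooled OR and 95% CI.
    Note: the review used random-effects modelling; this simpler
    fixed-effect version only illustrates the weighting arithmetic."""
    logs, weights = [], []
    for or_, low, high in ors_with_cis:
        se = se_from_ci(low, high, z)
        logs.append(math.log(or_))
        weights.append(1.0 / se ** 2)  # precision weight = 1 / variance
    pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * pooled_se),
            math.exp(pooled_log + z * pooled_se))

# Hypothetical study-level ORs with 95% CIs (illustration only)
studies = [(0.90, 0.75, 1.08), (1.05, 0.80, 1.38), (0.92, 0.82, 1.03)]
pooled, lo, hi = pool_fixed_effect(studies)
```

A random-effects version would additionally estimate the between-study variance (e.g., by the DerSimonian-Laird method) and add it to each study's variance before weighting, widening the pooled interval when studies disagree.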
Instruction: Is HLA-DR6 a protective factor against posttransplantation diabetes mellitus? Abstracts: abstract_id: PUBMED:2714046 HLA A, B, and DR antigens and complotype in Tunisian patients with diabetes mellitus. The frequencies of HLA A, B, and DR antigens as well as the Bf and C4 allotypes have been investigated in insulin-dependent diabetes mellitus (IDDM) and compared to those of healthy controls in the Tunisian population. An increase of A30, DR3, DR4, BfF1, C4Ao, and C4Bo and a decrease of B40, DR2, DR5, and DR6 were found in diabetics when compared to the values observed in controls. The strongest association was noticed with HLA DR3 and DR4. Heterozygotes DR3 DR4 were very frequent in diabetics: 24.2 per cent versus 3.6 per cent in controls (relative risk 7.72). The protective role of DR2 and DR5 antigens was also confirmed. No supratypes of HLA, Bf, and C4 alleles associated with IDDM have been observed among these Tunisian patients. abstract_id: PUBMED:20832533 Correlation between HLA and posttransplantation diabetes mellitus in the Han population in South China. Background: Posttransplantation diabetes mellitus (PTDM) is a common metabolic complication in renal transplant recipients. Some studies have revealed predisposing or protective HLA genes for PTDM. Objective: To describe the characteristics of PTDM in patients in the south of China. Methods: The present study included 195 living-donor kidney transplant recipients. Informed consent was obtained from all participants, and the study was approved by our Institutional Ethics Committee. Each donor-recipient pair was related. Twenty-two patients had PTDM, whereas 173 did not. The frequency of each HLA phenotype was compared between these 2 groups. The 195 patients were divided into 2 groups according to immunosuppression regimen, a cyclosporine group and a tacrolimus group, and the incidence of PTDM was compared between the groups.
Patients were then subdivided into 2 groups according to age, an elderly group (age 40 years or older) and a younger group (age younger than 40 years), and the incidence of PTDM was calculated and compared between these 2 groups. Results: HLA-A30 and HLA-DR7 seem to be predisposing genes for PTDM in patients in the south of China. Low dosages of calcineurin inhibitors were used in our center; there was no significant difference in the incidence of PTDM between the CsA and Tac groups. The incidence of PTDM in the elderly group was significantly higher than that in the younger group. Conclusion: Patients carrying HLA-A30 and HLA-DR7 antigens, as well as elderly patients, are at higher risk of developing PTDM. Tacrolimus does not significantly increase the incidence of PTDM. abstract_id: PUBMED:2648900 HLA, A, B, C, DR, C4, Bf in insulin-dependent diabetics in the Tunisian population There is no doubt that the autoimmune process in human disease depends on genetic factors. Varying associations were noticed between HLA DR and autoimmune disorders. The frequencies of HLA-A, -B, and DR antigens as well as the Bf and C4 allotypes have been investigated in insulin-dependent diabetes mellitus (IDDM) and compared to those of healthy controls in the Tunisian population. An increase of A30, DR3, DR4, BfF1, C4AQ0 and C4BQ0 and a decrease of B40, DR2, DR5 and DR6 were found in diabetics when compared to the values observed in controls. The strongest association was noticed with HLA DR3 and DR4. The protective role of DR2 and DR5 antigens was also confirmed. On examination of HLA, Bf and C4 alleles, two supratypes associated with IDDM were observed among the Tunisian patients.
Immunosuppressive therapy, by different diabetogenic mechanisms, is the major risk factor for PTDM. Although corticosteroids and calcineurin inhibitors are the drugs most related to PTDM, the mechanisms of other drugs have been also investigated. Available data in the literature show that, in renal transplant patients, the oral glucose tolerance test presents the highest diagnostic sensitivity for PTDM, although fasting plasma glucose is the most requested test. In addition, we also discussed the possible role of hemoglobin A1C in the diagnosis of PTDM. Renal PTDM is associated with patient survival reduction, increased cardiovascular mortality and has been linked to worsening graft function. As crucial as the screening, the accurate management of the disease can ameliorate graft and patient survival as well as the quality of life of this especial population. Several drugs have been studied in PTDM patients in order to achieve a better glycemic control and, in the best scenario, prevent PTDM. We present proper management approaches that should help to reduce the possible long-term consequences of PTDM. abstract_id: PUBMED:7222846 The HLA system and diabetes mellitus Diabetes is a heterogeneous disease, and its pathogenesis and etiology are still largely unknown. Recent studies have brought new knowledge showing that HLA antigens and diabetes mellitus are related. It has been found that the relative risk of juvenile onset diabetes requiring insulin treatment is greater for persons who are HLA-A1, A2, B8, BW15, BW40, CW3, DW3, DW4, DRW3 and DRW4 positive. The relative risk of the disease is additive in persons who have two of the above mentioned HLA-B alleles. Some HLA antigens (HLA-B7, DW2, DRW2, A11) are associated with a significantly lower risk of the disease and probably have a "protective" character. Maturity onset diabetes (MOD) and maturity onset diabetes not requiring insulin treatment (MODY) are not related to the HLA system. 
This means that MOD is completely distinct from JOD with different symptoms, course and etiopathogenesis. abstract_id: PUBMED:28834572 Multiple genetic variants associated with posttransplantation diabetes mellitus in Chinese Han populations. Objectives: Posttransplantation diabetes mellitus (PTDM) is a major complication after solid organ transplantation. This study aims to investigate the association of nine genetic variants with PTDM in Chinese Han patients. Methods: HLA-DP (rs3077, rs9277535), HLA-DQ (rs7453920), signal transducer and activator of transcription 4 (STAT4) (rs7574865), IL-28B (rs12979860, rs8099917, and rs12980275), and IL-18 (rs1946518 and rs187238) were investigated in 260 liver transplant recipients (PTDM vs non-PTDM) by high-resolution melting curve analysis. Serum interleukin (IL)-1β, IL-6, IL-8, IL-17, interferon-γ, inducible protein-10, monocyte chemoattractant protein-1, and macrophage inflammatory protein-1b were analyzed by a Bio-Plex suspension array system (Bio-Plex Multiplex Immunoassays, Bio-Rad, Hercules, CA, USA). Results: The STAT4 (rs7574865) T allele and the IL-18 (rs1946518) A allele increase the risk for insulin resistance and PTDM. Conclusions: Recipients with the STAT4 (rs7574865) T allele are associated with an increased concentration of IL-1β, interferon-γ, monocyte chemoattractant protein, and macrophage inflammatory protein-1b. The genetic variants of STAT4 (rs7574865) and IL-18 (rs1946518) may be new important markers for PTDM.
The strongest association described in the literature is the association of HLA-B27 with ankylosing spondylitis. Since then, associations have usually been observed with HLA-DR and DQ genes. We also focus on type 1 diabetes mellitus. Diabetes is an organ-specific autoimmune disease characterised by destruction of the beta-cells of the pancreatic islets of Langerhans. The cause of the destruction process is not known yet, but it seems to be triggered by repeated presentation of the self-antigen hGAD65 to the immune system. This review presents the most interesting solutions and hypotheses regarding the unanswered question of the HLA association mechanism. abstract_id: PUBMED:6334852 The HLA system in diabetes mellitus The incidence of HLA antigens was studied with due consideration for diabetes type and patient age at disease onset, and the index of the relative risk of diabetes development was calculated. In patients with type I diabetes mellitus, a significant increase of B18 antigen incidence and decrease of B7 antigen incidence were observed, as compared to the controls. No association with HLA antigens was detected in type II diabetes. No differences in HLA antigen incidence were recorded between type I diabetes mellitus patients who fell ill before 30 and those who fell ill after 30 years of age. The index of relative risk of the disease development in patients with type I diabetes is higher in the presence of B18 and lower in the presence of B7 antigens.
Aim: To analyse the role of class II HLA molecules and the distribution of haplotypes in the genetic predisposition to type 1 diabetes in Chilean families. Material And Methods: Twelve family groups comprising 58 individuals were studied. Fourteen children (10 male) less than 15 years old with diabetes and their family members were included. The allele and haplotype frequency of the population was determined in 74 unrelated healthy children. Results: Risk haplotypes such as HLA-DR3/DQB1*0201/DQA1*0501 and HLA-DQB1*0302/DQA1*0501 were more common among diabetic patients and comparable to the haplotypes described in other Caucasian populations. Meanwhile, protective haplotypes found in relatives without diabetes, such as HLA-DR2/DQB1*0301/DQA1*0301 and HLA-DR8/DQB1*0402/DQA1*0301, were absent in children with diabetes. Conclusions: The general pattern of neutral or protective haplotypes, found with higher frequency in non-diabetic individuals, indicates that their presence could confer protection against the disease, with a stronger effect than that of the haplotypes associated with the disease. abstract_id: PUBMED:21094798 HLA-B27 is a potential risk factor for posttransplantation diabetes mellitus in autosomal dominant polycystic kidney disease patients. The aim of this work was to investigate HLA phenotype predisposition to posttransplantation diabetes mellitus (PTDM) in kidney transplant recipients stratified according to kidney failure etiology. Ninety-eight transplant recipient pairs with kidney grafts from the same cadaveric donor were qualified for the study. In each pair, 1 kidney was grafted to an individual with autosomal dominant polycystic kidney disease (ADPKD group) and 1 to a recipient with a different cause of kidney failure (non-ADPKD group). All class II HLA antigens were determined with the PCR-SSP molecular method. To identify class I HLA molecules we used both molecular and serologic methods.
Diabetes was diagnosed according to the American Diabetes Association criteria. The posttransplantation observation period was 12 months. In the ADPKD group, HLA-B27 was more common in PTDM than in non-PTDM patients; 31.6% versus 11.4% (P = .069). The difference achieved significance when comparing insulin-treated with non-insulin-treated patients (44.4% vs 12.4%; P = .029). In the non-ADPKD group, HLA-A28 and HLA-B13 were observed more frequently in patients with PTDM than in recipients without diabetes (22.2% vs 2.5% [P = .0099] and 22.2% vs 3.8% [P = .020]). All of these associations were significant upon multivariate analysis. The HLA-B27 allele is a factor predisposing ADPKD patients to insulin-dependent PTDM. Antigens predisposing to PTDM among kidney graft recipients without ADPKD include HLA-A28 and B13. Answer: Based on the provided abstracts, HLA-DR6 is not specifically mentioned as a protective factor against posttransplantation diabetes mellitus (PTDM). However, HLA-DR6 has been noted as a protective factor against insulin-dependent diabetes mellitus (IDDM) in the Tunisian population, as indicated by a decrease in the frequency of DR6 in diabetic patients compared to controls (PUBMED:2714046; PUBMED:2648900). This suggests that HLA-DR6 may have a protective role in the context of IDDM, but there is no direct evidence from the abstracts to suggest that this protective role extends to PTDM. The abstracts do mention other HLA antigens that are associated with PTDM. For instance, HLA-A30 and HLA-DR7 have been identified as predisposing genes for PTDM in patients in the south of China (PUBMED:20832533), and HLA-B27 has been suggested as a potential risk factor for PTDM in autosomal dominant polycystic kidney disease patients (PUBMED:21094798). However, these findings do not directly address the role of HLA-DR6 in PTDM.
Therefore, based on the information provided in the abstracts, it cannot be concluded that HLA-DR6 is a protective factor against posttransplantation diabetes mellitus.
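Several of the HLA abstracts above report a "relative risk" derived from antigen frequencies in patients versus controls (e.g., relative risk 7.72 for DR3 DR4 heterozygotes in PUBMED:2714046; an index of relative risk in PUBMED:6334852). HLA association studies conventionally estimate this with Woolf's odds ratio from a 2x2 table. The sketch below uses hypothetical counts (not the studies' raw data), so it illustrates the calculation rather than reproducing any published value.

```python
def woolf_odds_ratio(pat_pos, pat_neg, ctl_pos, ctl_neg):
    """Woolf's odds-ratio estimate of relative risk from antigen-positive
    and antigen-negative counts in patients and controls. Adds 0.5 to
    every cell (Haldane-Anscombe correction) so the estimate stays
    finite when a cell count is zero."""
    a = pat_pos + 0.5  # patients carrying the antigen
    b = pat_neg + 0.5  # patients without the antigen
    c = ctl_pos + 0.5  # controls carrying the antigen
    d = ctl_neg + 0.5  # controls without the antigen
    return (a * d) / (b * c)

# Hypothetical counts: an antigen carried by 24 of 99 patients
# and by 4 of 111 controls (illustration only)
rr = woolf_odds_ratio(24, 75, 4, 107)
```

The correction matters in HLA work because rare antigens often produce a zero cell in one group; without it, the cross-product ratio would be zero or undefined.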
Instruction: Can positron emission tomography be used as a staging tool for small-cell lung cancer? Abstracts: abstract_id: PUBMED:18282355 Can positron emission tomography be used as a staging tool for small-cell lung cancer? Purpose: Positron emission tomography (PET) is widely used for the staging evaluation of non-small-cell lung cancer; however, its use in small-cell lung cancer (SCLC) remains investigational. Patients And Methods: We did a retrospective study of 137 patients to evaluate the role of PET in SCLC. Fifty-one of 137 patients had computed tomography (CT) and PET scans during initial evaluation of a lung mass. Results: All 51 patients had PET-positive results for malignancy (100% sensitivity). In 40 of 51 cases (78%), the PET staging correlated with that on CT. Two of 51 patients (4%) had disease that was accurately upstaged by PET. Positron emission tomography accurately downstaged disease in 6 of 51 patients (12%). Positron emission tomography detected additional sites of disease in 13 of 42 patients (32%). Of the 13 additional sites of disease, PET detected supraclavicular nodes in 4 of 13 patients (30%) and bone lesions in 4 of 13 patients (30%). The sensitivity to detect brain lesions was 5 of 11 patients (45%) in this series. In this series, the PET results from 8 of 51 patients (16%) resulted in a change in disease management. Because of PET results, 6 of 51 patients (12%) who otherwise would not have been treated were treated with radiation.
abstract_id: PUBMED:18267183 CT, positron emission tomography, and MRI in staging lung cancer. Lung cancer is a common malignancy and remains the leading cause of cancer-related deaths in both men and women in the United States. Imaging plays an important role in the detection, diagnosis, and staging of the disease as well as in assessing response to therapy and monitoring for tumor recurrence after treatment. This article reviews the staging of the two major histologic categories of lung cancer-non-small-cell lung carcinoma (NSCLC) and small-cell lung carcinoma-and emphasizes the appropriate use of CT, MRI, and positron emission tomography imaging in patient management. Also discussed are proposed revisions of the International Association for the Study of Lung Cancer's terms used to describe the extent of NSCLC in terms of the primary tumor, lymph nodes, and metastases descriptors. abstract_id: PUBMED:21524814 The role of positron emission tomography in management of small cell lung cancer. Accurate radiological staging of small-cell lung cancer (SCLC) is of paramount importance in selection of individual patients with limited stage disease for potentially curative treatment while avoiding toxic treatment in those with distant metastatic disease. [(18)F] flurodeoxy-D-glucose (FDG) positron emission tomography (PET) is an attractive tool for this purpose but there is limited evidence to support its use in the routine staging of SCLC. Whether therapeutic decisions based on FDG-PET imaging should be made remains uncertain. There is only preliminary evidence for use of FDG-PET as a prognostic biomarker, in the assessment of response to treatment and delineation of disease in conformal radiation planning. abstract_id: PUBMED:21768697 Role of positron emission tomography computed tomography in carcinoma lung evaluation. 
Lung cancer has graduated from merely a reportable disease of 1912 to being the most common cause of cancer death in developed countries in recent years. The annual number of lung cancer deaths is greater than the combined cancer deaths from breast, colon and prostate. Its association with tobacco has been proved and is related to the type, amount of tobacco used, the age at initiation and duration of use. Significant advances have been made in the diagnosis and management of lung cancer over the past decade. The primary treatment of lung cancer is surgery and the best chance for a complete cure comes from the total resection of localized disease. Once nodal or distant metastases have developed, primary surgical intervention is ruled out and the patient is considered for adjuvant chemotherapy with or without radiation therapy. Accurate staging and delineation of disease extent is therefore critical in the treatment planning of lung carcinoma patients. 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) has been proven to be a valuable noninvasive imaging modality in the evaluation of patients with known or suspected lung cancer, and its integration with computed tomography (CT) has changed the face of PET imaging in many ways. This article will review the current role of FDG PET/CT in the evaluation of pulmonary nodules, the diagnosis, staging and restaging of non-small-cell lung carcinoma (NSCLC), and the role of PET in small-cell lung carcinoma and pleural disease, and will also discuss its potential future applications. abstract_id: PUBMED:18042932 18Fluorodeoxyglucose positron emission tomography in the diagnosis and staging of lung cancer: a systematic review. Lung cancer is the leading cause of cancer-related death in industrialized countries. The overall mortality rate for lung cancer is high, and early diagnosis provides the best chance for survival. 
Diagnostic tests guide lung cancer management decisions, and clinicians increasingly use diagnostic imaging in an effort to improve the management of patients with lung cancer. This systematic review, an expansion of a health technology assessment conducted in 2001 by the Institute for Clinical and Evaluative Sciences, evaluates the accuracy and utility of 18fluorodeoxyglucose positron emission tomography (PET) in the diagnosis and staging of lung cancer. Through a systematic search of the literature, we identified relevant health technology assessments, randomized trials, and meta-analyses published since the earlier review, including 12 evidence summary reports and 15 prospective studies of the diagnostic accuracy of PET. PET appears to have high sensitivity and reasonable specificity for differentiating benign from malignant lesions as small as 1 cm. PET appears superior to computed tomography imaging for mediastinal staging in non-small cell lung cancer (NSCLC). Randomized trials evaluating the utility of PET in potentially resectable NSCLC report conflicting results in terms of the relative reduction in the number of noncurative thoracotomies. PET has not been studied as extensively in patients with small-cell lung cancer, but the available data show that it has good accuracy in staging extensive- versus limited-stage disease. Although the current evidence is conflicting, PET may improve results of early-stage lung cancer by identifying patients who have evidence of metastatic disease that is beyond the scope of surgical resection and that is not evident by standard preoperative staging procedures. Further trials are necessary to establish the clinical utility of PET as part of the standard preoperative assessment of early-stage lung cancer. abstract_id: PUBMED:12168935 Whole-body 18F-2-deoxyglucose positron emission tomography in primary staging small cell lung cancer. 
The purpose of this study was to investigate the role of 18F-2-deoxyglucose positron emission tomography (FDG-PET) in staging small cell lung cancer (SCLC), its efficacy for the discrimination of limited disease (LD) and extensive disease (ED) stages and its regional sensitivity for different metastatic locations. Twenty-five patients with histologically confirmed SCLC and 42 radiologically-staged tumor sites were retrospectively investigated. The LD sample included 10 patients while the ED included 15 patients. All of the 25 primary tumor sites (100%) were visualized and 41 out of 42 (97.6%) of the metastases could be identified, but FDG-PET was needed for anatomical localization. The efficacy of FDG-PET was studied in the staging of SCLC patients and compared with the initial staging of conventional modality findings. FDG-PET down-staged (from ED to LD) one case and up-staged (from LD to ED) one case of SCLC. In summary, all of the patients with ED were correctly staged by FDG-PET alone. We conclude that FDG-PET is a substantial tool in the staging work-up of SCLC if it is performed initially to allow fast identification of patients with extensive disease stages and thus saves additional radiological or invasive examinations. Our preliminary results support the usefulness of whole body FDG-PET for staging SCLC. abstract_id: PUBMED:11243980 Positron emission tomography in the staging of small-cell lung cancer: a preliminary study. Study Objectives: Small-cell lung cancer (SCLC) has an unfavorable prognosis, especially when the disease is extensive at presentation. Accurate staging procedures are therefore needed for treatment planning. Positron emission tomography (PET) is a modern noninvasive imaging technique, the value of which for the staging of SCLC was investigated in the present study. Setting: University hospital. 
Patients: Thirty-one patients with suspected lung cancer were investigated for staging purposes using chest radiography, CT of the thorax and abdomen, abdominal ultrasound, and bone scanning. Twenty-five patients also received PET examinations during the staging procedures. Five of these patients were found to have SCLC, while two patients had mixed lesion types. Further analysis of the latter group was carried out. Results: PET detected the primary tumor in all patients, and lymph nodes in five patients. All lymph nodes were proved to be malignant by endoscopic ultrasonography-guided fine-needle aspiration. Only one patient had distant metastases, which were detected by both CT and PET. Conclusions: PET appears to be a suitable imaging method in SCLC. A potential role for the technique as a standard staging procedure will need to be tested by investigating a larger number of patients in a prospective study. abstract_id: PUBMED:17278894 Staging and monitoring of small cell lung cancer using [18F]fluoro-2-deoxy-D-glucose-positron emission tomography (FDG-PET). Background: [18F]Fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET) scan is widely used for the staging evaluation of non-small cell lung cancer; however, its use in small cell lung cancer (SCLC) remains investigational. Patient And Methods: We designed a prospective study to evaluate the role of PET in SCLC. Patients with SCLC underwent PET scanning as well as conventional imaging before and after treatment. Results: A total of 39 PET scan examinations were performed in 21 patients with SCLC; 18 studies were performed before first-line chemotherapy and 21 studies were done during or after treatment. PET findings were compared with findings on CT scans of the chest or abdomen and bone scan. Discordant findings were detected in 14 out of 383 comparisons (4%) for 10 anatomic sites. 
In the thorax and the abdomen, PET agreed with CT scan in 92% to 100% of examinations assessing potential disease sites, including the contralateral chest, liver, and adrenals. PET agreed with bone scan in detecting bony lesions in 27 out of 32 imaging studies (84%): in 4 out of 5 discordant cases, PET findings were true and in 1 case indeterminate. Staging at baseline (limited, n = 6; extensive, n = 12) was identical when PET and sum of other staging procedures were compared. Response assessment was concordant between PET and CT scans in 8 of 9 patients who had evaluation before and after first-line chemotherapy. Conclusions: PET is potentially useful for the initial staging and monitoring of patients with SCLC and it may be superior to bone scan in detecting bone metastasis. The cost effectiveness of PET scan in SCLC remains to be determined. abstract_id: PUBMED:34794284 Distant Metastasis Patterns of Lung Cancer on Positron Emission Tomography/Computed Tomography Association with Age and Histological Subtype. Objective: To investigate whether age or other factors are determinants of distant metastasis in patients with lung cancer. Study Design: Observational study. Place And Duration Of Study: Department of Nuclear Medicine, Recep Tayyip Erdogan University, Rize, Turkey between December, 2018 and February, 2019. Methodology: A total of 152 patients with lung cancer, who underwent positron emission tomography/computed tomography (PET/CT) for staging, were included in this study. Patients were grouped according to age (>65 and <65 years) and distant metastasis status. Metastasis localisation of patients was evaluated by PET/CT. Univariate/multivariate regression analyses were performed to determine risk factors for distant metastasis. Results: No significant difference was found when the relation of distant metastasis with stage distribution was examined in both age groups. 
Distant metastasis rates were significantly higher in female patients than in male patients (p = 0.019) and in patients with small-cell lung carcinoma (SCLC)-adenocarcinoma than in those with other histopathological subtypes (p <0.001). Most of the patients in both groups had a stage 4 disease, and bone distant metastasis was the most common in both age groups. Univariate/multivariate analyses identified that female gender (p = 0.017/p = 0.003), SCLC subtype (p = 0.013/p = 0.008), T3/T4 tumour (p <0.001/p <0.001), and smoking history of >66 pack-years (p = 0.047/p = 0.047) were independent factors for the presence of distant metastasis. Conclusion: Although age is not a risk factor for distant metastasis in lung cancer, female gender, T3/4 tumour, SCLC subtype, and smoking history of >66 pack-years are high-risk factors. PET/CT is recommended as the first-choice imaging technique in patients with lung cancer indicated for distant metastasis scanning. Key Word: Lung cancer, PET/CT, Metastases, Histological subtype, T-stage. abstract_id: PUBMED:22245990 Staging and imaging of small cell lung cancer. Small cell lung cancer (SCLC) has been primarily classified as limited or extensive, with limited stage confined to the primary tumor and regional lymph nodes. In the future, the TNM staging system should be integrated into the classification of SCLC. The appropriate staging work-up for patients with SCLC has traditionally included contrast-enhanced computed tomography (CT) scans of the chest and abdomen, bone scan, and magnetic resonance imaging or CT scan of the brain. Recent data suggest that positron emission tomography can improve both staging accuracy and treatment planning in patients with SCLC. Treatment for limited-stage SCLC consists of chemotherapy plus radiotherapy, and such therapy can cure 20-25% of patients. Extensive-stage SCLC is incurable, but chemotherapy can improve quality of life and prolong life. 
Answer: Yes, positron emission tomography (PET) can be used as a staging tool for small cell lung cancer (SCLC). A retrospective study of 137 patients showed that PET had a 100% sensitivity for detecting malignancy and correlated with computed tomography (CT) staging in 78% of cases. PET was able to accurately upstage disease in 4% of patients and downstage in 12%, detecting additional sites of disease in 32% of patients. It was particularly useful in identifying occult adrenal metastasis and metastasis to supraclavicular lymph nodes that were missed by CT. However, PET was less sensitive for detecting brain lesions, with a sensitivity of 45% in the series. The use of PET resulted in a change in disease management for 16% of patients, with 12% receiving radiation treatment they would not have otherwise received due to PET findings (PUBMED:18282355). Other studies have supported the use of PET in SCLC staging. For example, a systematic review found that PET has good accuracy in staging extensive versus limited-stage disease (PUBMED:18042932). Another study concluded that PET is a substantial tool in the staging work-up of SCLC, allowing for fast identification of patients with extensive disease stages, thus potentially saving additional radiological or invasive examinations (PUBMED:12168935). A prospective study also found PET to be potentially useful for initial staging and monitoring of SCLC, and it may be superior to bone scan in detecting bone metastasis (PUBMED:17278894). Furthermore, PET/CT has been recommended as the first-choice imaging technique in patients with lung cancer indicated for distant metastasis scanning (PUBMED:34794284), and recent data suggest that PET can improve both staging accuracy and treatment planning in patients with SCLC (PUBMED:22245990). In summary, PET is a valuable tool for the accurate initial staging of SCLC and can help ensure that patients receive appropriate treatment based on a more precise assessment of disease extent. 
However, its sensitivity for brain lesions is limited, and the cost-effectiveness of PET in SCLC remains to be determined.
Instruction: Is Unwanted Birth Associated with Child Malnutrition in Bangladesh? Abstracts: abstract_id: PUBMED:26308260 Is Unwanted Birth Associated with Child Malnutrition in Bangladesh? Context: The association between unintended pregnancy and maternal and child health has been well documented. However, the relationship of unintended pregnancy with child malnutrition is not well understood, and may be important in countries such as Bangladesh that have high levels of these events. Methods: Data from the 2011 Bangladesh Demographic and Health Survey on 6,506 last-born, singleton children younger than five were used to investigate the relationship between pregnancy intendedness and the prevalence of stunting, wasting and underweight during early childhood. Multivariate logistic regression analyses were used to identify associations. Results: Substantial proportions of children were stunted (40%), wasted (16%) and underweight (35%) at the time of the survey. Mothers reported that at the time of conception, 14% of their index pregnancies had been unwanted and 16% had been mistimed. Compared with children whose conception had been intended, those whose conception had been unwanted were more likely to be stunted (46% vs. 39%), wasted (19% vs. 15%) or underweight (43% vs. 33%). In regression analyses, children who had been unwanted at the time of conception had an elevated risk of being stunted (odds ratio, 1.4), wasted (1.4) or underweight (1.3). Conclusion: Maternal pregnancy intentions are associated with child stunting, wasting and underweight. If these associations are causal, preventing unwanted pregnancies may help reduce the prevalence of childhood malnutrition in Bangladesh. abstract_id: PUBMED:27355682 Association of Low-Birth Weight with Malnutrition in Children under Five Years in Bangladesh: Do Mother's Education, Socio-Economic Status, and Birth Interval Matter? 
Background: Malnutrition in children under five years remains a significant problem in Bangladesh, despite substantial socio-economic progress and a decade of interventions aimed at improving it. Although several studies have been conducted to identify the important risk factors of malnutrition, none of them assess the role of low birth weight (LBW) despite its high prevalence (36%). This study examines the association between LBW and malnutrition using data from the Bangladesh Demographic and Health Survey (BDHS) 2011 and provides practical guidelines for improving nutritional status of children. Methods: Malnutrition in children is measured in terms of their height-for-age, weight-for-height, and weight-for-age. Children whose Z-scores for either of these indices are below two standard deviations (-2SD) from the median of WHO's reference population are considered as stunted, wasted or underweight, respectively. The association between malnutrition and LBW was investigated by calculating adjusted risk-ratio (RR), which controls for potential confounders such as child's age and sex, mother's education and height, length of preceding-birth-interval, access to food, area of residence, and household socio-economic status. Adjusted RR was calculated using both the Cochran-Mantel-Haenszel approach and multivariable logistic regression models controlling for confounders. Results: The prevalence of malnutrition was markedly higher in children with LBW than those with normal birth-weights (stunting: 51% vs 39%; wasting: 25% vs 14% and underweight: 52% vs 33%). While controlling for the known risk factors, children with LBW had significantly increased risk of becoming malnourished compared to their counterparts, with RR 1.23 (95% CI:1.16-1.30), 1.71 (95% CI:1.53-1.92) and 1.47 (95% CI: 1.38-1.56) for stunting, wasting and underweight, respectively. 
The observed associations were not modified by factors known to reduce the prevalence of malnutrition, such as higher education of mother, better household socio-economic conditions and longer birth-interval. Conclusions: Higher education of mother, better household socio-economic conditions and prolonged birth intervals alone are not sufficient in bringing about substantial reductions in prevalence of child malnutrition in Bangladesh. Targeted interventions should be designed to reduce prevalence of LBW in addition to improving mother's education and other socio-demographic conditions. abstract_id: PUBMED:33988256 Adolescent birth and child undernutrition: an analysis of demographic and health surveys in Bangladesh, 1996-2017. Adolescent birth is a major global concern owing to its adverse effects on maternal and child health. We assessed trends in adolescent birth and examined its associations with child undernutrition in Bangladesh using data from seven rounds of Demographic and Health Surveys (1996-2017, n = 12,006 primiparous women with living children <5 years old). Adolescent birth (10-19 years old) declined slowly, from 84% in 1996 to 71% in 2017. Compared with adult mothers (≥20 years old), young adolescent mothers (10-15 years old) were more likely to be underweight (+11 pp), have lower education (-24 pp), have less decision-making power (-10 pp), live in poorer households (-0.9 SD) with poorer sanitation (-15 pp), and have poorer feeding practices (10 pp), and were less likely to access health and nutrition services (-3 to -24 pp). In multivariable regressions controlled for known determinants of child undernutrition, children born to adolescents had lower height-for-age Z-scores (-0.29 SD for young and -0.10 SD for old adolescents (16-19 years old)), weight-for-age Z-score (-0.18 and -0.06 SD, respectively) as well as higher stunting (5.9 pp) and underweight (6.0 pp) than those born to adults. 
In conclusion, birth during adolescence, a common occurrence in Bangladesh, is associated with child undernutrition. Policies and programs to address poverty and improve women's education can help delay marriage, reduce early childbearing, and improve child growth. abstract_id: PUBMED:26958818 Association between order of birth and chronic malnutrition of children: a study of nationally representative Bangladeshi sample. This paper examines the net effect of birth order on child nutritional status in Bangladesh using data from the Bangladesh Demographic Health Survey, 2011 (BDHS). Analyses were restricted to 4,120 surviving, lastborn singleton children who were younger than 36 months at the time of the survey. Logistic regression was used to assess the association between birth order and child nutritional status. Results indicate 38.1% children are stunted and 8.2% children are fifth or higher order birth. Order of birth is one of the significant predictors of child being stunted. Third order, fourth order, and fifth or higher order children are 24%, 30%, and 72%, respectively, more likely to be stunted after adjusting for all other variables. Besides birth order, results also indicate that child age, size at birth, birth intention, maternal education, maternal body mass index, wealth index, place of residence and mass media access exert strong influences over child malnutrition. Reducing birth rates which limit number of births and birth order as well may reduce child malnutrition in Bangladesh. abstract_id: PUBMED:16762947 Effect of prenatal food supplementation on birth weight: an observational study from Bangladesh. Background: National nutrition programs in Bangladesh have included prenatal food supplementation to reduce maternal and child malnutrition. 
The knowledge base is weak regarding the effect of prenatal food supplementation on the birth weight (BW) of infants in populations in whom low BW is prevalent and regarding any variation in effect based on maternal nutritional status. Objective: We examined whether observational data support an effect of daily prenatal food supplementation on BW by considering the duration of supplementation and whether the effect is modified by maternal postpartum weight (a proxy of prepregnancy weight) groups. Design: A cohort of undernourished pregnant women (n = 777) who received prenatal food supplementation (608 kcal/d) was followed. The association between the uptake of food supplements and BW was analyzed after adjustment for potential confounders (n = 619 with complete information). Differential effects in lower and higher maternal postpartum weight groups were examined. Results: The average BW was 2521 g. On average, the women received daily supplements for 4 mo, which resulted in an increase in BW of 118 g (1.0 g/d). The strongest effect was found for births occurring in January and February. There was a linear dose-response relation between duration of supplementation and BW for women with higher postpartum weights (> or = 42 kg, above the median). In women with lower weights (< 42 kg, below median), a shorter duration of supplementation (< 4 mo) had no such dose-response relation with BW, but there was a linear dose-response relation for longer durations of supplementation. Conclusions: The association between duration of prenatal food supplementation and BW varies with maternal postpartum weight. A large effect was observed after the season with food insecurity (mid-August to mid-November). abstract_id: PUBMED:32153954 Correlates of stunting among under-five children in Bangladesh: a multilevel approach. Background: Child malnutrition still remains a major cause of childhood morbidity and mortality in Bangladesh. 
This study aims to determine the prevalence and identify the associated risk factors of child malnutrition in Bangladesh using a multilevel logistic regression model on data from the Bangladesh Demographic and Health Survey (BDHS), 2014. Methods: A total sample of 6965 children aged 0-59 months was extracted from BDHS 2014. We performed descriptive analysis and multilevel generalized linear regression analysis with a clustered data structure. Results: Our findings show that among children the prevalence of moderate and severe values was respectively: 25 and 12% for stunting; 11 and 3.1% for wasting; 25 and 7.9% for underweight. The probability of stunting increased with age, with the highest rate among children aged 36-47 months, which was significantly higher than children aged less than 6 months (OR = 6.71, 95% CI = 4.46, 10.10). Female children were found to be 11% less likely to be stunted than male children (OR = 0.89, 95% CI = 0.78, 1.02). Children with birth interval less than 24 months were significantly more likely to be stunted than children of first birth by 36% (OR = 1.36, 95% CI = 1.11, 1.67). Mothers with a normal BMI were 16% less likely to have children with stunting compared to mothers who are underweight (OR = 0.84, 95% CI = 0.76, 0.93). Other factors which were associated with a higher risk of stunting included parents with lower educational levels, children from the poorest wealth index, and mothers aged less than 20 years at first birth. Conclusion: Government and non-government organizations should develop effective programs to make women of reproductive age aware of the adverse effects of short birth intervals, and to make parents aware of the standard height and weight for their children's age and sex. Overall, necessary steps may be taken to improve education and to reduce household wealth inequality so as to improve the nutritional status of children. 
abstract_id: PUBMED:34814880 Urban-rural differences in the associated factors of severe under-5 child undernutrition based on the composite index of severe anthropometric failure (CISAF) in Bangladesh. Introduction: Severe undernutrition among under-5 children is usually assessed using single or conventional indicators (i.e., severe stunting, severe wasting, and/or severe underweight). But these conventional indicators partly overlap, thus not providing a comprehensive estimate of the proportion of malnourished children in the population. Incorporating all these conventional nutritional indicators, the Composite Index of Severe Anthropometric Failure (CISAF) provides six different undernutrition measurements and estimates the overall burden of severe undernutrition with a more comprehensive view. This study applied the CISAF indicators to investigate the prevalence of severe under-5 child undernutrition in Bangladesh and its associated socioeconomic factors in the rural-urban context. Methods: This study extracted the children dataset from the 2017-18 Bangladesh Demographic Health Survey (BDHS), and the data of 7661 children aged under-5 were used for further analyses. CISAF was used to define severe undernutrition by aggregating conventional nutritional indicators. Bivariate analysis was applied to examine the proportional differences of variables between the non-severe undernutrition and severe undernutrition groups. The potential associated socioeconomic factors for severe undernutrition were identified using the adjusted model of logistic regression analysis. Results: The overall prevalence of severe undernutrition measured by CISAF among the children under-5 was 11.0% in Bangladesh (rural 11.5% vs urban 9.6%). 
The significant associated socioeconomic factors of severe undernutrition in rural areas were children born with small birth weight (AOR: 2.84), children from the poorest households (AOR: 2.44), children aged < 36 months, and children of uneducated mothers (AOR: 2.15). Similarly, in urban areas, factors such as children with small birth weight (AOR: 3.99), children of uneducated parents (AOR: 2.34), poorest households (AOR: 2.40), underweight mothers (AOR: 1.58), mothers without postnatal care (AOR: 2.13), and children's birth order ≥4 (AOR: 1.75) showed positive and significant association with severe under-5 undernutrition. Conclusion: Severe undernutrition among under-5 children dominates in Bangladesh, especially in rural areas and the poorest urban families. More research should be conducted using such composite indices (like CISAF) to depict the comprehensive scenario of severe undernutrition among under-5 children and to inform multi-sectoral intervention programs for eradicating severe child undernutrition. abstract_id: PUBMED:34075819 Residential district multi-hazard risk is associated with childhood undernutrition: evidence from Bangladesh. Child undernutrition and natural disasters are major public health concerns in Bangladesh, but research into their relationship is lacking. This study assessed the association between residential district multi-hazard-risk and undernutrition among children aged less than 5 years (under-5) in Bangladesh. Data for 22,055 under-5 children were extracted from the 2019 Multiple Indicator Cluster Survey of Bangladesh. Multi-hazard risk was categorized as low (score<10), moderate (score 10-20), and high (score>20) using a combined score of four major hazards: tornado, cyclone, earthquake, and flood. We found that children from high multi-hazard risk districts were 19% more likely to be stunted and 23% more likely to be underweight compared to low-risk districts. 
However, wasting was not associated with multi-hazard risk. Strategies such as agricultural adaptation and coping mechanisms, long-term post-disaster nutritional response, extended periods of relief supports, and enhanced quality maternal and child care services may help to reduce undernutrition burdens in Bangladesh. abstract_id: PUBMED:32787991 Socio-demographic risk factors for severe malnutrition in children aged under five among various birth cohorts in Bangladesh. Tackling malnutrition is a major health priority for a developing country like Bangladesh. This study explored the differences in prevalence of having only one form, and multiple forms, of severe malnutrition (stunting, wasting and underweight) among under-5 children in Bangladesh, and aimed to identify the important factors affecting these. Data were extracted from the Bangladesh Demographic Health Surveys conducted in 2007, 2011 and 2014. The outcome measures were 'only one form' and 'multiple forms' of severe malnutrition in children aged under 5 years. A Chi-squared test was performed to find the association of outcome variables with selected socio-demographic factors and logistic regression models were applied to identify risk factors. A total of 19,874 children aged under 5 years were included in the analysis. The overall proportion with one form of severe child malnutrition was approximately 12%, and the proportion with multiple forms was 8%. Age, mother's education, father's occupation, mother currently working, watching television, source of water, solid waste used in cooking, intimate partner violence (IPV), wealth index, urban/rural place of residence and birth cohort were found to be significant factors for both having only one and having multiple forms of severe child malnutrition. Children with an uneducated mother of poor socioeconomic class had a higher risk of severe malnutrition. 
Children of fathers with a professional occupation were at lower risk of having multiple forms of severe malnutrition. The proportions of children aged under 5 years with one or multiple forms of severe malnutrition were shown to be high in Bangladesh. The prevention of malnutrition in the country should be seen as a significant public health issue and given top priority. abstract_id: PUBMED:36701381 Prevalence and risk predictors of childhood stunting in Bangladesh. Background: The child nutritional status of a country is a potential indicator of socioeconomic development. Child malnutrition is still the leading cause of severe health and welfare problems across Bangladesh. The most prevalent form of child malnutrition, stunting, is a serious public health issue in many low and middle-income countries. This study aimed to investigate the heterogeneous effect of some child, maternal, household, and health-related predictors, along with the quantiles of the conditional distribution of Z-score for height-for-age (HAZ) of under five children in Bangladesh. Methods And Materials: In this study, a sample of 8,321 children under five years of age was studied from BDHS-2017-18. The chi-square test was mainly used to identify the significant predictors of the HAZ score and sequential quantile regression was used to estimate the heterogeneous effect of the significant predictors at different quantiles of the conditional HAZ distribution. Results: The findings revealed that female children were significantly shorter than their male counterparts except at the 75th quantile. It was also discovered that children aged 7-47 months were disadvantaged, but children aged 48-59 months were advantaged in terms of height over children aged 6 months or younger. Moreover, children with a higher birth order had significantly lower HAZ scores than 1st birth order children. 
In addition, home delivery, the duration of breastfeeding, and BCG vaccination and vitamin A receipt status were found to have varying significant negative associations with the HAZ score. Likewise, seven or fewer antenatal care visits were negatively associated with the HAZ score, but more than seven antenatal care visits were positively associated with the HAZ score. Additionally, children who lived in urban areas and whose mothers were over 18 years and either normal weight or overweight had a significant height advantage. Furthermore, parental secondary or higher education had a significant positive but varied effect across the conditional HAZ distribution, except for the mother's education at the 50th quantile. Children from wealthier families were also around 0.30 standard deviations (SD) taller than those from the poorest families. Religion also had a significant relationship with the conditional HAZ distribution in favor of non-Muslim children. Conclusions: To enhance children's nutritional levels, intervention measures should be designed considering the estimated heterogeneous effect of the risk factors. This would accelerate progress towards achieving the targets of the Sustainable Development Goals (SDGs) related to child and maternal health in Bangladesh by 2030. Answer: Yes, unwanted birth is associated with child malnutrition in Bangladesh. According to the study using data from the 2011 Bangladesh Demographic and Health Survey, children whose conception had been unwanted were more likely to be stunted (46% vs. 39%), wasted (19% vs. 15%), or underweight (43% vs. 33%) compared to children whose conception had been intended. The regression analyses indicated that children who had been unwanted at the time of conception had an elevated risk of being stunted (odds ratio, 1.4), wasted (1.4), or underweight (1.3) (PUBMED:26308260).
This suggests that maternal pregnancy intentions are associated with child stunting, wasting, and underweight, and preventing unwanted pregnancies may help reduce the prevalence of childhood malnutrition in Bangladesh.
Instruction: Should children at risk for familial adenomatous polyposis be screened for hepatoblastoma and children with apparently sporadic hepatoblastoma be screened for APC germline mutations? Abstracts: abstract_id: PUBMED:16317745 Should children at risk for familial adenomatous polyposis be screened for hepatoblastoma and children with apparently sporadic hepatoblastoma be screened for APC germline mutations? Background: Hepatoblastoma (HB) is the most frequent liver tumor in childhood, occurring in the first few years of life. Surgery combined with chemotherapy has resulted in dramatic improvements in prognosis. However, even today, about one quarter of affected children do not survive the disease. Compared to the general population, the risk of HB is 750-7,500 times higher in children predisposed to familial adenomatous polyposis (FAP), an autosomal-dominant cancer predisposition syndrome caused by germline mutations in the tumor suppressor gene APC. Only limited data exist about the frequency of APC germline mutations in cases of apparently sporadic HB without a family history of FAP. Procedure: In our sample of 1,166 German FAP families, all known cases of HB were registered. In addition, 50 patients with apparently sporadic HB were examined for APC germline mutations. Results: In the FAP families, seven unrelated cases of HB are documented; three had been detected at an advanced stage. In patients with apparently sporadic HB, germline mutations in the APC gene were identified in 10%. Conclusions: These data raise the issue of the appropriate screening for HB in children of FAP patients. To date, the efficiency of surveillance for HB is unclear. In Beckwith-Wiedemann syndrome (BWS), recent studies suggest an earlier detection of both Wilms tumor and HB by frequent screening.
We discuss the rationale and implications of a screening program; besides the examination procedure itself, screening for HB in children of FAP patients would have important consequences for the policy of predictive testing in FAP. In a substantial fraction of sporadic HB, the disease is obviously the first manifestation of a de novo FAP. These patients should be identified by routine APC mutation screening and undergo colorectal surveillance thereafter. abstract_id: PUBMED:18955873 Germline APC mutations are not commonly seen in children with sporadic hepatoblastoma. Hepatoblastoma is the most common primary liver tumor in childhood and occurs more commonly in families with familial adenomatous polyposis. Germline mutations of the gene responsible for familial adenomatous polyposis--adenomatous polyposis coli (APC)--are described in patients with hepatoblastoma even without a family history. We investigated children presenting with apparently sporadic hepatoblastoma between 1991 and 2004. Blood samples were available from 29 children (18 boys) whose conditions were diagnosed at a median age of 22 months (range 6-119 months). No germline APC mutations were found, which does not support the need for routine screening in sporadic hepatoblastoma in the absence of a suggestive family history of colorectal cancer or suspicion of familial adenomatous polyposis. abstract_id: PUBMED:29251405 Germline APC mutations in hepatoblastoma. Background: Conflicting reports on the frequency of germline adenomatous polyposis coli (APC) gene mutations in patients with hepatoblastoma (HB) have called into question the clinical value of APC mutation testing on apparently sporadic HB. Methods: An Institutional Review Board approved retrospective review of clinical data collected from patients with HB who received APC testing at our institution was conducted. All HB patients seen at Cincinnati Children's Hospital Medical Center were eligible for testing. 
Potential genotype/phenotype correlations were assessed. Results: As of July 2015, 29 patients with HB had received constitutional APC testing. Four (14%) were found to have pathogenic truncations of the APC protein, and an additional two (7%) had APC missense variants of unknown clinical significance. Two patients (7%) had family histories indicative of familial adenomatous polyposis (FAP). Response to chemotherapy tracked differently in APC pathogenic cases, with a slower imaging response despite an equivalent or slightly faster α-fetoprotein (AFP) response. Conclusion: The prevalence of pathogenic APC variants in apparently sporadic HB may be higher than previously detected. Differences in time to imaging response, despite similar AFP response, may impact surgical planning. All patients with HB warrant germline APC mutation testing for underlying FAP. abstract_id: PUBMED:8764128 Somatic mutations of the APC gene in sporadic hepatoblastomas. Hepatoblastoma is a rare hepatic malignancy that occurs in children with an average age of 2 or 3 years and is known to be one of the extracolonic manifestations of familial adenomatous polyposis. Only a single hepatoblastoma with a germ-line mutation of the adenomatous polyposis coli (APC) gene has been reported thus far. To elucidate the possible roles of APC gene alterations in sporadic hepatoblastomas, we examined loss of heterozygosity (LOH) at the APC and MCC loci and performed a sequencing analysis of a part of the APC gene, including the mutation cluster region, in 13 hepatoblastomas of non-familial adenomatous polyposis patients. LOH at the APC and/or MCC loci was observed in four of seven (57%) informative cases. Of the 13 cases, somatic mutations were detected in 8 (61.5%), with 9 (69%) cases showing genetic alterations in the APC gene as LOH or somatic mutations. Two cases demonstrated double mutations.
Furthermore, the nature of the somatic mutations observed in the present study was unusual because 9 of the 10 mutations were missense, with only 1 case featuring a frame-shift mutation due to an insertion. Previous reports have described almost all (>90%) mutations of the APC gene in colorectal tumors to result in a truncated APC protein due to either frame-shift or nonsense mutations. These findings suggest that a mutation of the APC gene may play an important role in the genesis of sporadic hepatoblastomas, and the mechanisms of APC gene alteration may be different from those reported previously for colorectal tumors. abstract_id: PUBMED:31069152 APC germline hepatoblastomas demonstrate cisplatin-induced intratumor tertiary lymphoid structures. Hepatoblastoma (HB) is the most common liver cancer in children. We aimed to characterize HB related to APC (Adenomatous Polyposis Coli) germline mutation (APC-HB). This French multicentric retrospective study included 12 APC-HB patients under 5 at diagnosis. Clinical features of APC-HB were compared to the French SIOPEL2-3 cohort of HB patients. Molecular and histopathological analyses of APC-HB were compared to 15 consecutive sporadic HB treated at Bicêtre hospital from 2013 to 2015 (non-APC-HB). APC-HB patients have a peculiar spectrum of germline APC mutations, with no events in the main hotspot of classical APC mutations at codon 1309 (P < .05). Compared to sporadic HB, they have similar clinical features including good prognosis since all patients are alive in complete remission at last follow-up. APC-HB are mostly well-limited tumors with fetal predominance and few mesenchymal components. All APC-HB have an activated Wnt/β-catenin pathway without CTNNB1 mutation, confirming that germline APC and somatic CTNNB1 mutations are mutually exclusive (P < .001). 
Pathological review identified massive intratumor tertiary lymphoid structures (TLS) containing both lymphocytes and antigen-presenting cells in all 11 APC-HB cases that received cisplatin-based neoadjuvant chemotherapy but not in five pre-chemotherapy samples (four paired biopsies and one patient resected without chemotherapy), indicating that these TLS are induced by chemotherapy (P < .001). Conclusion: APC-HB show a good prognosis; they are all infiltrated by cisplatin-induced TLS, a feature only retrieved in a minority of non-APC-HB. This suggests that APC inactivation can synergize with cisplatin to induce an immunogenic cell death that initiates an anti-tumor immune response. abstract_id: PUBMED:17026565 Singapore familial adenomatous polyposis (FAP) patients with classical adenomatous polyposis but undetectable APC mutations have accelerated cancer progression. Objectives: Germline mutation in adenomatous polyposis coli (APC) is detected in up to 80% of familial adenomatous polyposis (FAP) patients worldwide. In this study, we evaluated clinical features and APC mutations of Singapore FAP patients and contrasted genotype-phenotype correlation with Caucasians from other regions of the world and between FAP patients with and without detectable APC mutations. Methods: We screened 242 members from 57 unrelated FAP families using a combination of cDNA protein truncation test, multiplex ligation-dependent probe amplification, and differential expression techniques. Results: APC germline mutations were detected in 50 families. In contrast to Caucasians, fundic gland polyposis in Singapore patients was associated with APC mutations throughout the coding region and osteomas were also not confined to codon 767-1573. There was also no FAP-associated hepatoblastoma or medulloblastoma.
APC mutation-negative patients from four families with mixed (adenomatous/hyperplastic/atypical juvenile) polyps were subsequently reclassified as hereditary mixed polyposis syndrome (HMPS) patients. APC mutation-negative patients with classical adenomatous polyposis were negative for MYH, beta-catenin, and Axin 1 mutations. These patients had a significantly older age at diagnosis (P < 0.001) and more colorectal cancers (P= 0.017) than patients with APC mutations. Conclusions: We achieved a 94% (50/53) APC mutation detection rate via a combination of techniques, suggesting that the current detection rate is probably not exhaustive. Singapore patients have some features similar to and other features distinct from Caucasians. Furthermore, APC mutation-negative patients have accelerated cancer progression that merits closer surveillance. abstract_id: PUBMED:23715166 Multifocal hepatic neoplasia in 3 children with APC gene mutation. Hepatoblastoma (HB), the most common hepatic neoplasm in children is associated with germline mutations in adenomatous polyposis coli tumor-suppressor gene that cause familial adenomatous polyposis syndrome. Individuals with familial adenomatous polyposis have a 750 to 7500× the risk of developing HB. We report 3 children with APC gene mutation, who underwent resection or liver transplant for HB. In addition to HB, all 3 patients had multiple independent adenoma-like nodules lacking qualities of intrahepatic metastases. Twenty-five nodules were subjected to immunohistochemical analysis using a panel of antibodies including glypican-3 (GPC3), β-catenin, cytokeratin AE1/AE3, CD34, Ki-67, glutamine synthetase (GS), and fatty acid binding protein. The nodules were round, ranged in size from 0.2 to 1.5 cm, and paler than the background liver. All lacked the chemotherapy effect. 
The nodules were circumscribed but nonencapsulated and composed of well-differentiated hepatocytes with occasional minor atypical features and absent or rare portal tracts. One lesion displayed a "nodule-within-nodule" pattern. The nodules demonstrated diffuse GS overexpression. Nine (36%) nodules were focally reactive for GPC3, and 1 (4%) displayed focal nuclear β-catenin expression. The associated HB showed diffuse expression of GS, GPC3, and β-catenin nuclear staining. We interpret these nodules as neoplastic with most being adenomas (GPC3 negative) that show features of independent origin and represent early stages of carcinogenesis, implying potential to progress to HB or hepatocellular carcinoma. To our knowledge, this is the first report of multifocal neoplasms in patients with HB and APC gene mutation. abstract_id: PUBMED:34001798 Early Development of Colonic Adenocarcinoma With Minimal Polyposis in a Young Child With Metastatic Hepatoblastoma and Germline APC Mutation. Germline adenomatous polyposis coli (APC) gene mutation is a cancer-predisposing condition commonly presenting as familial adenomatous polyposis. We describe a patient first diagnosed at the age of 3 years with metastatic hepatoblastoma. With a positive family history, germline testing confirmed maternally inherited APC mutation (p.Thr899Ansfs*13). The patient was subsequently diagnosed at 8 years with colonic adenocarcinoma in the absence of macroscopic polyposis. Total colectomy with adjuvant chemotherapy was delivered and the patient remained disease-free for 5 years since the second diagnosis. This report demonstrates the importance of considering germline APC mutation in children with hepatoblastoma, who may benefit from the early institution of colonoscopic surveillance. abstract_id: PUBMED:9927029 Childhood hepatoblastomas frequently carry a mutated degradation targeting box of the beta-catenin gene. 
Hepatoblastomas (HBs) are embryonal tumors affecting young children and representing the most frequent malignant liver tumors in childhood. The molecular pathogenesis of HB is poorly understood. Although most cases are sporadic, the incidence is highly elevated in patients with familial adenomatous polyposis coli. These patients carry germline mutations of the APC tumor suppressor gene. APC controls the degradation of the oncogene product beta-catenin after its NH2-terminal phosphorylation on serine/threonine residues. APC, as well as beta-catenin, has been found to be a central effector of the growth promoting wingless signaling pathway in development. To find out if this pathway is involved in the pathogenesis of sporadic HBs, we examined 52 biopsies and three cell lines from sporadic HBs for mutations in the APC and beta-catenin genes. Using single-strand conformational polymorphism analysis, deletion screening by PCR, and direct sequencing, we found a high frequency of beta-catenin mutations in sporadic HBs (48%). The mutations affected exon 3 encoding the degradation targeting box of beta-catenin leading to accumulation of intracytoplasmic and nuclear beta-catenin protein. The high frequency of activating mutations in the beta-catenin gene indicates an important role in the pathogenesis of HB. abstract_id: PUBMED:30619485 Whole-Genome Sequencing Identifies a Novel Variation of WAS Gene Coordinating With Heterozygous Germline Mutation of APC to Enhance Hepatoblastoma Oncogenesis. Hepatoblastoma (HB), a leading primary hepatic malignancy in children, originates from primitive hepatic stem cells. This study aimed to uncover the genetic variants that are responsible for HB oncogenesis. One family, which includes the healthy parents, and two brothers affected by HB, was recruited. 
Whole-genome sequencing (WGS) of germline DNA from all the family members identified two maternal variants, located within APC gene and X-linked WAS gene, which were harbored by the two brothers. The mutation of APC (rs137854573, c.C1606T, p.R536X) could result in HB carcinogenesis by activating Wnt signaling. The WAS variant (c.G3T, p.M1-P5del) could promote HB cell proliferation and inhibit T-cell-based immunity by activating PLK1 signaling and inactivating TCR signaling. Further analysis reflected that WAS deficiency might affect the antitumor activity of natural killer and dendritic cells. In summary, the obtained results imply that an APC mutant together with an X-linked WAS mutant, could lead to HB tumorigenesis by activating Wnt and PLK1 signaling, inhibiting TCR signaling, and reducing the antitumor activity of natural killer and dendritic cells. Answer: The question of whether children at risk for familial adenomatous polyposis (FAP) should be screened for hepatoblastoma (HB) and whether children with apparently sporadic HB should be screened for APC germline mutations is a complex one, with differing viewpoints in the literature. On one hand, there is evidence suggesting that children predisposed to FAP have a significantly higher risk of developing HB compared to the general population, with the risk being 750-7,500 times higher. In a sample of 1,166 German FAP families, seven unrelated cases of HB were documented, and in patients with apparently sporadic HB, germline mutations in the APC gene were identified in 10% of cases (PUBMED:16317745). This suggests that there may be a rationale for screening children of FAP patients for HB, as early detection could potentially improve outcomes. 
Additionally, the study found that in a substantial fraction of sporadic HB, the disease is the first manifestation of a de novo FAP, indicating that these patients should be identified by routine APC mutation screening and undergo colorectal surveillance thereafter (PUBMED:16317745). Conversely, another study found no germline APC mutations in a cohort of 29 children with sporadic HB, suggesting that routine screening for APC mutations in sporadic HB may not be necessary in the absence of a suggestive family history of colorectal cancer or suspicion of FAP (PUBMED:18955873). However, a retrospective review at Cincinnati Children's Hospital Medical Center found that 14% of patients with HB had pathogenic truncations of the APC protein, and an additional 7% had APC missense variants of unknown clinical significance. This study concluded that the prevalence of pathogenic APC variants in apparently sporadic HB may be higher than previously detected and that all patients with HB warrant germline APC mutation testing for underlying FAP (PUBMED:29251405). In summary, while there is evidence to support the screening of children at risk for FAP for HB and the screening of children with apparently sporadic HB for APC germline mutations, the necessity and efficiency of such screening programs are not universally agreed upon. The decision to screen may depend on individual risk factors, family history, and the prevalence of APC mutations in the specific population being considered.
Instruction: Does feedback matter? Abstracts: abstract_id: PUBMED:36575844 Understanding the Feedback Loops between Energy, Matter and Life. This review gathers recent findings in biophysics that shed light on the biological principle of self-organization, spanning from molecules to more complicated systems with higher information processing capacity. The focus is on "feedback loops" from information and matter to an exchange component with a more fundamental meaning than "cybernetic regulation" and "maintenance of homeostasis". This article proposes that electric and electromagnetic forces are the most important mediators over large distances. Field-like mediation is distinguished from cell-to-cell communication by special electric- or ion-guiding mechanisms that create additional pathways to the "classical" mediators such as nerve conduction or blood flow. Resonance phenomena from phonons and photons in the visible range will be discussed in relation to organelles, cytoskeletal elements and molecules. In this context, the aqueous surrounding of molecules and cells is an important aspect. Many of these phenomena are caused by quantum physics, such as the tunneling of electrons in enzymes or in other coherent working systems. This suggests that quantum information processing is also spread over large-scale areas of an organism. abstract_id: PUBMED:30559479 Invasion of white matter tracts by glioma stem cells is regulated by a NOTCH1-SOX2 positive-feedback loop. Early invasive growth along specific anatomical structures, especially the white matter tract, is regarded as one of the main causes of poor therapeutic outcome of people with gliomas. We show that some glioma stem cells (GSCs) are preferentially located along white matter tracts, which exhibit a demyelinated phenotype, at the invasive frontier of glioma tissues. These GSCs are CD133+Notch1+, whereas the nerve fibers express the Notch ligand Jagged1. 
The Notch-induced transcription factor Sox9 promotes the transcription of SOX2, and the methylation level of the NOTCH1 promoter is attenuated by the upregulation of SOX2 to reinforce NOTCH1 expression in GSCs. This positive-feedback loop in a cohort of glioma subjects is correlated with a poor prognosis. Inhibition of Notch signaling attenuates the white-matter-tract tropism of GSCs. These findings provide evidence indicating that the NOTCH1-SOX2 positive-feedback loop controls GSC invasion along white matter tracts. abstract_id: PUBMED:3687169 Marital feedback behavior: relations between feedback activity of the partner and feedback quality, duration of the marriage and ability of the marriage to function The effect of the quality of feedback and the feedback activity of partners on the feedback behavior of married couples, who differed with regard to the duration and the functioning of their marriages, was investigated. 106 married couples were divided into a group of disturbed marriages in the first half of life (GJE), a group of disturbed marriages in the second half of life (GAE), and harmonious marriages (HAE). During a conflict conversation structured according to the revealed-differences technique, the partners exchanged positive and negative feedback visually and acoustically. Based on feedback frequency, active and less active partners were distinguished. In comparison with the couples of the HAE group, couples of the GAE group gave less feedback, especially less positive feedback. The less active partner of the GAE group gave significantly more negative feedback than the comparable partner of the HAE group. No statistically significant differences were found between the GAE and GJE groups. The results are discussed with respect to their practical application.
abstract_id: PUBMED:35899738 Feedback in medical education - separate coaching for improvement from summative assessment A supervisor's feedback can change a medical learner's behaviour consistently if the learner views the supervisor as a credible role model. A learner's trust in the supervisor is a prerequisite for feedback to contribute to effective learning. In current educational practice, coaching for improvement and summative assessment are frequently mixed, which leads medical learners to experience workplace based assessments as tests and makes them unresponsive to formative feedback. Carefully separating coaching for improvement from summative assessment is required to allow the learner to accept and apply the feedback given by the supervisor. Supervisors should focus their attention to providing formative feedback, not to documenting it. The R2C2 model (rapport - receptivity - content - coaching) is a useful tool to effectively provide constructive formative feedback. abstract_id: PUBMED:27334086 Peer feedback for trainers in general practice In medical specialist training programmes it is common practice for residents to provide feedback to their medical trainers. The problem is that due to its anonymous nature, the feedback often lacks the specificity necessary to improve the performance of trainers. If anonymity is to be abolished, there is a need for residents to feel safe in giving their feedback. Another way to improve the performance of trainers might be peer feedback. For peer feedback it is necessary that trainers observe each other during their training sessions with the residents. In speciality training in general practice peer feedback is done in group sessions of 12 trainers. They show videos of their training sessions and get feedback from their fellow trainers. Trainers also visit each other in their practices to observe training sessions and provide feedback. 
In order to improve trainer performance there is a need for more focus on peer feedback in medical specialist training programmes. abstract_id: PUBMED:18976615 Feedback in postgraduate medical training Feedback may be described as a process comprising communication of information and reactions to such communication. It has been defined as specific information about the difference between a trainee's observed performance and a given standard with the intent of achieving performance improvement. Feedback is essential in medical education and has great implications for the educational climate. It has been shown that a common language regarding the principles of feedback has a sustained effect on quality and frequency of feedback. Further research is needed on feedback and educational climate, and on how to motivate trainees to improve future learning through feedback. abstract_id: PUBMED:2269169 Acoustic feedback for probing at constant force Probing of the gingival crevice is generally recognized as the most important diagnostic procedure in periodontitis. Reliable measurements are only possible by probing at constant force. A freshly isolated porcine mandible was used to test whether acoustic feedback enhances the reliability of probing. Our results indicate that the feedback significantly reduces the total probing force and also the variance between single measurements. abstract_id: PUBMED:23739608 Feedback during laparoscopic training A recent Danish study showed that instructor feedback significantly reduced the duration of training time needed for acquiring laparoscopic skills. While there is a clear advantage to trainees reaching a predetermined expert level of performance more rapidly, this does not necessarily imply that the skills were also acquired more efficiently. Experiencing continual feedback while undergoing a training task could reduce the level of difficulty in performing it; the presence of an instructor can also heighten emotional tension.
Both of these factors can impair the learning process. For this reason, we recommend self-directed feedback during training on complex laparoscopic skills. abstract_id: PUBMED:36420849 Audit & Feedback: how it works. This article is the first of a series that aims to describe the Audit & Feedback (A&F) methodology. Some key elements focus on what A&F is and how it works. While it is an effective tool for promoting change in professional behaviour and improving the quality of care, there is still substantial uncertainty concerning how to implement A&F interventions to maximize their effects. The article explains how to design effective A&F on relevant issues, considering the available literature and direct experiences conducted in the National Health System (NHS). A&F interventions should aim to achieve clear, attainable, and evaluable objectives, which concern aspects of care for which there is solid evidence in the literature and potential space for improvement. Based on data that measure any distance between what is expected and observed in local practice, the feedback must be directed to those who can pursue the proposed change and who must trust the data collection and analysis process. Feedback should be provided more than once, in verbal and written form, and might include explicit objectives and an action plan. When planning A&F interventions, it is essential to provide specific data (e.g., aggregated at the level of a team, department, or individual doctor) rather than general data, and to send them directly to the professional or department involved rather than generically to the healthcare organization involved. In addition, it is essential to simplify the message so that the staff who receive the feedback can quickly understand the quality of the performance addressed and how to change it.
Finally, it is necessary to encourage collaboration between the various healthcare professionals who are responsible for the quality of care and competent in improvement interventions (health professions, health management, quality expert personnel, and methodologists). Networking between staff improves the knowledge and effectiveness of A&F. The article closes with practical examples of two main aspects of A&F planning from the context of the EASY-NET program: how to increase the participation and involvement of the recipients of the intervention, and the related pros and cons regarding the choice between the use of routinely available data from health information systems (SIS) and data collected ad hoc. abstract_id: PUBMED:26753788 Enhanced air pollution via aerosol-boundary layer feedback in China. Severe air pollution episodes have been frequent in China in recent years. While high emissions are the primary reason for increasing pollutant concentrations, the ultimate cause for the most severe pollution episodes has remained unclear. Here we show that a high concentration of particulate matter (PM) will enhance the stability of an urban boundary layer, which in turn decreases the boundary layer height and consequently causes further increases in PM concentrations. We estimate the strength of this positive feedback mechanism by combining a new theoretical framework with ambient observations. We show that the feedback remains moderate at fine PM concentrations lower than about 200 μg m(-3), but that it becomes increasingly effective at higher PM loadings resulting from the combined effect of high surface PM emissions and massive secondary PM production within the boundary layer. Our analysis explains why air pollution episodes are particularly serious and severe in megacities and during the days when synoptic weather conditions stay constant. Answer: Yes, feedback matters significantly across various domains, as evidenced by the provided abstracts.
In biophysics, feedback loops are crucial for understanding the self-organization of biological systems, where electric and electromagnetic forces play a vital role in mediating interactions over large distances, suggesting that quantum information processing might be involved in large-scale areas of an organism (PUBMED:36575844). In medical research, feedback loops have been identified as key mechanisms in the invasion of white matter tracts by glioma stem cells, with a NOTCH1-SOX2 positive-feedback loop being associated with poor prognosis in glioma patients (PUBMED:30559479). In the context of social relationships, marital feedback behavior has been studied, showing that the quality and activity of feedback between partners can influence the functioning and duration of marriages (PUBMED:3687169). In medical education, feedback is essential for changing a learner's behavior, with the effectiveness of feedback being influenced by the learner's trust in the supervisor and the separation of coaching for improvement from summative assessment (PUBMED:35899738). Peer feedback among medical trainers is also highlighted as a means to improve performance, suggesting the importance of direct observation and feedback in professional development (PUBMED:27334086). Feedback is further recognized as a critical component in postgraduate medical training, where it serves as a communication tool to help trainees understand the gap between their performance and a given standard, with the intent of performance improvement (PUBMED:18976615). In dental diagnostics, acoustic feedback has been shown to enhance the reliability of probing at constant force (PUBMED:2269169). During laparoscopic training, feedback can reduce training time, although self-directed feedback is recommended for complex skills to avoid impairing the learning process (PUBMED:23739608). 
Audit & Feedback (A&F) methodology is discussed as an effective tool for promoting change in professional behavior and improving the quality of care within the healthcare system, emphasizing the need for clear, attainable, and evaluable objectives (PUBMED:36420849). Lastly, feedback mechanisms in environmental science are shown to exacerbate air pollution episodes in China, where aerosol-boundary layer feedback leads to increased particulate matter concentrations and reduced boundary layer height (PUBMED:26753788). In summary, feedback is a critical element that influences outcomes in biological systems, medical practices, social relationships, education, and environmental processes.
Instruction: Are 30 minutes of rest between two incremental shuttle walking tests enough for cardiovascular variables and perceived exertion to return to baseline values? Abstracts: abstract_id: PUBMED:25789556 Are 30 minutes of rest between two incremental shuttle walking tests enough for cardiovascular variables and perceived exertion to return to baseline values? Objective: To verify whether 30 minutes of rest between two incremental shuttle walking tests (ISWT) are enough for cardiovascular variables and perceived exertion to return to baseline values in healthy subjects in a broad age range. Method: The maximal exercise capacity of 334 apparently healthy subjects (age ≥ 18) was evaluated using the ISWT. The test was performed twice with 30 minutes of rest in between. Heart rate (HR), arterial blood pressure (ABP), dyspnea, and leg fatigue were evaluated before and after each test. Subjects were allocated to 6 groups according to their age: G1: 18-29 years; G2: 30-39 years; G3: 40-49 years; G4: 50-59 years; G5: 60-69 years and G6: ≥ 70 years. Results: All groups had a good performance in the ISWT (median >90% of the predicted distance). The initial HR (HRi) of the second ISWT was higher than the first ISWT in the total sample (p<0.0001), as well as in all groups (p<0.0001). No difference was observed in the behavior of ABP (systolic and diastolic) and dyspnea between the two tests, but this difference occurred for leg fatigue (greater before the second ISWT) in G1 (p<0.05). Most subjects (58%) performed better in the second test. Conclusion: 30 minutes of rest between two ISWTs are not enough for all cardiovascular variables and perceived exertion to return to baseline values. However, this period appears to be sufficient for blood pressure and performance to recover in most subjects. 
abstract_id: PUBMED:34917386 Walking distance in incremental shuttle walking test as a predictor of physical activity in patients diagnosed with low-risk chronic obstructive pulmonary disease. Objectives: Research on the determinants of physical activity in mildly symptomatic patients with chronic obstructive pulmonary disease is lacking. This study examined the predictors of physical activity in patients with low-risk chronic obstructive pulmonary disease. Methods: A total of 41 male patients with chronic obstructive pulmonary disease belonging to Group A of the Global Initiative for Chronic Obstructive Lung Disease were included. Regarding the objective index, the physical activity (number of steps/day and the amount of Ex (metabolic equivalent × hours)/day) of the participants was measured with a tri-axis accelerometer. In addition, regarding the evaluation index, respiratory function and dynamic lung hyperinflation were measured by a spirometer, skeletal muscle mass was measured using bioelectrical impedance analysis, skeletal muscle strength (grip and lower limb muscle strength) was measured using a dynamometer, exercise capacity was measured by the incremental shuttle walking test, and health-related quality of life was measured. Results: Significant correlations were found between the number of steps per day and age (ρ = -0.501, p < 0.01), forced vital capacity predictive values (ρ = 0.381, p < 0.05), dynamic lung hyperinflation (ρ = 0.454, p < 0.01), grip strength (ρ = 0.318, p < 0.05), and walking distance in incremental shuttle walking test (ρ = 0.779, p < 0.01), but not skeletal muscle mass, lower limb muscle strength, or health-related quality of life. 
A multiple-regression analysis with the number of steps per day as the dependent variable extracted only walking distance in incremental shuttle walking test, yielding a moderate single-regression equation (steps/day = -934.909 + 11.052 × walking distance in incremental shuttle walking test, adjusted R2 = 0.548, p < 0.001). Conclusion: It was suggested that the amount of physical activity of patients with low-risk chronic obstructive pulmonary disease could be predicted by walking distance in incremental shuttle walking test. abstract_id: PUBMED:12831268 Effect of incremental and submaximal constant load tests: protocol on perceived exertion (CR10) values. The purpose of the present investigation was to determine to what extent the protocol (incremental and constant load exercises) affects the ratings of perceived exertion for a given submaximal intensity on a cycle ergometer. 10 healthy and well-trained male students (M age=23 yr., SD=2.9) performed an incremental exercise to determine maximal oxygen uptake (VO2 max), maximal aerobic power, and the ratings of perceived exertion (CR10) corresponding to 70% and 75% VO2 max (CR10(70i) and CR10(75i)). Two 30-min. constant load exercises set at these intensities were performed to establish the corresponding CR10 values. CR10(70i) (5.6 +/- 1.9) and CR10(75i) (6.2 +/- 1.9) were significantly higher than all CR10 values collected at Minutes 5, 10, 15, 20, 25, and 30 of the constant load exercises. These results indicate that, for a given exercise intensity (70% and 75% VO2 max), subjects are rated higher during an incremental exercise than during a 30-min. submaximal constant load test. Therefore, when coaches and physiotherapists want to use CR10 values reported during an incremental standard exercise to set training loads, they have to be prudent since the same relative exercise intensity does not seem to be perceived identically during a submaximal constant exercise as during an incremental one. 
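The single-regression equation reported above for PUBMED:34917386 (steps/day = -934.909 + 11.052 × walking distance in ISWT, adjusted R2 = 0.548) can be expressed as a minimal sketch; the function name and the metre-based input are illustrative assumptions, not part of the study:

```python
def predict_daily_steps(iswt_distance_m: float) -> float:
    """Predicted steps/day from incremental shuttle walking test distance (m),
    using the single-regression equation reported in PUBMED:34917386
    (adjusted R2 = 0.548): steps/day = -934.909 + 11.052 * distance.
    Function name and metre units are illustrative assumptions."""
    return -934.909 + 11.052 * iswt_distance_m

# Example: a patient who walks 500 m in the ISWT
print(round(predict_daily_steps(500.0)))  # about 4591 steps/day
```

Note that the equation was derived in 41 male patients with low-risk COPD, so extrapolating it beyond that group is not supported by the abstract.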
abstract_id: PUBMED:27492301 Application of the 6-Minute Walking Test and Shuttle Walking Test in the Exercise Tests of Patients With COPD Exercise training improves the management of stable chronic obstructive pulmonary disease (COPD). COPD patients benefit from exercise training programs in terms of improved VO2 peak values and decreased dyspnea, fatigue, hospital admissions, and rates of mortality, increasing exercise capacity and health-related quality of life (HRQOL). COPD is often associated with impairment in exercise tolerance. About 51% of patients have a limited capacity for normal activity, which often further degrades exercise capacity, creating a vicious circle. Exercise testing is highly recommended to assess a patient's individualized functions and limitations in order to determine the optimal level of training intensity prior to initiating an exercise-training regimen. The outcomes of exercise testing provide a powerful indicator of prognosis in COPD patients. The six-minute walking test (6MWT) and the incremental shuttle-walking test (ISWT) are widely used in exercise testing to measure a patient's exercise ability by walking distances. While nursing-related articles published in Taiwan frequently cite and use the 6MWT to assess exercise capacity in COPD patients, the ISWT is rarely used. This paper introduces the testing method, strengths and weaknesses, and application of the two tests in order to provide clinical guidelines for assessing the current exercise capacity of COPD patients. abstract_id: PUBMED:26933305 Incremental shuttle walk test: Reference values and predictive equation for healthy Indian adults. Purpose: Physical inactivity in Indians is leading to an increase in noncommunicable disorders at an early age in life. Early identification and quantification of the lack of physical activity using simple and reliable exercise testing is the need of the hour. 
The incremental shuttle walk test (ISWT) is an externally paced walk test widely used for the evaluation of exercise capacity. Currently the normative values available for clinical reference are generated from Western populations. Hence, the study was conducted to find normative values for the ISWT in healthy Indian adults (17-75 years). Materials And Methods: A convenience sample of 862 subjects was recruited after ethical approval was obtained. All subjects were divided into groups as per age and gender. For age, the grouping was as follows: Group 1: Young adulthood (17-40 years), group 2: Middle adulthood (40-65 years), and group 3: Old adulthood (>65 years). The ISWT was performed as per standard protocol by Sally Singh. Results: The average distance walked were 709.2m,556.4m and 441.3m in females and 807.9 m, 639.6 m and 478.2 m in males in the three respective age groups. Stepwise regression analysis revealed age and gender as key variables correlating with incremental shuttle walk distance (ISWD). The derived predictive equations for males and females may be given as follows: 740.351 - (5.676 × age) + (99.007 × gender). Conclusion: Reference values were generated for healthy Indian adults. Physiological response to the ISWT was shown to be affected by gender and increasing age. Easily measurable variables explained 68% of the variance seen in the test, making the reference equation a relevant part of the evaluation of the ISWT. abstract_id: PUBMED:32160828 Respiratory responses and rating of perceived exertion of severely obese adolescents during continuous and intermittent graded walking protocols: Application to cardiorespiratory field tests. During 20 m shuttle tests, obese adolescents may have difficulty achieving maximum cardiorespiratory performance due to the presence of braking-relaunch phases (BRP). 
Nineteen obese adolescents aged 15.2 ± 1.5 years (body mass index [BMI] = 39.7 ± 5.9 kg.m-2) performed three graded walking exercises on a 50 m track at speeds between 3 and 6 km/h: a continuous-straight-line protocol (C), a continuous protocol that required turning back every 30 sec (C-BRP) and an intermittent protocol that consisted of successively walking then resting for 15 sec (15-15). Oxygen uptake (VO2), aerobic cost of walking (Cw), ventilation (VE) and rating of perceived exertion (RPE) were measured at each stage during the protocols. During C-BRP, the responses were not significantly higher compared with C (p > 0.30). During 15-15, the VO2, Cw and VE were ~ 15 to 25% lower than during C beginning at 4 km/h (p < 0.05). In obese adolescents, the respiratory impact of sudden directional changes during the 20 m shuttle-type test appeared to be minor at walking speeds. During the 15-15 test, the intensity increases more progressively, and this design may encourage obese adolescents to walk further than during a continuous test. abstract_id: PUBMED:23602411 Predictors of shuttle walking test performance in patients with cardiovascular disease. Objective: The incremental shuttle walking test (ISWT) is used to estimate cardiorespiratory fitness, but data from healthy individuals suggest that demographic and anthropometric measures account for much of the variance in test performance. The aim of this study was to determine whether anthropometric, demographic and selected gait measures also predict ISWT performance (i.e. distance walked) in patients with cardiovascular disease. Design: Observational study. Setting: A community-based cardiac rehabilitation centre (Cohort 1) and a hospital outpatient cardiac rehabilitation programme (Cohort 2). Participants: Sixteen patients with clinically stable cardiovascular disease (Cohort 1) and 113 patients undergoing cardiac rehabilitation (Cohort 2). Interventions: Patients in Cohort 1 performed the ISWT on two occasions. 
Anthropometric data and walking and turning variables were collected. Linear regression analyses were used to identify the predictors of test performance. The authors subsequently attempted to validate the equation created by comparing predicted and actual ISWT values in a larger (n=113) validation sample (Cohort 2). Main Outcome Measures: Distance walked during ISWT, step length and height. Results: No gait or turning measures were significantly associated with ISWT performance. Distance walked correlated most strongly with step length (r=0.83, P<0.05) and height (r=0.74, P<0.05). Given the similarity of these correlations and the rarity of step length assessment in clinical practice, ISWT performance was predicted using patient's height; this explained 55% of the variance in ISWT performance. Height was also the best predictor in Cohort 2, explaining 17% of test variance (P<0.01). Body mass index explained an additional 3% of variance (P<0.05) in ISWT performance. Conclusions: Routine clinical measures, particularly patient's height, are predictive of ISWT performance. The findings of the present study are in partial agreement with similar studies performed in healthy individuals, and it remains unclear whether the ISWT performance of patients with cardiovascular disease is influenced by the same factors as the ISWT performance of healthy individuals. abstract_id: PUBMED:29953297 Influence of Exogenous β-Hydroxybutyrate on Walking Economy and Rating of Perceived Exertion. This study investigates the effect of a supplementary ketone, β-hydroxybutyrate (BHB), on walking economy and ratings of perceived exertion in apparently healthy individuals. In a repeated-measures, crossover design, ten non-aerobically trained participants (three males; seven females) performed two stages of a duration-modified Bruce treadmill protocol. 
Participants blindly consumed either 1 ounce of an exogenous BHB solution (KETO) or a noncaloric placebo (CON) 30 minutes prior to exercise testing. Blood ketone and glucose concentrations were measured prior to supplementation (baseline), immediately before exercise, and after exercise. Oxygen consumption (VO2), respiratory exchange ratio (RER), energy expenditure (EE), and rating of perceived exertion (RPE) were recorded during the last two minutes of each stage. Blood BHB concentrations were significantly elevated at the pre-exercise and postexercise time points as compared to the CON condition (p < .001), and blood glucose was significantly elevated postexercise in both conditions as compared to baseline levels (p < .001). No significant between-trial differences (p > .05) were found for VO2, RER, EE, or RPE. The intervention of this study did not produce evidence of an ergogenic benefit from BHB supplementation in a healthy subject pool. abstract_id: PUBMED:9453816 Comparison of a shuttle walking test with an exertion test with cycloergometer in patients with COPD To validate a shuttle walking test in a group of patients with COPD, comparing responses to those produced by a conventional stress test limited by symptoms on the cycle ergometer, and to analyze the relation between the shuttle walking test and maximal effort parameters. We enrolled 20 patients with COPD, mean age 60 years (SD 7), FEV1 46.6% (SD 19.2) of theoretical value and FEV1/FVC% 47.3% (SD 11.2). All the patients underwent a maximal effort test on the cycle ergometer, determination of initial dyspnea by applying Mahler's baseline dyspnea index, and a quality of life questionnaire specific to COPD (the Chronic Respiratory Disease Questionnaire). To compare the responses of patients to the two tests, we analyzed heart rate, dyspnea and lactic acid at the end of each test. To see the relations between the two tests, we examined distance walked and maximal effort parameters.
The relation between the walking test and lung function parameters at rest, initial dyspnea and quality of life was also analyzed. No statistically significant differences were found for heart rate [128.2 (SD 19.8) versus 131.6 (SD 12.9)], dyspnea [7.1 (SD 1.78) versus 7.24 (SD 2.64)] or lactic acid [5.24 (SD 2.34) versus 6.19 (SD 2.12)] at the end of the tests. Distance covered on the shuttle walking test correlated significantly with V'O2max (ml/min/kg) (r = 0.71), as well as with the remaining parameters of maximal effort. There was only a slight relation between lung function at rest and quality of life. The COPD patient's cardiovascular, metabolic and subjective responses to the shuttle walking test are similar to those of the cycle ergometer test, and oxygen consumption is significantly correlated. The shuttle walking test gives a valid estimate of the functional capacity of COPD patients. abstract_id: PUBMED:33925075 Evaluation of Prolonged Walking in Persons with Multiple Sclerosis: Reliability of the Spatio-Temporal Walking Variables during the 6-Minute Walk Test. Background: Walking disorders represent the most disabling condition in persons with Multiple Sclerosis (PwMS). Several studies showed good reliability of the 6-min walk test (6MWT) (i.e., especially distance traveled), but little is known about the reliability of the Spatio-temporal (ST) variables in the 6MWT. Objective: To evaluate the test-retest reliability of ST variables and perceived exertion during the 6MWT in PwMS and comparable healthy persons. Methods: We explored three 1-min intervals (initial: 0'-1', middle: 2'30″-3'30″, end: 5'-6') of the 6MWT. Six ST variables and perceived exertion were measured (respectively, using the GAITRite system and the Borg Scale). These measurements were performed twice, 1 week apart. The test-retest effects were assessed using the intraclass correlation coefficient (ICC) or the weighted kappa. Results: Forty-five PwMS and 24 healthy persons were included.
The test-retest reliability of ST variables values was good-to-excellent for PwMS (ICC range: 0.858-0.919) and moderate-to-excellent for healthy persons (ICC range: 0.569-0.946). The test-retest reliability values of perceived exertion were fair for PwMS (weighted kappa range: 0.279-0.376) and substantial for healthy persons (weighted kappa range: 0.734-0.788). Conclusion: The measurement of ST variables during these 6MWT intervals is reliable and applicable in clinical practice and research to adapt rehabilitation care in PwMS. Answer: No, 30 minutes of rest between two incremental shuttle walking tests (ISWT) are not enough for all cardiovascular variables and perceived exertion to return to baseline values. A study involving 334 apparently healthy subjects across a broad age range found that the initial heart rate (HRi) of the second ISWT was higher than the first ISWT in the total sample and in all age groups, indicating incomplete cardiovascular recovery. However, no difference was observed in the behavior of arterial blood pressure (ABP) and dyspnea between the two tests. There was a difference in leg fatigue, which was greater before the second ISWT, particularly in the youngest age group (G1: 18-29 years). Despite the incomplete recovery of some cardiovascular variables and perceived exertion, most subjects (58%) performed better in the second test (PUBMED:25789556).
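The reference equation for healthy Indian adults reported earlier (PUBMED:26933305: ISWD = 740.351 - (5.676 × age) + (99.007 × gender)) can likewise be sketched. The gender coding (male = 1, female = 0) is an assumption here, since the abstract does not state the coding and only reports that males walked further in every age group; the function name is also illustrative:

```python
def predict_iswd_m(age_years: float, male: bool) -> float:
    """Predicted incremental shuttle walk distance (m) for healthy Indian
    adults (17-75 years), from the equation in PUBMED:26933305:
    ISWD = 740.351 - (5.676 * age) + (99.007 * gender).
    Gender coding assumed as male = 1, female = 0 (not stated in the
    abstract); function name is illustrative."""
    gender = 1 if male else 0
    return 740.351 - 5.676 * age_years + 99.007 * gender

# Example: a 30-year-old man
print(round(predict_iswd_m(30, True)))  # about 669 m
```

Because age and gender explained only 68% of the variance in the original sample, individual predictions from this equation carry substantial uncertainty.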
Instruction: Is frequent attendance a persistent characteristic of a patient? Abstracts: abstract_id: PUBMED:35386909 Factors Contributing to Persistent Frequent Attendance in Primary Care Among the Oldest Old: Longitudinal Evidence From the AgeCoDe-AgeQualiDe Study. Objective: Since there is a lack of longitudinal studies in this area, our aim was to identify the determinants of persistent frequent attendance in primary care among the oldest old in Germany. Methods: Longitudinal data (follow-up wave 7-9) were taken from the multicenter prospective cohort "Study on needs, health service use, costs, and health-related quality of life in a large sample of oldest-old primary care patients (85+)" (AgeQualiDe), covering primary care patients ≥ 85 years (FU7 n = 741, mean age 88.9 years (SD 2.9; 85-100)). Persistent frequent attenders of general practitioner (GP) services (the patients in the top decile of the number of GP consultations in two or more consecutive waves) were our main outcome of interest. Logistic random-effects models were used. Results: Our analysis included 1,891 observations (766 individuals). Across three waves, we identified 56 persistent frequent attenders. Results of random-effects logistic regressions showed that the odds of being persistent frequent attender were higher for widowed individuals (OR = 4.57; 95% CI [1.07-19.45]). Moreover, a one-point increase in the frailty score and having one more chronic condition increased the odds of being a persistent frequent attender by 68% (OR =1.68; 95% CI [1.05-2.69]) and 23% (OR=1.23, 95% CI [1.05-1.44]), respectively. Conclusion: Our study stressed the longitudinal association between frailty and widowhood as well as chronic diseases and persistent frequent attendance among the oldest old in Germany. abstract_id: PUBMED:30477466 Comparing occasional and persistent frequent attenders in occupational health primary care - a longitudinal study. 
Background: The aim of the study was to compare occasional and persistent frequent attenders in occupational health (OH) primary care and to identify the diagnoses associated with persisting frequent attendance. Methods: This is a longitudinal study using electronic medical record data from 2014 to 2016 from an OH service provider. Frequent attenders were defined as patients in the top decile of annual visits to healthcare professionals (frequent attender 10%, FA10). FA10 were categorized to three groups according to the persistence of frequent attendance (1-year-FA, 2 year-FA, and persistent-FA = frequent attenders in all three years). This was used as the dependent variable. We used patient sex, age, employer size, industry and distribution of visits and diagnostic codes to characterize the different frequent attender groups. Results: In total, 66,831 patients were included, of which 592 persistent frequent attenders (0.9% of the study population) consulted the OH unit on average 13 times a year. They made altogether 23,797 visits during the study years. The proportion of women and employees of medium and large employers increased among persistent-FAs when compared to the other groups. Multinomial logistic regression accentuated musculoskeletal disorders and to a lesser extent diseases of the respiratory and nervous system and mental disorders. One in five FA becomes a persistent-FA. Conclusions: Our results indicate that in the context of a working population the association of musculoskeletal disorders and persistent frequent attendance is emphasized. Persistent frequent attenders also create a substantial demand on physician resources. When planning interventions aimed at working age frequent attenders, subgroups suffering from musculoskeletal disorders should be identified as they are associated with persisting frequent attendance. abstract_id: PUBMED:15255488 Is frequent attendance a persistent characteristic of a patient? 
Repeat studies of attendance pattern at the family practitioner. Objective: To assess the extent of frequent attendance as a persistent characteristic of patients by repeat studies of attendance at a health centre. Design: A follow-up of frequent attenders and comparison patients in 1991 among frequent attenders in 1996. Setting: Mariehem health centre in Umeå in northern Sweden providing care for 10,500 and 12,000 inhabitants in 1991 and 1996, respectively. Subjects: Frequent attenders, defined as patients who had at least 5 consultations with physicians during 1 year at the health centre, compared to attenders who had between 1 and 4 consultations in 1991 and 1996. Results: The proportion of people who lived in the area and consulted a doctor at the health centre at least once during 1 year increased from 40.0% in 1991 to 45.2% in 1996. The number of frequent attenders increased from 179 to 303 and they took 15% and 20% of all consultations in 1991 and 1996, respectively. Twenty-five patients (21 females and 4 males) were identified as frequent attenders in both years. Conclusion: With the exception of a small group of patients, mostly females, frequent attendance seems not to be a persistent characteristic of patients. abstract_id: PUBMED:33634146 Determinants of Frequent Attendance in Primary Care. A Systematic Review of Longitudinal Studies. Introduction: There is a lack of a systematic review synthesizing longitudinal studies investigating the determinants of frequent attendance in primary care. The goal of our systematic review was to fill this gap in knowledge. Methods: Three electronic databases (Medline, PsycINFO, and CINAHL) were searched. Longitudinal observational studies analyzing the predictors of frequent attendance in primary care were included. Data extraction covered methods, sample characteristics, and main findings. Selection of the studies, data extraction and evaluation of study quality were performed by two reviewers.
In the results section, the determinants of frequent attendance were presented based on the (extended) Andersen model. Results: In total, 11 longitudinal studies have been included in our systematic review. The majority of studies showed that frequent attendance was positively associated with the predisposing characteristics lower age, and unemployment. Moreover, it was mainly not associated with enabling resources. Most of the studies showed that need factors, and in particular worse self-rated health, lower physical functioning and physical illnesses were associated with an increased likelihood of frequent attendance. While most studies were of good quality, several of the included studies did not perform sensitivity analysis or described how they dealt with missing data. Discussion: Our systematic review showed that particularly lower age, unemployment and need factors are associated with the likelihood of becoming a frequent attender. Enabling resources are mainly not associated with the outcome measure. Future research should concentrate on the determinants of persistent frequent attendance due to the high economic burden associated with it. abstract_id: PUBMED:32557095 What is the prevalence of frequent attendance to emergency departments and what is the impact on emergency department utilisation? A systematic review and meta-analysis. Patients who frequently attend to emergency departments are a varying group and have complex health care needs. This systematic review and meta-analysis aimed to determine the prevalence of patients who have frequent attendance to emergency departments. A systematic review was performed in line with PRISMA guidelines. A database search was conducted, and studies were included in the final review if they analysed a population of frequent attendance. Meta-analysis was performed only on population-based studies to estimate prevalence. 
The search yielded 2922 nonduplicate publications, of which 27 were included in the meta-analysis. The most common definition used for frequent attendance was greater than three presentations a year. The proportion of people who frequently attended as a percentage of the total study population ranged from 0.01 to 20.9%, with emergency department presentations from frequent attenders ranging from 0.2 to 34%. When limiting the definition of frequent attendance to greater than three visits in a 12-month period, people who frequently attended made up between 3 and 10% [pooled estimate 6%; CI 4-7%] of patients presenting to emergency departments and accounted for between 12 and 34% [pooled estimate 21%; CI 15-27%] of total emergency department presentations. Meta-analysis found substantial heterogeneity between estimates [I2 > 50%]. The prevalence of frequent attendance compared to the total population of patients seeking emergency care was small, but the impact on emergency department utilisation is significant. Early identification of people attending for frequent care at an emergency department provides the opportunity to implement alternative models of care. abstract_id: PUBMED:34772360 Risk factors for the frequent attendance of older patients at community health service centers in China: a cross-sectional study based on stratified sampling. Background: This study aimed to investigate the characteristics of frequent attenders (FAs) among older patients in Shanghai, China, and explore the associated factors. Methods: This cross-sectional study was conducted in six community health service centers in Shanghai, China, from August to December 2018 based on stratified sampling. On the basis of our preliminary study, FAs were defined as those attending at least four consultations in a month. A self-administered questionnaire was used to collect the clinicodemographic data of the participants.
Social support, pain severity, depression, and anxiety were evaluated using the Social Support Revalued Scale, six-point Behavioral Rating Scale, Patient Health Questionnaire-9, and Generalized Anxiety Disorder Scale, respectively. Results: This study included 619 patients aged > 60 years. Among these patients, 155 (25%) were FAs to a community health service center, 366 (59.1%) had 1 or 2 chronic diseases, 179 (28.9%) had ≥3 chronic diseases, 313 (50.4%) opted for a family doctor service, and 537 (86.8%) chose a community health service center for the first consultation. The following were identified as independent risk factors for frequent attendance: widowed status, unmarried status, the presence of > 3 chronic diseases, first consultation at a community health service center, high medical expenses, frequent attendance of the spouses, long-term medication, the use of both traditional Chinese and Western medicine services, and depression. Conclusions: This study summarizes the characteristics of older FAs to community health service centers in China and identifies 10 risk factors significantly associated with frequent attendance. abstract_id: PUBMED:32466103 Determinants of Frequent Attendance in Primary Care. Study Protocol for a Systematic Review of Longitudinal Studies. Thus far, no study has systematically synthesized longitudinal studies investigating the determinants of frequent attendance in primary care. Consequently, the purpose of our systematic review is to give an overview of evidence based on longitudinal observational studies analyzing the determinants of frequent attendance. Three electronic databases (Medline, PsycINFO, CINAHL) will be searched. Moreover, the reference lists of studies included in our systematic review will be searched manually. Longitudinal observational studies examining the determinants of frequent attendance in primary care will be included. Disease-specific samples will be excluded. 
Data extraction focuses on methods (e.g., measurement of frequent attendance, statistical analysis), characteristics of the sample and key results. Furthermore, the quality of the studies included will be examined using an appropriate tool. Two reviewers will perform study selection, data extraction, and quality assessment. A meta-analysis will be conducted (if possible). abstract_id: PUBMED:26450918 Clinical characteristics of persistent frequent attenders in primary care: case-control study. Background: Most frequent attendance in primary care is temporary, but persistent frequent attendance is expensive and may be suitable for psychological intervention. To plan appropriate intervention and service delivery, there is a need for research involving standardized psychiatric interviews with assessment of physical health and health status. Objective: To compare the mental and physical health characteristics and health status of persistent frequent attenders (FAs) in primary care, currently and over the preceding 2 years, with normal attenders (NAs) matched by age, gender and general practice. Methods: Case-control study of 71 FAs (30 or more GP or practice nurse consultations in 2 years) and 71 NAs, drawn from five primary care practices, employing standardized psychiatric interview, quality of life, health anxiety and primary care electronic record review over the preceding 2 years. Results: Compared to NAs, FAs were more likely to report a lower quality of life (P < 0.001), be unmarried (P = 0.03) and have no educational qualifications (P = 0.009) but did not differ in employment status. FAs experienced greater health anxiety (P < 0.001), morbid obesity (P = 0.02), pain (P < 0.001) and long-term pathological and ill-defined physical conditions (P < 0.001). FAs had more depression including dysthymia, anxiety and somatoform disorders (all P < 0.001). 
Conclusions: Persistent frequent attendance in primary care was associated with poor quality of life and high clinical complexity characterized by diverse and often persistent physical and mental multimorbidity. A brokerage model with GPs working in close liaison with skilled psychological therapists is required to manage such persistent complexity. abstract_id: PUBMED:19192367 Predictability of persistent frequent attendance: a historic 3-year cohort study. Background: Few patients who attend GP consultations frequently continue to do so long term. While transient frequent attendance may be readily explicable, persistent frequent attendance often is not. It increases GPs' workload while reducing work satisfaction. It is neither reasonable, nor efficient to target diagnostic assessment and intervention at transient frequent attenders. Aim: To develop a prediction rule for selecting persistent frequent attenders, using readily available information from GPs' electronic medical records. Design Of Study: A historic 3-year cohort study. Method: Data of 28 860 adult patients from 2003 to 2005 were examined. Frequent attenders were patients whose attendance rate ranked in the (age- and sex-adjusted) top 10% during 1 year (1-year frequent attenders) or 3 years (persistent frequent attenders). Bootstrapped multivariable logistic regression analysis was used to determine which predictors contained information on persistent frequent attendance. Results: Of 3045 1-year frequent attenders, 470 (15.4%) became persistent frequent attenders. The prediction rule could update this prior probability to 3.3% (lowest value) or 43.3% (highest value). However, the 10th and 90th centiles of the posterior probability distribution were 7.4% and 26.3% respectively, indicating that the model performs modestly. The area under the receiver operating characteristic curve was 0.67 (95% confidence limits 0.64 and 0.69). 
Conclusion: Among 1-year frequent attenders, six out of seven are transient frequent attenders. With the present indicators, the rule developed performs modestly in selecting those more likely to become persistent frequent attenders. abstract_id: PUBMED:34109878 How GPs can Recognize Persistent Frequent Attenders at Finnish Primary Health Care Using Electronic Patient Records. Introduction: The proportion of patients who are frequent attenders (FAs) varies from a few percent to almost 30% of all patients. A small group of patients continues to visit GPs year after year. In previous studies, it has been reported that over 15% of all 1-year FAs were persistent frequent attenders (pFAs). Objectives: This study aimed to identify typical features of pFAs from the textual content of their medical entries, which could help GPs to recognize pFAs easily and facilitate treatment. Methods: A retrospective register study was done, using 10 years of electronic patient records. The data were collected from Finnish primary health care centers and used to analyze chronic symptoms and diagnoses of pFAs and to calculate the inverse document frequency weight (IDF) of words used in the patient records. IDF was used to determine which words, if any, are typical for pFAs. The study group consisted of the 5-year pFAs and the control group of 1-year FAs. The main background variables were age, gender, occupation, smoking habits, use of alcohol, and BMI. Results: Out of 4392 frequent attenders, 6.6% were pFAs for 3 years and 1.1% were pFAs for 5 years. Of the pFAs, 65% were female and 35% were male. The study group had significantly more depressive episodes (P = .004), heart failure (P = .019), asthma (P = .032), COPD (P = .036), epilepsy (P = .035), and lumbago (P = .046) compared to the control group. GPs described their 5-year pFAs with words related to lung and breathing issues, but there was no statistical difference from the descriptions of 1-year FAs.
Conclusion: A typical pFA seems to be a woman, aged about 55 years with depressive episodes, asthma or COPD, and lower back pain. Physicians describe pFAs with ordinary words in patient records. It was not possible to differentiate pFAs from 1-year FAs in this way. Answer: The persistence of frequent attendance in primary care varies among patients. Some studies suggest that frequent attendance is not a persistent characteristic for most patients. For instance, a study conducted at a health center in Sweden found that with the exception of a small group of patients, mostly females, frequent attendance does not seem to be a persistent characteristic of patients (PUBMED:15255488). Similarly, a historic 3-year cohort study indicated that among 1-year frequent attenders, six out of seven are transient frequent attenders, and the prediction rule developed performs modestly in selecting those more likely to become persistent frequent attenders (PUBMED:19192367). However, other studies have identified factors that contribute to persistent frequent attendance. A longitudinal study from the AgeCoDe-AgeQualiDe Study in Germany stressed the longitudinal association between frailty, widowhood, and chronic diseases with persistent frequent attendance among the oldest old (PUBMED:35386909). Another study in the context of a working population highlighted the association of musculoskeletal disorders with persistent frequent attendance (PUBMED:30477466). Additionally, a systematic review of longitudinal studies found that lower age, unemployment, and need factors are associated with the likelihood of becoming a frequent attender (PUBMED:33634146). In the context of older patients in China, risk factors such as widowed status, the presence of multiple chronic diseases, high medical expenses, and depression were identified as significantly associated with frequent attendance (PUBMED:34772360). 
A Finnish study found that a typical persistent frequent attender (pFA) seems to be a woman, aged about 55 years with depressive episodes, asthma or COPD, and lower back pain (PUBMED:34109878). In summary, while frequent attendance may not be a persistent characteristic for the majority of patients, certain factors such as age, health conditions, marital status, and socioeconomic factors can contribute to the persistence of frequent attendance in primary care.
Instruction: Adjuvant radiotherapy after mastectomy for pT1-pT2 node negative (pN0) breast cancer: is it worth the effort? Abstracts: abstract_id: PUBMED:13129629 Adjuvant radiotherapy after mastectomy for pT1-pT2 node negative (pN0) breast cancer: is it worth the effort? Purpose: Postoperative radiotherapy (RT) for pT1-2 pN0 breast cancer was the standard treatment in our department. Since few data exist on the importance of RT in this subgroup, we reviewed the clinical records with regard to overall survival. Material And Methods: From 1984 until 2000, 1789 files were submitted to retrospective analyses; 731 had a pT1 (n=427) or pT2 (n=304) pN0 lesion. They were treated with breast conserving surgery (BCS) (n=343) or mastectomy (ME) (n=388), axillary lymph node dissection (ALND) and post-operative RT. The outcome was analyzed and compared with the patients included in the SEER-Data 1988-1997 (NCI-Surveillance, Epidemiology and End Results, release 2000) that were treated according to the standard treatment: BCS+ALND+RT, or ME+ALND without RT. Results: The actuarial overall survival (OS) at 5 and 10 years after BCS was 93.3% and 85.1% for pT1 and 88.3% and 75.4% for pT2 tumors. These results are comparable with the SEER data (93.9% and 84.9% for pT1, and 87.3% and 76.7% for pT2, respectively). For our ME patients the OS was 91.8% and 79.9% (pT1 at 5 and 10 years) and 83.6% and 70.4% (pT2 at 5 and 10 years). In the SEER data the analyses resulted in 89.3% and 73.8% (pT1), and 81.1% and 63.5% (pT2), respectively. Discussion: Although both databases are retrospective, the comparable survival in BCS patients argues for the similarity of the two populations. The better OS observed in ME patients treated with RT compared to the SEER patients argues in favor of a benefit due to adjuvant radiotherapy. Conclusion: Radiotherapy after mastectomy might improve survival in low-risk node negative patients.
Our data show an absolute benefit of between 2.5% and 6.9% OS in favor of post-ME radiotherapy, compared to the SEER data. abstract_id: PUBMED:31153730 Breast cancer hormone receptor negativity, triple-negative type, mastectomy and not receiving adjuvant radiotherapy were associated with axillary recurrence after sentinel lymph node biopsy. Background: Axillary lymph node dissection (ALND) can be avoided in selected patients with positive sentinel lymph nodes (SLNs). However, regional lymph node recurrence may occur after SLN biopsy (SLNB). This study aimed to identify the risk factors for axillary recurrence to ensure safe axillary surgery. Methods: Between June 2004 and December 2017, a total of 1056 women underwent SLNB without ALND. Patient data were prospectively entered into the breast cancer database at Kawaguchi Municipal Medical Center. From October 2012, we did not perform ALND in patients with (a) 1 or 2 positive SLNs, (b) positive SLNs that were unmatted or did not show gross extranodal extension, (c) a clinical tumor size ≤5 cm, and (d) in those who received adjuvant endocrine therapy or chemotherapy and radiotherapy. Subsequently, appropriate adjuvant systemic and/or radiation treatment was administered. Results: Of the 1056 patients included, 996 had negative SLNs, 49 had positive SLNs, and 11 had undetectable SLNs. The identification rate for SLNs was 99.0%, and the median number of removed SLNs was 2. During the median 6.9-year follow-up period, 10 patients (1.0%) showed axillary recurrence without SLN metastasis. Hormone receptor negativity (p < 0.01), triple-negative type (p = 0.047), mastectomy (p < 0.01) and not receiving adjuvant radiotherapy (p < 0.01) were significantly related to axillary recurrence. Conclusion: ALND can be safely avoided in selected patients with early breast cancer.
Patients with hormone receptor-negative tumors (especially triple-negative breast cancer), those who underwent mastectomy without ALND, and those who did not receive adjuvant radiotherapy should be followed up carefully. abstract_id: PUBMED:26018878 Prognostic role of adjuvant radiotherapy in triple-negative breast cancer: A historical cohort study. The value of adjuvant radiotherapy in triple-negative breast cancer (TNBC) is currently debated. We assessed the association between adjuvant radiotherapy and survival in a large cohort of Asian women with TNBC. Women diagnosed with TNBC from 2006 to 2011 in five Asian centers (N = 1,138) were included. Survival between patients receiving mastectomy only, breast-conserving therapy (BCT, lumpectomy and adjuvant radiotherapy) and mastectomy with radiotherapy was compared, and adjusted for demography, tumor characteristics and chemotherapy types. Median age at diagnosis was 53 years (range: 23-96 years). Median tumor size at diagnosis was 2.5 cm and most patients had lymph node-negative disease. The majority of patients received adjuvant chemotherapy (n = 861, 76%), comprising predominantly anthracycline-based regimens. In 775 women with T1-2, N0-1, M0 TNBCs, the 5-year relative survival ratio (RSR) was highest in patients undergoing mastectomy only (94.7%, 95% CI: 88.8-98.8%), followed by BCT (90.8%, 95% CI: 85.0-94.7%), and mastectomy with radiotherapy (82.3%, 95% CI: 73.4-88.1%). The adjusted risks of mortality between the three groups were not significantly different. In 363 patients with T3-4, N2-3, M0 TNBCs, BCT was associated with the highest 5-year RSR (94.1%, 95% CI: 81.3-99.4%), followed by mastectomy with radiotherapy (62.7%, 95% CI: 54.3-70.1%), and mastectomy only (58.6%, 95% CI: 43.5-71.6%). Following multivariable adjustment, BCT and mastectomy with radiotherapy remained significantly associated with lower mortality risk compared to mastectomy only.
Overall, adjuvant radiotherapy was associated with higher survival in women aged <40 years, but not in older women. Adjuvant radiotherapy appears to be independently associated with a survival gain in locally advanced as well as in very young TNBC. abstract_id: PUBMED:18996609 Radiotherapy to the chest wall following mastectomy for node-negative breast cancer: a systematic review. Background: Although nodal status is the major determinant of risk of locoregional relapse (LRR), other factors also contribute, and these assume a greater significance for those with node-negative breast cancer. Previous reviews of post-mastectomy radiotherapy have included studies using radiotherapy techniques or doses no longer considered clinically appropriate. Objectives: To determine the effectiveness of post-mastectomy radiotherapy in women with node-negative breast cancer with particular reference to those patient and tumour factors which contribute to an increased risk of LRR. Methods: A systematic literature review was conducted. Trials using inadequate or orthovoltage radiotherapy were excluded. Data linking potential risk factors, either individually or in combination, to the occurrence of LRR were handled qualitatively. Data from randomised trials of post-mastectomy radiotherapy were included in a meta-analysis. Results: Baseline risk of LRR is increased in the presence of lymphovascular invasion, a grade 3 tumour, tumours greater than 2 cm or a close resection margin and in patients who are pre-menopausal or aged less than 50. Those with no risk factors have a baseline risk of LRR of approximately 5% or less rising to a risk of 15% or more for those with two or more risk factors. In the meta-analysis of three randomised trials of mastectomy and axillary clearance (667 patients), the addition of radiotherapy resulted in an 83% reduction in the risk of LRR (P < 0.00001) and in a 14% improvement in survival (P = 0.16). 
Conclusion: The use of post-mastectomy radiotherapy for women with node-negative breast cancer requires re-evaluation. Radiotherapy should be considered for those with two or more risk factors. abstract_id: PUBMED:7917371 Mastectomy only versus radical mastectomy and postoperative radiotherapy in node negative, resectable breast cancer. A randomized trial. Patients with clinically node negative resectable breast cancer were randomized to either mastectomy only or radical mastectomy and radiotherapy, and followed for 15-20 years. During follow-up, axillary metastases occurred with the same frequency after mastectomy as was initially observed in the group that underwent axillary dissection. There was no significant survival difference between the two groups. After adjusting the treatment effect for differences in age, tumour size, lymph node metastases, and histology, the outcome after radical mastectomy plus irradiation was significantly inferior. In this study, too, comedo carcinoma proved to carry a poor prognosis. abstract_id: PUBMED:14101448 ADJUVANT CHEMOTHERAPY (TRIETHYLENE THIOPHOSPHORAMIDE) WITH RADICAL MASTECTOMY AND RADIOTHERAPY IN BREAST CANCER. Triethylene thiophosphoramide (Thio-TEPA) as an adjuvant to radical mastectomy for the treatment of carcinoma of the breast has been shown to be effective in reducing tumour recurrences. In many centres radiotherapy is considered valuable in breast cancer treatment, and in the minds of many there has existed the question of the safety of giving triethylene thiophosphoramide, a radiomimetic drug, at the time of surgery to the patient destined to receive post-operative radiotherapy. Fears had been expressed that the additive effects of triethylene thiophosphoramide and radiotherapy would result in serious complications and preclude the use of this valuable drug. A total of 70 patients with breast cancer were treated by radical mastectomy and administration of triethylene thiophosphoramide.
Most patients received postoperative radiotherapy. Results of this study showed that the benefit of adjuvant chemotherapy with triethylene thiophosphoramide need not be withheld and that the drug may be administered with safety to the patient with breast carcinoma for whom radical mastectomy and postoperative radiotherapy are planned. abstract_id: PUBMED:22525383 Sentinel lymph node biopsy before mastectomy and immediate breast reconstruction may predict post-mastectomy radiotherapy, reduce delayed complications and improve the choice of reconstruction. Background: Adjuvant post-mastectomy radiotherapy (RT), which is often unpredicted, is known to increase complications following immediate breast reconstruction (IBR). Aim: To investigate the role of sentinel lymph node biopsy (SLN) in predicting RT and improving the choice of IBR. Patients And Methods: All patients who had mastectomy and IBR between January 2004 and January 2007 were reviewed retrospectively. Axillary staging (clearance or SLN) was performed at the same time until October 2005 (Group 1), when the Unit's protocol was updated to perform SLN initially prior to mastectomy and IBR (Group 2). Patients in Group 2 with positive SLN were offered either a delayed reconstruction or a temporary subpectoral immediate tissue expander, while all options were offered if SLN was negative and in Group 1 patients. Results: One hundred and thirty-nine patients were reviewed. 20 patients received unexpected RT in Group 1 (14 tissue expander, 4 Latissimus Dorsi flap with an implant and 2 DIEP flaps) compared to 11 patients in Group 2 who had a temporary tissue expander due to expected RT (P=0.03). Unexpected RT caused delayed complications in 14 patients (70%) compared to no delayed complications in patients who received expected RT in Group 2. Conclusion: SLN biopsy before IBR helps to predict RT and avoids its complications on breast reconstruction. 
Patients with positive SLN biopsy are best offered a temporary subpectoral tissue expander for IBR. abstract_id: PUBMED:37929785 Repeat sentinel lymph node surgery for locally recurrent breast cancer after prior mastectomy. Background And Objectives: Current NCCN guidelines discourage repeat sentinel lymph node (SLN) surgery in patients with local recurrence (LR) of breast cancer following prior mastectomy. This study addresses the feasibility and therapeutic impact of this approach. Methods: We identified 73 patients managed with repeat SLN surgery for post-mastectomy isolated LR. Lymphatic mapping was performed using radioisotope with or without lymphoscintigraphy and/or blue dye. Successful SLN surgery was defined as retrieval of ≥1 SLN. Results: SLN surgery was successful in 65/73 (89%), identifying a median of 2 (range 1-4) SLNs, with 10/65 (15%) SLN-positive. Among these, 5/10 (50%) proceeded to ALND. In unsuccessful cases, 1/8 (13%) proceeded to ALND. Seven of 10 SLN-positive patients and 50/55 SLN-negative patients received adjuvant radiotherapy. Chemotherapy was administered in 31 (42%) and endocrine therapy in 50 of 57 HR+ patients (88%). After 28 months median follow-up, eight patients relapsed with the first site local in two, distant in five, and synchronous local/distant in one. No nodal recurrences were observed. Conclusions: SLN surgery for patients with LR post-mastectomy is feasible and informative. This approach appears oncologically sound, decreases axillary dissection rates and may be used to tailor adjuvant radiation target volumes and systemic therapies. abstract_id: PUBMED:24668644 Clinical activity of adjuvant cytokine-induced killer cell immunotherapy in patients with post-mastectomy triple-negative breast cancer. Purpose: Triple-negative breast cancer (TNBC) is a high risk form of this disease, even after surgery, due to the absence of targets for hormone treatment and anti-Her-2 therapy. 
Chemotherapy is the main therapeutic strategy for such patients with breast cancer, although the outcome is often unsatisfactory. Thus, the development of combination adjuvant therapies is essential for improved prognosis in patients with TNBC. In this study, we investigated the efficacy of a sequential combination of cytokine-induced killer cell (CIK) infusion and chemotherapy for patients with post-mastectomy TNBC. Experimental Design: From 2008 to 2012, 90 patients with post-mastectomy TNBC were included in this retrospective study: 45 cases received chemotherapy alone or with sequential radiotherapy; a further 45 cases received chemotherapy with/without radiotherapy and sequential CIK infusion. Results: Survival analysis showed significantly higher disease-free survival (DFS) and overall survival (OS) rates in the CIK treatment group compared with the control group (P = 0.0382, P = 0.0046, respectively; log-rank test). Multivariate survival analysis showed that CIK adjuvant treatment was an independent prognostic factor for OS of patients with TNBC. In subgroup analyses, CIK adjuvant treatment significantly increased the DFS rate of patients with pathologic grade 3, and significantly increased the OS rate of patients in N1, N2, N3, IIB, III TNM (tumor-node-metastasis) stages, and with pathologic grade 3. Conclusions: These data indicate that adjuvant CIK treatment combined with chemotherapy is an effective therapeutic strategy to prevent disease recurrence and prolong survival of patients with TNBC, particularly those with lymph node metastasis, advanced TNM stage, and poor pathologic grade. Clin Cancer Res; 20(11); 3003-11. ©2014 AACR. abstract_id: PUBMED:23773780 Pre-mastectomy sentinel lymph node biopsy: a strategy to enhance outcomes in immediate breast reconstruction. The pre-mastectomy sentinel lymph node biopsy (PM-SLNB) is a technique that provides knowledge regarding nodal status prior to mastectomy. 
Because radiation exposure is associated with poor outcomes in breast reconstruction and reconstructed breasts can interfere with the planning and delivery of radiation therapy (RT), information regarding nodal status has important implications for patients who desire immediate breast reconstruction. This study explores the safety and utility of PM-SLNB as part of the treatment strategy for breast cancer patients desiring immediate reconstruction. We reviewed the charts of adult patients (≥18 years old) who underwent PM-SLNB from January 2004 to January 2011 at our institution. PM-SLNB was offered to patients with stage I or IIa, clinically and/or radiographically node-negative breast cancer who desired immediate breast reconstruction following mastectomy. PM-SLNB was also offered to patients with ductal carcinoma in situ if features concerning for invasive carcinoma were present. Ninety-one patients underwent PM-SLNB of 94 axillae. PM-SLNB was positive in 25.5% of breasts (n = 24). Nineteen node-positive patients (79.2%) have undergone or planning to undergo delayed reconstruction at our institution. Seventeen of these 19 node-positive patients (89.5%) have received adjuvant RT. Two patients (10.5%) elected against RT despite our recommendation for it. No biopsy-positive patient underwent immediate reconstruction or suffered a radiation-induced complication with their breast reconstruction. There were two minor complications associated with PM-SLNB, both in node-negative patients. This study demonstrates the utility of PM-SLNB in providing information regarding nodal status, and therefore the need for adjuvant RT, prior to mastectomy. This knowledge can be used to appropriately counsel patients regarding optimal timing of breast reconstruction. Answer: The question of whether adjuvant radiotherapy after mastectomy for pT1-pT2 node-negative (pN0) breast cancer is worth the effort has been explored in several studies. 
According to the data presented in the abstracts, there are arguments in favor of adjuvant radiotherapy in certain cases. One study suggests that adjuvant radiotherapy might improve survival in low-risk node-negative patients, showing an absolute benefit of between 2.5% and 6.9% in overall survival (OS) in favor of post-mastectomy radiotherapy compared to the SEER data (PUBMED:13129629). Another study found that hormone receptor negativity, triple-negative type, mastectomy, and not receiving adjuvant radiotherapy were significantly related to axillary recurrence after sentinel lymph node biopsy (PUBMED:31153730). This indicates that adjuvant radiotherapy could potentially reduce the risk of axillary recurrence, particularly in patients with more aggressive tumor types like triple-negative breast cancer. Furthermore, a historical cohort study on triple-negative breast cancer (TNBC) patients showed that adjuvant radiotherapy was associated with higher survival in women aged under 40 years and in those with locally advanced disease (PUBMED:26018878). A systematic review also concluded that post-mastectomy radiotherapy should be considered for node-negative breast cancer patients with two or more risk factors, as it resulted in an 83% reduction in the risk of locoregional relapse (LRR) and a 14% improvement in survival (PUBMED:18996609). However, a randomized trial comparing mastectomy only versus radical mastectomy and postoperative radiotherapy in node-negative, resectable breast cancer found no significant survival difference between the two groups (PUBMED:7917371). This suggests that the benefit of adjuvant radiotherapy may not be universal for all node-negative patients. 
In conclusion, while adjuvant radiotherapy after mastectomy for pT1-pT2 node-negative breast cancer may not be necessary for all patients, it appears to offer a survival benefit in certain subgroups, such as those with aggressive tumor types, younger patients, and those with additional risk factors for LRR. Therefore, the decision to use adjuvant radiotherapy should be individualized based on the patient's risk profile and tumor characteristics.
Instruction: Is gastroscopy still a valid diagnostic tool in detecting gastric MALT lymphomas? Abstracts: abstract_id: PUBMED:12404054 Is gastroscopy still a valid diagnostic tool in detecting gastric MALT lymphomas? A dilemma beyond the eye. Mucosa-associated lymphoid tissue. Background: Primary gastric lymphomas are distinct entities with an indolent clinical course and biologic behavior. They account for 2-8% of all gastric malignancies. We conducted this study to evaluate the role of gastroscopy as the principal diagnostic method in detecting gastric MALT lymphomas. Methods: Sixty-three consecutive patients with gastric MALT lymphomas, who were evaluated and treated at our institution between January 1978 and December 1997, were retrospectively reviewed. There were 36 males and 27 females, with a mean age of 53 years (range 20-80 years). All patients underwent the standard diagnostic evaluation, including gastroscopy and biopsy. Patients were staged according to the revised Musshof modification of the Ann Arbor classification system, whereas histological evaluation was made according to the Isaacson classification system for gastric MALT lymphomas. Results: According to endoscopic findings, the antrum harbored the neoplasm in 31 patients (49%), the body in 38% (24 patients), and the fundus in 4% (3 patients), whereas in 5 patients (8%) the neoplasm occupied the entire stomach. In most cases the macroscopic appearance was not pathognomonic of the disease. Three macroscopic patterns were recognized at endoscopy: (a) the ulcerative in 51% (32 patients), (b) the polypoid in 33% (21 patients) and (c) the diffuse infiltrative in 16% (10 patients). The neoplasm was characterized as a benign disease in 24 patients (38%), with malignancy being suspected in 62% (39 patients). Endoscopy displayed a sensitivity of 61% in detecting malignancy; however, the sensitivity dropped to 27% when an endoscopic diagnosis of non-Hodgkin's lymphoma was hypothesized.
Conclusions: Although the value of gastroscopy as a diagnostic tool was limited in the patients of our study, owing to the nonspecific gross pattern of gastric MALT lymphomas, it should be maintained in the diagnostic work-up of upper GI tract pathologies. abstract_id: PUBMED:19174093 Analysis of the clinical benefits and cost-effectiveness of performing a systematic second-look gastroscopy in benign gastric ulcer Introduction: We analyzed the need to routinely perform a second gastroscopy after an initial diagnosis of benign gastric ulcer. Method: A total of 226 consecutive cases of gastric ulcer were reviewed. Sensitivity (S), specificity (Sp), positive and negative predictive value (PPV and NPV) and the accuracy of the initial gastroscopy plus biopsy were analyzed, both overall and according to the initial endoscopist's experience (attending or resident physician). The diagnostic accuracy of the initial and second-look gastroscopies was compared. The number of second endoscopies required to diagnose a new case of malignant gastric ulcer and their cost was calculated, both overall and according to the endoscopist's experience. Results: There were 178 benign ulcers (79%) and 48 malignant ulcers (21%). The initial gastroscopy (S: 87.2%; Sp: 100%; PPV: 100%; NPV: 96.7%; accuracy: 96.7%) was performed by an attending physician in 74% of the patients and by a resident physician in the remaining 26%. Diagnostic accuracy was higher for attending physicians than for residents (98.2% vs. 94.8%; p=0.18). The accuracy of second-look endoscopy was 100%, with a significant improvement when compared with the initial procedure (p=0.035). Three new cases of MALT lymphoma and three new cases of gastric adenocarcinoma were diagnosed and could be treated with curative intent.
The number of second gastroscopies required to diagnose a new case of malignant gastric ulcer and their economic cost was: 37.3 (4,675 Euros) for the whole group, 55.2 (6,845 Euros) for attending physicians and 19.3 (2,393 Euros) for residents. Conclusions: Initial gastroscopy showed high diagnostic accuracy, which was slightly lower when performed by resident physicians. Second-look gastroscopy significantly improved the results, confirming the clinical benefit of this procedure in diagnosing potentially curable malignant lesions. The mean cost of each new diagnosis of malignancy was 4,675 Euros, which was three times lower if the initial gastroscopy was performed by a less experienced endoscopist. abstract_id: PUBMED:34090833 Prevalence and characteristics of post-gastroscopy gastric cancer: A retrospective study from an academic medical center. Background And Study Aims: Gastric cancer is diagnosed by endoscopy but false negative rates of up to 10% in the west and 40% in Asia have been reported. In Lebanon, little is known about the rates of post-gastroscopy gastric cancer (PGGC), defined as the proportion of patients diagnosed with gastric cancer with a negative previous examination within 2 years of diagnosis. We aimed to examine the rate of PGGC and its risk factors, clinico-pathologic and endoscopic characteristics at a University medical Center. Patients And Methods: Retrospective analysis of patients with histologically proven gastric malignancy over the last 14 years. Patients with history of upper endoscopy preceding the index diagnostic endoscopy by 6 to 24 months were included. Results: 18,976 patients underwent upper endoscopy and gastric cancer was diagnosed in 323 (1.7%). Of those, only 4 (1.2%) had a preceding endoscopy within 6 to 24 months of diagnosis: 3 adenocarcinoma and one MALT lymphoma. Upon review of the initial endoscopy, a mucosal abnormality had been noted in all 4 patients and biopsies taken in 3 were negative for cancer. 
The mean time to cancer diagnosis was 8 months (range 6-13 months). Conclusion: A small proportion of gastric carcinomas are missed on endoscopy in this study. Patients with endoscopic evidence of mucosal abnormalities and negative biopsies should undergo repeat examination with multiple biopsies. Proper endoscopic technique, lesion recognition and adoption of performance improvement measures are important to optimize endoscopic practice. abstract_id: PUBMED:30113456 Perforation caused by gastric mucosa associated lymphoid tissue lymphoma: A case report and literature review. Rationale: Gastric mucosa-associated lymphoid tissue (MALT) lymphoma is the most common and best-studied extranodal marginal zone lymphoma of the MALT. It is characterized by an indolent clinical course and excellent survival compared with other malignant tumor. Complications such as obstruction, perforation or bleeding are rarely observed. The treatment strategy is still controversial. Patient Concerns: A 59-year-old man, who had been diagnosed with MALT lymphoma by gastroscopy and biopsy one month before, came to the hospital for a sudden onset of abdominal pain after breakfast. Diagnoses: MALT lymphoma; gastric perforation. Interventions: Emergency surgery. Outcomes: Gastric perforation repair and jejunostomy was performed. The patient recovered well and is preparing for combined chemotherapy. Lessons: This case report illustrates the challenges in diagnosis and treatment of MALT lymphoma. We discussed the particularity of its clinical characteristics, treatment strategies and prognosis combined with literature review, and we think that early diagnosis and timely appropriate chemotherapy is of great importance. abstract_id: PUBMED:9869691 Gastroscopic-assisted laparoscopic wedge resection of B-cell gastric mucosa-associated lymphoid tissue (MALT) lymphoma. Helicobacter pylori has been implicated as an etiologic agent for B-cell gastric mucosa-associated lymphoid tissue (MALT) lymphoma. 
This tumor is considered a low-grade entity with an associated indolent course. Some instances of tumor regression with adequate antibiotic treatment and eradication of H. pylori have been reported. Gastric malignancy and, more specifically, lymphoma should be considered in all patients with recurrent or nonhealing ulcers. Diagnostic and treatment strategies for these patients remain controversial. Two patients were evaluated, one for recurrent disease, another for a 3-cm nonhealing ulcer. Both patients refused total gastrectomy. Both patients underwent gastroscopy with simultaneous laparoscopic visualization. The mucosal lesions were identified and removed via laparoscopic stapled wedge resection. A combined endoscopic and laparoscopic approach can be considered as an alternative to radical surgery in the treatment strategy of these tumors. Wedge resection allows for staging and thorough pathologic classification. Further studies are necessary to define the natural history and the role of laparo-endoscopy in the diagnosis, staging, and treatment of B-cell gastric MALT lymphoma.
In low-grade disease, gastroscopy revealed superficial lesions in 56% of cases, ulcerofungating lesions were found in as much as 19%, and ulceroinfiltrating in 25%. Even in low-grade disease, invasion of proper muscle, or deeper, was seen in 28% of patients, and lymph node involvement in 36%; even in low-grade disease confined to mucosa and submucosa, the rate of lymph node involvement was 40%. All cases which, on gastroscopy, appeared to be gastritis or benign ulcer-like lesions were free of lymph node metastasis, but in low-grade disease, this proportion was only 16%. In 33% of cases, pre-operative clinical stage I--as shown by abdominal CT--was found post-operatively to be stage II. The negative predictive value of lymph node detection by CT was 68%. Conclusions: In low-grade gastric MALT lymphoma, the lymph node involvement rate was too high to be neglected. In detecting lymph node metastasis, the diagnostic accuracy of CT was too low. The proportion of suitable candidates for anti-H. pylori treatment for low-grade gastric MALT lymphoma was not high, and in clinical practice, anti-H. pylori treatment in such cases should at present be very carefully applied. abstract_id: PUBMED:8289496 Early diagnosis of gastric lymphoma: gene rearrangement analysis of endoscopic biopsy samples. The diagnosis of gastric lymphoma in endoscopic biopsy specimens remains difficult despite the emergence of accepted criteria for the histologic diagnosis of lymphomas originating from mucosa-associated lymphoid tissue (MALT). The sensitivity and validity of immunoglobulin (Ig) gene rearrangement analysis of mucosal biopsies for the diagnosis of malignant B-cell lymphoma were investigated in comparison with conventional histology and immunohistology. Biopsy specimens from 34 different endoscopies of 20 patients with a previous history, or tentative diagnosis of gastric lymphoma, and 12 control samples were analyzed for the presence of clonal Ig gene rearrangements. 
A clonal B-cell population was detected by Southern blot analysis in all patients with a definitive histologic diagnosis of lymphoma. In addition, in two patients the detection of clonal rearrangements in biopsy specimens preceded by several months the histologic diagnosis of lymphoma, and clonality was confirmed in three further patients where histology remained inconclusive. In some cases of low-grade MALT-lymphoma, discrete spreading of malignant cells within chronically inflamed mucosa was suggested by the presence of identical clonal rearrangements in all simultaneously obtained biopsies, with or without histologically detectable involvement by lymphoma. Our results show that immunoglobulin gene rearrangement studies of endoscopic biopsy samples are an additional powerful tool for the diagnosis of gastric lymphoma, especially for detecting early recurrence, and improve the preoperative assessment of the extent of mucosal involvement. abstract_id: PUBMED:16780555 Frequent and rapid progression of atrophy and intestinal metaplasia in gastric mucosa of patients with MALT lymphoma. Objectives: Association of gastric mucosa-associated lymphoid tissue (MALT) low-grade lymphoma and adenocarcinoma has repeatedly been reported. The aim of this study was to evaluate the frequency and the spreading of atrophy and intestinal metaplasia in gastric mucosa of patients with gastric MALT lymphoma followed after conservative treatment. Methods: Forty-five patients (mean age 45 +/- 2.1 yr) with gastric MALT lymphoma, treated by Helicobacter pylori eradication, chemotherapy with per os single alkylating agents, or both treatments have been followed by gastroscopy with biopsies in antrum and corpus at least once a year. Univariate and multivariate analysis evaluated the association between the appearance of atrophy and intestinal metaplasia in antrum or corpus and different factors related to patients, H. pylori status, lymphoma features, and treatment. 
In addition, histological aspects of gastric biopsies at the time of diagnosis and at the end of follow-up were compared with those of two control groups of age-matched patients with H. pylori gastritis. Results: At the time of diagnosis, only intestinal metaplasia in the corpus was more frequent in patients with gastric MALT lymphoma than in patients with nonulcer dyspepsia. Within a median follow-up of 54.4 months (range 9-196), the percentage of patients with gastric atrophy and intestinal metaplasia increased significantly and became significantly higher than in age-matched nonulcer dyspepsia patients. Multivariate analysis showed a significant association between corpus intestinal metaplasia and corpus atrophy, intestinal metaplasia in the antrum, and duration of the follow-up. Conclusions: Conservative management of gastric MALT lymphoma including H. pylori eradication is associated with progression of gastric atrophy and intestinal metaplasia with frequent involvement of the corpus, which is known to be a precancerous condition. These findings show that long-term endoscopic monitoring should be recommended in such patients.
Moreover, there are conventional DLBCL in the stomach, mostly in the setting of secondary organ involvement. The biology of these gastric DLBCL is identical to that of their extragastric counterparts. This is also true for primary gastric Burkitt lymphoma and mucosal involvement in B-CLL or mantle cell lymphoma. Unfavourable outcomes are always observed for EBV-triggered lymphoproliferations in immunodeficiency and peripheral T-cell lymphomas, which might also arise or be initially diagnosed in the stomach.
CT, however, had a statistically similar detection rate (p > 0.05). In four cases with tumor manifestations outside the stomach (stage IVE), EUS underestimated the tumor stage. In one case, pathologic lymph nodes could be detected only with thoracic CT. In 70% of the cases, EUS imaging resulted in a higher tumor stage classification. In 57% of cases, EUS was the only imaging technique to achieve a classification of the tumor stage at all. Conclusion: In the diagnosis of gastric NHLs, EUS imaging is superior to conventional imaging techniques (ultrasound and CT). Therefore, the use of endoscopic ultrasound for staging can be regarded as a prerequisite for the exact tumor stage classification required in more recent conventional therapeutic approaches. Answer: Yes, gastroscopy remains a valid diagnostic tool in detecting gastric MALT lymphomas, although its effectiveness can be limited by the nonspecific gross pattern of the disease. The study by PUBMED:12404054 found that gastroscopy had a sensitivity of 61% in detecting malignancy, but this sensitivity dropped to 27% when specifically hypothesizing a diagnosis of non-Hodgkin's lymphoma. Despite these limitations, gastroscopy should be maintained in the diagnostic work-up of upper GI tract pathologies due to the indolent clinical course and biologic behavior of primary gastric lymphomas. Additionally, PUBMED:19174093 demonstrated that initial gastroscopy had high diagnostic accuracy, which was slightly lower when performed by resident physicians compared to attending physicians. The accuracy of second-look endoscopy was 100%, indicating that a systematic second-look gastroscopy can significantly improve diagnostic results and confirm the clinical benefit of this procedure in diagnosing potentially curable malignant lesions.
PUBMED:34090833 highlighted that a small proportion of gastric carcinomas are missed on endoscopy, suggesting that patients with endoscopic evidence of mucosal abnormalities and negative biopsies should undergo repeat examination with multiple biopsies to optimize endoscopic practice. Furthermore, PUBMED:30113456 discussed a case of gastric MALT lymphoma that presented with perforation, emphasizing the challenges in diagnosis and treatment of MALT lymphoma and the importance of early diagnosis and timely appropriate chemotherapy. In summary, while gastroscopy has limitations, it is still an essential diagnostic tool for gastric MALT lymphomas, and its effectiveness can be enhanced through careful application, including repeat examinations and multiple biopsies when necessary (PUBMED:12404054, PUBMED:19174093, PUBMED:34090833, PUBMED:30113456).
Instruction: The ionized fraction of serum total magnesium in hemodialysis patients: is it really lower than in healthy subjects? Abstracts: abstract_id: PUBMED:12356189 The ionized fraction of serum total magnesium in hemodialysis patients: is it really lower than in healthy subjects? Aim Of The Study: Based on data in the literature, it remains unclear whether the ionized fraction of serum total magnesium (Mg) is lower in chronic hemodialysis (HD) patients compared to healthy subjects. Patients And Methods: The ionized fraction of serum total Mg was investigated in 49 HD patients, pre- and post-dialysis, and compared to 30 healthy controls. The quality of the analytical performance of the Mg measurements has been emphasized by applying a reference method and/or rigorous internal quality control (IQC). In addition, the ionized fraction of serum total calcium (Ca) was measured in both populations, because the results for Mg should be related to those of Ca. Results: In HD patients, the ionized fraction of serum total Mg was on average 65% (pre-dialysis 64.2% and post-dialysis 66.2%). In healthy controls, the ionized fraction was 64.9%. When the analytical variability was taken into account, no significant differences (p > 0.05) were observed between pre- and post-dialysis samples and controls. For Ca, an ionized fraction of 55.3% was found in HD patients, which was not significantly different from the fraction obtained in the control group (55.7%). Conclusion: The present study demonstrates that, compared to healthy controls, the ionized fraction of serum total Mg is not different in hemodialysis patients. abstract_id: PUBMED:12372946 Maintenance hemodialysis and circulating ionized magnesium. Background: Circulating magnesium exists in the bound and in the free ionized form, that is biologically active. In kidney disease the relationship between ionized and total circulating magnesium is often altered. 
Little information is available on the influence of hemodialysis on the relationship between ionized and total circulating magnesium in end-stage kidney disease. Methods: Plasma total and ionized magnesium and the plasma ionized magnesium fraction were assessed before and after hemodialysis (dialysate magnesium content 0.75 mmol/l) in 46 patients with end-stage kidney disease and in a control group of 25 healthy subjects. Results: In patients plasma total (from 1.19 [1.05-1.33] to 1.10 [1.02-1.16] mmol/l; median and interquartile range) and ionized (from 0.71 [0.66-0.78] to 0.65 [0.63-0.69] mmol/l) magnesium significantly decreased during dialysis (control subjects: 0.82 [0.80-0.92], respectively, 0.57 [0.54-0.59] mmol/l). The plasma ionized magnesium fraction was significantly lower in patients both before (0.61 [0.58-0.64)] and after (0.60 [0.56-0.62]) hemodialysis than in controls (0.68 [0.65-0.70]). Conclusions: The study demonstrates a tendency towards a reduced circulating ionized magnesium fraction in end-stage kidney disease that is not corrected by hemodialysis. abstract_id: PUBMED:8953122 Measurement of serum ionized versus total levels of magnesium and calcium in hemodialysis patients. Until recently, only techniques for measuring total magnesium have been available. Now commercially available instruments using new ion-selective electrodes (ISE) for Mg+2 have made possible reliable measurement of ionized magnesium also in clinical practice. We measured changes induced by a hemodialysis session in serum ionized and total pools of magnesium and calcium using ISE methods. When compared with levels in age- and sex-matched control subjects, both serum ionized magnesium (0.68 +/- 0.11 vs. 0.56 +/- 0.06 mmol/l, p < 0.001) and total magnesium (1.00 +/- 0.19 vs. 0.82 +/- 0.08 mmol/l, p < 0.001) were higher in hemodialysis patients. 
The fraction of ionized Mg was 68.6 +/- 2.9% in hemodialysis patients, and did not differ significantly from that in controls (68.7 +/- 5.3%). The postdialysis value was 68.1 +/- 7.7%. The corresponding ratios of calcium (ionized/total) were 51.0 +/- 2.8% pre- and 50.9 +/- 4.6% postdialysis. Both prior to and after dialysis the correlation between ionized and total magnesium was high (r = 0.976, p < 0.001, and r = 0.925, p < 0.001, respectively). The corresponding ionized versus total calcium correlations were r = 0.724 (p < 0.001) before and 0.423 (p = 0.003) after dialysis. The changes induced by a hemodialysis session in serum concentration of ionized magnesium and calcium were dependent on the concentration of the cation in the dialysate. The change in PTH (suppression or stimulation) was very closely related to the changes in the serum concentration of ionized calcium. We concluded that measurement of ionized magnesium using ion-selective electrodes for Mg++ is an interesting new method in evaluating body magnesium status. Its definitive role in clinical practice cannot be judged on the basis of the results of the present study, but it will probably not achieve the same importance as the measurement of ionized calcium in clinical nephrology. abstract_id: PUBMED:28683422 Ionized magnesium in plasma and erythrocytes for the assessment of low magnesium status in alcohol dependent patients. Background: Studies on the homeostasis of magnesium in alcohol-dependent patients have often been characterized by low hypomagnesemia detection rates. This may be due to the fact that the content of magnesium in blood serum constitutes only 1% of the average magnesium level within the human body. However, the concentration of ionized magnesium is more physiologically important and makes up 67% of the total magnesium within a human organism. 
There are no data concerning the determination of the ionized fraction of magnesium in patients addicted to alcohol and its influence on mental health status. Methods: This study included 100 alcohol-dependent patients and 50 healthy subjects. The free magnesium fraction was determined potentiometrically by means of ion-selective electrodes. The total magnesium level was determined using a biochemical Indiko Plus analyzer. In this study, different psychometric scales were applied. Results: Our results confirm the usefulness of ionized magnesium concentrations in erythrocytes and plasma as a diagnostic parameter of low magnesium status in alcohol-dependent patients. Conclusions: The lower the concentration of ionized magnesium, the worse the quality of life an alcohol-dependent person might experience. In the case of total magnesium, no such correlation was determined.
Serum ionized magnesium level was measured in both the case and control groups and the results were compared. A P value of <0.05 was considered significant. Results: The case group consisted of 13 males and 35 females; the control group included 14 males and 34 females. Mean age was 33.47 ± 10.32 yr for the case group and 30.45 ± 7.12 yr for the control group. Twenty-eight patients described the intensity of their headaches as moderate, 15 patients had severe headaches, and the remaining 5 had only mild headaches. Mean serum level of ionized Mg was 1.16 ± 0.08 in the case group and 1.13 ± 0.11 in the control group, with no significant difference (P > 0.05). Conclusion: Serum ionized magnesium, which is the active form of this ion, was not significantly different in migraineurs and those without migraine. This may propose a revision regarding the pathogenesis of migraine and question the role of magnesium in this type of headache.
The correlation coefficient between serum ionized and total magnesium was r=0.87 (p<0.001) in the patients, and r=0.75 (p<0.001) in the controls. In the patient group the fraction of ionized magnesium in the total was negatively related to the serum albumin level (r=-0.41, p<0.001). Serum total magnesium was below the reference range in 30 out of 150 measurements, serum ionized magnesium in only 9 out of 150 measurements, respectively. Thus, 21 cases with low total but normal ionized magnesium (two thirds of hypomagnesemia according to serum total magnesium) were false positive. Total magnesium measurement may overestimate the incidence of hypomagnesemia when significant hypoalbuminemia is present. Measurement of serum ionized magnesium instead of total magnesium may therefore be of advantage in evaluating patients with hypoalbuminemia and when hypomagnesemia is expected. abstract_id: PUBMED:8268649 Deficiency of serum ionized magnesium in patients receiving hemodialysis or peritoneal dialysis. Serum total magnesium (TMg) measurements in dialysis patients are variable, with some groups reporting hypermagnesemia and some hypomagnesemia. It had not been possible to measure the biologically active fraction, ionized magnesium (IMg2+). The authors utilized an ion-selective electrode to measure IMg in 26 hemodialysis patients and 10 peritoneal dialysis (CAPD) patients and compared the results with those from 66 age matched control subjects. Dialysate magnesium was 0.375 mM/L for the hemodialysis and 0.25 mM/L for the CAPD patients. When compared with control subjects, both hemodialysis and CAPD patients had significantly lower IMg2+ (0.55 +/- 0.02 and 0.50 +/- 0.02 vs. 0.60 +/- 0.004 mM/L; p < 0.05) and greater or normal TMg values (0.99 +/- 0.04, different at the p < 0.001 level, and 0.85 +/- 0.04 vs. 0.84 +/- 0.008). Ionized calcium (ICa2+) values were similar for all three groups (1.15 +/- 0.02 and 1.21 +/- 0.04 vs. 
1.17 +/- 0.01), resulting in increased mean ICa2+/IMg2+ ratios (2.14 +/- 0.07 and 2.42 +/- 0.06 vs. 1.95 +/- 0.02 for the control subjects; p < 0.05). The percent of total magnesium that was ionized (%IMg2+) was low in both the hemodialysis and CAPD patients (55.6 +/- 0.93 and 59.2 +/- 1.05) compared with that of control subjects (72 +/- 0.61; p < 0.05). IMg2+ values correlated with TMg values in both hemodialysis (r = 0.93; p < 0.0001) and CAPD (r = 0.92; p < 0.0001) patients, but did not correlate with age, time on dialysis, weight, fasting cholesterol or triglyceride, albumin, blood urea nitrogen (BUN), creatinine, hematocrit, phosphate, or PTH values.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:29318926 Assessment of preeclampsia risk by use of serum ionized magnesium-based equation. Background: Preeclampsia is a common medical complication in pregnancy. It has been reported to be associated with decreased serum magnesium levels. However, there has not been evidence demonstrating utilization of change in magnesium for prediction of preeclampsia. The purpose of this study was to develop magnesium fraction-based equations which took other significant clinical risk factors into consideration for prediction of preeclampsia. Methods: We collected serum total and ionized magnesium levels from 84 pregnant women diagnosed with preeclampsia after week 20 of pregnancy. The ionized magnesium fraction was then calculated as the percentage ratio of ionized to total magnesium level. Results: Sixty-four (76.19%) women had normal pregnancy and 20 (23.81%) developed preeclampsia. The ionized magnesium fraction was significantly lower in preeclampsia group (23.95 ± 4.7% vs. 26.28 ± 2.3%, p = .04). Additionally, lower ionized magnesium fraction (24.67%), teenage and elderly primigravida were significantly associated with preeclampsia (OR = 4.41, 95% CI: 1.46-13.40, OR = 5.47, 95% CI: 1.85-35.42 and OR = 11.11, 95% CI: 1.09-113.78, respectively).
Consequently, we attempted to develop ionized magnesium fraction-based equations to calculate risk scores for preeclampsia. The area under the ROC curve for the predictive accuracy of the model was 0.77 (p < .001), and the ROC analysis suggested that a score of 0.27 would be a threshold for screening for preeclampsia with 70% sensitivity and 81% specificity. Conclusions: The ionized magnesium fraction may be appropriate for screening of preeclampsia. We suggested blood testing for total and ionized magnesium concentrations as well as calculation of the ionized magnesium fraction in addition to routine antenatal care for better screening of the disease. abstract_id: PUBMED:29180531 Anion Gap as a Determinant of Ionized Fraction of Divalent Cations in Hemodialysis Patients. Background And Objectives: Circulating levels of anions that bind to magnesium and calcium are often altered in patients with CKD. However, it is unknown how these alterations affect the ionized fraction of magnesium and calcium. Design, Setting, Participants, & Measurements: This cross-sectional study involved patients on maintenance hemodialysis and patients not on dialysis who visited the outpatient department of nephrology. We collected whole-blood samples to measure ionized magnesium and calcium concentrations. Adjusted anion gap was calculated as an integrative index of unmeasured anions. Results: A total of 118 patients on hemodialysis and 112 patients not on dialysis were included. Although the prevalence of hypermagnesemia defined by total magnesium was much higher in patients on hemodialysis than in patients not on dialysis (69% versus 12%; P<0.001), the prevalence of hypermagnesemia defined by ionized magnesium did not differ significantly (13% versus 18%; P=0.28). Among patients on hemodialysis with high total magnesium, 83% had normal or low ionized magnesium.
Consequently, the mean ionized fraction of magnesium in patients on hemodialysis was significantly lower than that in patients not on dialysis (51% versus 63%; P<0.001). Similarly, the mean ionized fraction of calcium in patients on hemodialysis was lower than that in patients not on dialysis (55% versus 56%; P<0.001). In patients on hemodialysis, who had a higher adjusted anion gap than patients not on dialysis (mean [SD]: 14.1 [2.2] versus 5.1 [3.1]), the ionized fractions of magnesium and calcium were inversely associated with the adjusted anion gap. Furthermore, the anion gap significantly improved predictions of ionized magnesium and calcium in patients on hemodialysis. Conclusions: Anions that accumulate in patients on hemodialysis contribute to the lower ionized fraction of magnesium and calcium. Equations that incorporate the anion gap provide better predictions of ionized magnesium and calcium in patients on hemodialysis. abstract_id: PUBMED:34132041 Estimating serum-ionized magnesium concentration in hemodialysis patients. Introduction: Cardiovascular mortality is significantly increased in kidney failure with replacement therapy (KFRT) patients, which is partly mediated by enhanced vascular calcification. Magnesium appears to have anticalcifying capabilities, and hypomagnesemia has been associated with increased mortality in KFRT patients. Ionized magnesium represents the biologically and physiologically active form. As serum ionized magnesium (Mgion) is difficult to assess in clinical routine, estimating equations derived from routinely assessed laboratory parameters could facilitate medical treatment. Methods: We developed equations to estimate serum Mgion using linear regression analysis in 191 hemodialysis (HD) patients. The reference test was measured ionized magnesium (Mgion). As index tests, we chose estimated Mgion using total magnesium (Mgtot) and other laboratory and demographic variable candidates.
Equations were internally validated using 749 subsequent Mgion measurements. Findings: The median patient age was 65 years, and 67.5% of the patients were male. Median (interquartile range [IQR]) measured Mgion was 0.64 [0.57, 0.72] mmol/L; 11 (6%) patients were hypomagnesemic (i.e., <0.45 mmol/L) and 127 (66%) were hypermagnesemic (>0.60 mmol/L). The final equation at the end of the development process included Mgtot and serum ionized and total calcium concentrations. In the validation dataset, bias (i.e., median difference between measured and estimated Mgion, -0.017 [-0.020, -0.014] mmol/L) and precision (i.e., IQR of bias 0.043 [0.039, 0.047] mmol/L) were small; 90% [88, 93] of estimated values were within ±10% of measured values. The equation detected normomagnesemia with overall good diagnostic accuracy (area under the receiver-operating curve 0.91 [0.89, 0.93]). Discussion: Mgion can be estimated from equations containing routinely assessed laboratory variables with high accuracy and good overall performance. These equations might simplify the assessment of ionized magnesium levels in individual hemodialysis patients and help the treating physician to guide the overall treatment. Answer: The ionized fraction of serum total magnesium in hemodialysis patients has been a subject of investigation, with studies yielding varying results. According to the study with PUBMED:12356189, the ionized fraction of serum total magnesium in hemodialysis patients was found to be on average 65% (pre-dialysis 64.2% and post-dialysis 66.2%), which was not significantly different from the ionized fraction in healthy controls (64.9%). This study concluded that the ionized fraction of serum total magnesium is not different in hemodialysis patients compared to healthy controls. However, another study with PUBMED:12372946 reported a different finding.
It demonstrated a tendency towards a reduced circulating ionized magnesium fraction in end-stage kidney disease that is not corrected by hemodialysis. In this study, the plasma ionized magnesium fraction was significantly lower in patients both before and after hemodialysis than in controls. Similarly, the study with PUBMED:8268649 found that both hemodialysis and peritoneal dialysis patients had significantly lower ionized magnesium (IMg2+) and greater or normal total magnesium (TMg) values compared to control subjects, indicating a deficiency in serum ionized magnesium in patients receiving dialysis. On the other hand, the study with PUBMED:8953122 found that the fraction of ionized magnesium was 68.6% in hemodialysis patients, which did not differ significantly from that in controls (68.7%). This suggests that the ionized fraction of serum total magnesium in hemodialysis patients may not be lower than in healthy subjects. In conclusion, the evidence is mixed regarding whether the ionized fraction of serum total magnesium is lower in hemodialysis patients compared to healthy subjects. Some studies report no significant difference, while others indicate a tendency towards a reduced ionized magnesium fraction in patients with end-stage kidney disease.
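The estimating-equation approach described in PUBMED:34132041 — a linear model built from total magnesium and calcium measurements, then validated by bias (median difference), precision (IQR of differences), and the share of estimates within ±10% of measured values — can be illustrated with a minimal sketch. The data and coefficients below are entirely hypothetical, generated only so the workflow can be shown end to end; nothing here reproduces the published equation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort (values in mmol/L) standing in for the development data:
# total Mg, ionized Ca, total Ca, and measured ionized Mg.
n = 200
mg_tot = rng.normal(1.05, 0.15, n)
ca_ion = rng.normal(1.15, 0.08, n)
ca_tot = rng.normal(2.25, 0.15, n)
# Assume a linear ground truth plus noise, mirroring the reported model form.
mg_ion = 0.55 * mg_tot + 0.10 * ca_ion - 0.05 * ca_tot + 0.08 + rng.normal(0, 0.02, n)

# Fit the estimating equation by ordinary least squares.
X = np.column_stack([mg_tot, ca_ion, ca_tot, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, mg_ion, rcond=None)
mg_est = X @ coef

# Validation metrics as described in the abstract: bias (median difference),
# precision (IQR of the differences), and share of estimates within +/-10%.
diff = mg_ion - mg_est
bias = np.median(diff)
precision = np.percentile(diff, 75) - np.percentile(diff, 25)
p10 = np.mean(np.abs(diff) <= 0.10 * np.abs(mg_ion))
print(f"bias={bias:.4f} mmol/L, IQR={precision:.4f} mmol/L, within 10%: {p10:.0%}")
```

In the published study these metrics were computed on a held-out set of subsequent measurements rather than on the fitting data; the sketch collapses that split for brevity.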
Instruction: Emergency laparoscopic colectomy: does it measure up to open? Abstracts: abstract_id: PUBMED:36602516 Comparison of Laparoscopic and Open Emergency Surgery for Colorectal Perforation: A Retrospective Study. Background: This study aimed to clarify the safety and efficacy of laparoscopic surgery for colorectal perforation by comparing the clinical outcomes between laparoscopic and open emergency surgery for colorectal perforation. Materials and Methods: We retrospectively reviewed the data of 116 patients who underwent surgery for colorectal perforation. The patients were categorized into two groups: the open group included patients who underwent laparotomy, and the laparoscopic group included those who underwent laparoscopic surgery. Clinical and operative characteristics and postoperative outcomes were evaluated. Results: The open and laparoscopic groups included 67 and 49 patients, respectively. More than half of the patients in both groups developed perforation in the sigmoid colon (open, 58.2%; laparoscopic, 61.2%). The most common cause of perforation was diverticulum, followed by colorectal cancer. The mean intraoperative blood loss was significantly lower in the laparoscopic group than in the open group (70.0 mL versus 160.3 mL; P = .0290). The incidence of surgical site infection was lower in the laparoscopic group than in the open group (2.0% versus 13.4%; P = .0430). There were no significant differences in either the short- or long-term outcomes between the two groups. Univariate and multivariate analyses showed that the choice of surgical approach (open versus laparoscopic) did not affect overall survival in patients with colorectal perforation. Conclusion: The laparoscopic approach for colorectal perforation in an emergency setting can be safely performed and provides certain advantages over an open approach in suitable patients. abstract_id: PUBMED:38396446 Laparoscopic versus Open Emergency Surgery for Right Colon Cancers. 
Background: A laparoscopic approach to right colectomies for emergency right colon cancers is under investigation. This study compares perioperative and oncological long-term outcomes of right colon cancers undergoing laparoscopic or open emergency resections and identifies risk factors for survival. Methods: Patients were identified from a prospectively maintained institutional database between 2009 and 2019. Demographics, clinicopathological features, recurrence, and survival were investigated. Cox regression analysis was performed for risk factor analysis. Results: A total of 202 right colectomies (114 open and 88 laparoscopic) were included. ASA III-IV was higher in the open group. The conversion rate was 14.8%. Laparoscopic surgery was significantly longer (156 vs. 203 min, p < 0.001); pTNM staging did not differ. Laparoscopy was associated with a higher lymph node yield, and showed better resection clearance (R0, 78.9 vs. 87.5%, p = 0.049) and a shorter postoperative stay (12.5 vs. 8.0 days, p < 0.001). Complication rates and grade were similar. The median length of follow-up was significantly longer in the laparoscopic group (20.5 vs. 33.5 months, p < 0.001). Recurrences were similar (34.2 vs. 36.4%). Open surgery had lower five-year overall survival (OS, 27.1 vs. 51.7%, p = 0.001). Five-year disease-free survival was similar (DFS, 55.8 vs. 56.5%). Surgical approach, pN, pM, retrieved LNs, R stage, and complication severity were risk factors for OS upon multivariate analysis. Pathological N stage and R stage were risk factors for DFS upon multivariate analysis. Conclusions: A laparoscopic approach to right colon cancers in an emergency setting is safe in terms of perioperative and long-term oncological outcomes. Randomized control trials are required to further investigate these results. abstract_id: PUBMED:37489661 Laparoscopic Appendectomy versus Open Appendectomy in Acute Appendicitis.
Background: Appendectomy is the most common emergency surgical procedure performed. Appendectomy is performed by either open or laparoscopic methods. However, there is a lack of consensus regarding the most appropriate method. This study aimed to compare the outcomes of laparoscopic and open appendectomy in the treatment of acute appendicitis. Methods: Fifty-two patients undergoing appendectomy were analyzed in this prospective comparative study, with 26 patients each in the laparoscopic and open groups. The outcomes were measured in terms of operative time, postoperative pain at 4, 6 and 12 hours, length of hospital stay, postoperative complications according to the modified Clavien-Dindo classification, and cost analysis. Results: The laparoscopic group had a longer time from completion of surgery to exit from the operation theatre (30 min laparoscopic versus 20 min open, p<0.01) and significantly higher cost (Nrs. 26,295 for laparoscopic versus Nrs. 19,575 for open, p<0.01) than the open group. Operative time, time from entering the operation theatre to placement on the operation table, time from placement on the table to initiation of anesthesia, postoperative pain at 4, 6 and 12 hours, and postoperative complications did not differ significantly between the two groups. Conclusions: The results suggest that the laparoscopic appendectomy group had a longer recovery time after operation and was costlier than the open appendectomy group. Thus, the choice of operative procedure can be based on the patient's preference. abstract_id: PUBMED:33080991 The Analysis of Risk Factors in the Conversion from Laparoscopic to Open Cholecystectomy. Laparoscopic cholecystectomy is a standard treatment for cholelithiasis. In situations where laparoscopic cholecystectomy is dangerous, a surgeon may be forced to change from laparoscopy to an open procedure. Data from the literature show that 2 to 15% of laparoscopic cholecystectomies are converted to open surgery intraoperatively for various reasons.
The aim of this study was to identify the risk factors for the conversion of laparoscopic cholecystectomy to open surgery. A retrospective analysis of medical records and operation protocols was performed. The study group consisted of 263 patients whose laparoscopic procedure was converted to open surgery, and 264 randomly selected patients in the control group. Conversion risk factors were assessed using logistic regression analysis, which modeled the probability of a certain event as a function of independent factors. Statistically significant factors in the regression model with all explanatory variables were age, emergency treatment, acute cholecystitis, peritoneal adhesions, chronic cholecystitis, and inflammatory infiltration. The use of predictive risk assessments or nomograms can be the most helpful tool for risk stratification in a clinical scenario. With such predictive tools, clinicians can optimize care based on the known risk factors for conversion, and patients can be better informed about the risks of their surgery. abstract_id: PUBMED:18689167 Emergency laparoscopic surgery. The laparoscopic approach has represented a major step forward in general and emergency surgery. Its application in the emergency setting still raises a number of concerns that limit its more widespread use. To assess the true scope of laparoscopic surgery in the acute abdominal setting, we retrospectively evaluated our experience. From February 2003 to June 2007, 314 patients underwent an emergency laparoscopic operation, for lower abdominal pain (193 patients), acute cholecystitis (78 patients), bowel obstruction (18 patients), diffuse peritonitis (16 patients), blunt abdominal trauma (6 patients), and acute pancreatitis (3 patients). Laparoscopy yielded a good diagnostic definition in all cases. The conversion rate was 16.6% (52 patients). Mean operative time was 63 +/- 29 minutes.
The overall major morbidity rate was 1.5% (4 patients) and the mortality rate was 0.4% (1 patient). The laparoscopic approach in patients with abdominal emergencies is a useful tool that yields a reliable diagnostic definition in uncertain cases and allows minimal access treatment of the causative disease in the majority of cases. abstract_id: PUBMED:31523536 Comparison of Open Appendectomy and Laparoscopic Appendectomy in Perforated Appendicitis. Introduction Laparoscopic appendectomy for nonperforated appendicitis is associated with improved outcomes. This study compares laparoscopic appendectomy and open appendectomy in cases of a perforated appendix by assessing surgical site infection, mean operating time, and length of hospital stay. Materials and methods This study was a prospective randomized study conducted at the Department of Surgery, Holy Family Hospital, Rawalpindi, Pakistan, from January 2016 to January 2017, by randomly allotting the laparoscopic or the open appendectomy technique to 130 patients by the lottery method. Patients having a perforated appendix were included after they provided informed consent. Data were entered and analyzed using IBM SPSS Statistics for Windows, Version 20.0 (IBM Corp., Armonk, NY, US). Results The frequency of wound site infection was significantly higher in open appendectomy (27.69%) than in the laparoscopic approach (10.77%; p=0.01). Mean hospital stay was slightly longer in the laparoscopic approach (4.38 ± 1.09 days) than in open appendectomy (4.18 ± 0.77 days; p=0.23). Mean operating time for laparoscopic appendectomy and open appendectomy was 46.98 ± 2.99 minutes and 53.02 ± 2.88 minutes, respectively (p<0.001). Conclusion Laparoscopic appendectomy was associated with fewer surgical site infections and a shorter mean operating time than open appendectomy. abstract_id: PUBMED:21234139 Laparoscopic versus open appendicectomy: An Indian perspective.
Background: Laparoscopic appendicectomy, though widely practiced, has not gained universal approval. Laparoscopic appendicectomy in India is relatively new and the literature is scant. This study aimed to compare laparoscopic with open appendicectomy and ascertain the therapeutic benefit, if any, in the overall management of acute appendicitis. Materials And Methods: The study group consisted of two hundred and seventy-nine patients suffering from acute appendicitis. One hundred patients underwent laparoscopic appendicectomy (LA) and one hundred and seventy-nine patients underwent open appendicectomy (OA). Comparison was based on length of hospital stay, operating time, postoperative morbidity, duration of convalescence and operative cost in terms of their medians. The Mann-Whitney statistics (T) were calculated and, because of the large samples, the normal deviate test (Z) was used. Results: Of the hundred patients, six patients (6%) had the procedure converted to open surgery. The rates of infections and overall complications (LA: 15%, OA: 31.8%, P < 0.001) were significantly lower in patients undergoing LA. The median length of stay was significantly shorter after LA (3 days after LA, 5 days after OA, P < 0.0001) than after OA. The operating time was shorter {OA: 25 min (median), LA: 28 min (median), 0.01 < P < 0.05} in patients undergoing open appendicectomy compared to laparoscopic appendicectomy. Conclusion: Hospital stay for LA is significantly shorter and the one-time operative charges appear to be almost the same. LA is also associated with increased clinical comfort in terms of fewer wound infections, faster recovery, earlier return to work and improved cosmesis. abstract_id: PUBMED:36645563 Differences in in-hospital outcomes and healthcare utilization for laparoscopic versus open approach for emergency inguinal hernia repair: a nationwide analysis.
Purpose: There has been a growing debate over whether laparoscopic or open surgical techniques are superior for inguinal hernia repair. For incarcerated and strangulated inguinal hernias, the laparoscopic approach remains controversial. This study aims to be the first nationwide analysis to compare clinical and healthcare utilization outcomes between laparoscopic and open inguinal hernia repair in an emergency setting. Methods: A retrospective analysis of the National Inpatient Sample was performed. All patients who underwent laparoscopic inguinal hernia repair (LIHR) and open inguinal hernia repair (OIHR) between October 2015 and December 2019 were included. The primary outcome was mortality, and secondary outcomes included post-operative complications, ICU admission, length of stay (LOS), and total admission cost. The two approaches were compared using univariate and multivariate logistic and linear regression. Results: Between the years 2015 and 2019, 17,205 patients were included. Among these, 213 patients underwent LIHR and 16,992 underwent OIHR. No difference was observed between laparoscopic and open repair for mortality (odds ratio [OR] 0.80, 95% CI [0.25, 2.61], p = 0.714). Additionally, there was no significant difference between groups for post-operative ICU admission (OR 1.11, 95% CI [0.74, 1.67], p = 0.614), post-operative complications (OR 1.09, 95% CI [0.76, 1.56], p = 0.647), LOS (mean difference [MD]: -0.02 days, 95% CI [-0.56, 0.52], p = 0.934), or total admission cost (MD: $3,028.29, 95% CI [-$110.94, $6,167.53], p = 0.059). Conclusion: Laparoscopic inguinal hernia repair is comparable to open inguinal hernia repair with respect to morbidity, mortality, and healthcare resource utilization. abstract_id: PUBMED:27182227 Comparison of Laparoscopic Appendectomy with open appendectomy in Treating Children with Appendicitis.
Objective: To analyze the feasibility and curative effect of laparoscopic appendectomy in the treatment of pediatric appendicitis and compare it with open appendectomy. Methods: Two hundred and sixty patients were selected for this study and randomly divided into an open appendectomy group (130 cases) and a laparoscopic appendectomy group (130 cases). Patients in the open appendectomy group underwent traditional open appendectomy, while patients in the laparoscopic group were treated with laparoscopic appendectomy. Incision length, intraoperative blood loss, duration of operation, time out of bed, time to first flatus, time to resumption of oral intake, catheter drainage time, urinary catheterization time, duration of antibiotic use, use of painkillers, and the incidence of complications such as incision infection, residual abscess and intestinal obstruction were compared between the two groups. Results: Incision length, blood loss and duration of operation were better in the laparoscopic appendectomy group than in the open appendectomy group, and the differences were statistically significant (P<0.05). Time out of bed, time to resumption of oral intake, time to first flatus, drainage time, catheterization time and duration of antibiotic use were all superior in the laparoscopic appendectomy group to those in the open appendectomy group, and the differences were statistically significant (P<0.05). The incidence of complications in the laparoscopic appendectomy group was much lower than in the open appendectomy group, and the difference was statistically significant (P<0.05). Conclusion: Laparoscopic appendectomy has the advantages of minimal trauma, sound curative effect, low incidence of complications and rapid recovery, and can effectively relieve the pain of children suffering from appendicitis. Hence it is worth promoting and should be preferred. abstract_id: PUBMED:21160796 Gastrointestinal symptomatic outcomes of laparoscopic and open gastrectomy.
Aim: To compare the laparoscopic and open gastrectomy approaches for short-term morbidity, length of hospital stay, and long-term gastrointestinal symptoms. Methods: Patients who had undergone gastrectomy had their medical records reviewed for demographic data, type of gastrectomy, short-term morbidity, and length of hospital stay. Patients were contacted and asked to complete the Gastrointestinal Symptom Rating Scale (GSRS). The GSRS measures three domains of GI symptoms: Dyspepsia Syndrome (DS) for the foregut (best score 0, worst score 15), indigestion syndrome (IS) for the midgut (best score 0, worst score 12), and bowel dysfunction syndrome (BDS) for the hindgut (best score 0, worst score 16). Statistical analysis was done using the Mann-Whitney U-test. Results: We had complete data on 32 patients: 7 laparoscopic and 25 open. Of these, 25 had a gastroenteric anastomosis and 6 did not. The table shows the results as medians with interquartile range. Laparoscopic gastrectomy had a better score than open gastrectomy in the DS domain (0 vs 1, P = 0.02), while gastrectomy without anastomosis had a better score than gastrectomy with anastomosis in the IS domain (0 vs 1, P = 0.05). Conclusion: Patients have few adverse gastrointestinal symptoms and preserve good gastrointestinal function after undergoing any type of gastrectomy. The laparoscopic approach was associated with better dyspepsia and foregut symptom scores. Performing an anastomosis led to mild adverse midgut and indigestion effects. Answer: The comparison between laparoscopic and open emergency colectomy shows that laparoscopic surgery can be safely performed and offers certain advantages over the open approach in suitable patients.
Studies have found that laparoscopic surgery for colorectal perforation resulted in significantly lower intraoperative blood loss and a lower incidence of surgical site infection compared to open surgery, with no significant differences in short- or long-term outcomes between the two groups (PUBMED:36602516). Similarly, for right colon cancers, laparoscopic emergency resections were associated with higher lymph node yield, better resection clearance, and shorter postoperative stay, although the laparoscopic surgery took longer. Open surgery had lower five-year overall survival compared to laparoscopic surgery, but five-year disease-free survival was similar between the two approaches (PUBMED:38396446). In the context of emergency inguinal hernia repair, laparoscopic repair was found to be comparable to open repair with respect to morbidity, mortality, and healthcare resource utilization (PUBMED:36645563). For acute appendicitis, laparoscopic appendectomy was associated with fewer surgical site infections and shorter mean operating time than open appendectomy (PUBMED:31523536), although another study suggested that laparoscopic appendectomy had longer recovery time after operation and was costlier than open appendectomy (PUBMED:37489661). Overall, the evidence suggests that emergency laparoscopic colectomy measures up to open colectomy in terms of safety and efficacy, with some studies indicating additional benefits such as reduced blood loss, infection rates, and hospital stay. However, the choice of surgical approach should be based on patient-specific factors, and the decision can be informed by the patient's preference and the surgeon's expertise with laparoscopic techniques (PUBMED:37489661). Randomized control trials are encouraged to further investigate these results and solidify the findings (PUBMED:38396446).
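Several of the studies summarised above (e.g., PUBMED:36645563) report group comparisons as odds ratios with 95% confidence intervals. As a minimal sketch of how such a figure is derived from a 2x2 contingency table, the function below computes a Wald-type odds ratio and CI; the counts in the usage line are hypothetical illustrations, not taken from any of the cited studies.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table:
    a = group 1 with event, b = group 1 without the event,
    c = group 2 with event, d = group 2 without the event."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts for illustration: 4/213 deaths after laparoscopic repair
# versus 320/16,992 after open repair (numbers invented for this sketch).
or_, lower, upper = odds_ratio_ci(4, 209, 320, 16672)
print(f"OR = {or_:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```

A confidence interval that spans 1.0, as here, corresponds to the "no significant difference" conclusions reported in the abstracts; the published analyses additionally adjust such estimates with multivariate regression.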
Instruction: Effect of multiparity on morphometry and oestrogen receptor expression of pelvic and perineal striated muscles in rabbits: is serum oestradiol relevant? Abstracts: abstract_id: PUBMED:23664379 Effect of multiparity on morphometry and oestrogen receptor expression of pelvic and perineal striated muscles in rabbits: is serum oestradiol relevant? Objective: To determine changes in morphometry and expression of oestrogen receptors (OR) in the pubococcygeus and bulbospongiosus muscles, and the concentration of serum oestradiol associated with multiparity. Study Design: Twelve chinchilla-breed female rabbits were divided into multiparas who had undergone four consecutive deliveries and age-matched virgin nulliparas. Pubococcygeus and bulbospongiosus muscles were surgically removed from each rabbit and processed histologically. Fibre cross-sectional area, number of peripheral nuclei, and expression of ORα and ORβ were measured for each muscle. Serum samples were obtained and the concentration of serum oestradiol was quantified. Results: Multiparity increased (p ≤ 0.05) fibre cross-sectional area and the number of peripheral nuclei per fibre in pubococcygeus muscle, but not in bulbospongiosus muscle. Expression of both ORα and ORβ was high (p ≤ 0.05) in both muscles from multiparous rabbits. A rise in serum oestradiol was measured at the end of the second pregnancy, which was absent (p ≤ 0.05) at the end of the fourth pregnancy. The concentration of serum oestradiol was similar (p > 0.05) in nulliparous and multiparous rabbits. Conclusions: Multiparity caused morphometric changes in pubococcygeus muscle but not in bulbospongiosus muscle. As OR expression was high for both muscles, some properties related to fibre composition or muscle physiology could be affected. The finding that serum oestradiol was not elevated at the end of the fourth pregnancy could be related to changes in pelvic and perineal muscles associated with multiparity. 
abstract_id: PUBMED:25850953 Aromatase expression is linked to estrogenic sensitivity of periurethral muscles in female rabbits. Beyond its role in the conversion of androgens to estrogens, the expression of aromatase could influence estrogenic signalling in targeted tissues. Considering the well-defined biochemical and physiological differences between the pubococcygeus (Pcm) and bulbospongiosus (Bsm) muscles in female rabbits, it is presently hypothesized that aromatase expression is differentially linked to the estrogen sensitivity of each muscle. To this end, serum estradiol levels and the aromatase expression, presence of ERα and ERβ, and morphometry were evaluated in the Pcm and Bsm of female rabbits allocated to control, ovariectomized (OVX) and OVX treated with estradiol benzoate (OVX + EB) groups. Aromatase expression was high in the Pcm. Independently of serum estradiol, ovariectomy increased aromatase expression in the Pcm while decreasing it in the Bsm. The EB treatment avoided the effect of ovariectomy only in the Pcm. The number of immunoreactive nuclei anti-ERα and anti-ERβ was high in the Pcm of OVX and OVX + EB rabbits, while those in the Bsm remained unchanged. The number of peripheral nuclei per fibre and the cross-sectional area-to-myonucleus ratio were modified only in the Pcm. Our findings support that aromatase expression in the Pcm and Bsm of rabbits is differentially linked to the estrogenic sensitivity of each muscle. abstract_id: PUBMED:17633471 Estrogen receptors in pelvic floor for female stress urinary incontinence. Objective: To observe the status of estrogen receptor (ER) alpha and beta in the pelvic floor and their relation to stress urinary incontinence (SUI). Methods: ERalpha and ERbeta in the levator ani muscle and pelvic floor structures of premenopausal and postmenopausal SUI patients were detected by immunohistochemical staining and Western blot.
Results: The positive biopsy rates in the premenopausal and postmenopausal SUI groups were 11% and 0%, and the corresponding results in the control group were 35% and 33%. No ERalpha or ERbeta expression was detected in the levator ani muscle. ERalpha and ERbeta were located in the pelvic fascia tissue around the levator ani muscle. ERbeta was negatively stained in some segments of pelvic fascia tissue. The positive expression rates of ERbeta in the pelvic fascia tissue of the premenopausal and postmenopausal SUI groups were 57% and 33%, which were significantly lower than those in the control groups (P < 0.05). The positive expression rate of ERbeta in the pelvic fascia was significantly lower in the postmenopausal group than in the premenopausal group (P < 0.05). The positive expression rates of ERalpha in the pelvic fascia tissue in the premenopausal and postmenopausal SUI groups were (4.43 +/- 2.64)% and (5.14 +/- 1.79)%, which were significantly lower than (9.61 +/- 5.48)% and (10.88 +/- 2.90)% in the control group (P < 0.05). Western blot showed that ERbeta expression in pelvic fascia tissue was lower than that of ERalpha, and ERalpha and ERbeta expression in the SUI group was also lower than that of the control group. Conclusion: ERalpha and ERbeta are not expressed in the levator ani muscle of women. The pathogenesis of premenopausal SUI correlates with a lower serum estradiol level, and the expression of ER correlates with SUI. abstract_id: PUBMED:30286578 High Estradiol Differentially Affects the Expression of the Glucose Transporter Type 4 in Pelvic Floor Muscles of Rats. Purpose: To characterize the relationship between serum estradiol levels and the expression of glucose transporter type 4 (Glut4) in the pubococcygeus and iliococcygeus muscles in female rats. Methods: The muscles were excised from virgin rats during the metestrus and proestrus stages of the estrous cycle, and from sham and ovariectomized rats implanted with empty or estradiol benzoate-filled capsules.
The expression of estrogen receptors (ERs) was inspected in the muscles at metestrus and proestrus. Relative Glut4 expression, glycogen content, and serum glucose levels were measured. Appropriate statistical tests were done to identify significant differences (P≤0.05). Results: The pubococcygeus and iliococcygeus muscles expressed ERα and ERβ. Glut4 expression and glycogen content in the pubococcygeus muscle were higher at proestrus than at metestrus. No significant changes were observed in the iliococcygeus muscle. In ovariectomized rats, the administration of estradiol benzoate increased Glut4 expression and glycogen content in the pubococcygeus muscle alone. Conclusion: High serum estradiol levels increased Glut4 expression and glycogen content in the pubococcygeus muscle, but not in the iliococcygeus muscle. abstract_id: PUBMED:20824855 Oestrogens for treatment or prevention of pelvic organ prolapse in postmenopausal women. Background: Pelvic organ prolapse is common and can be detected in up to 50% of parous women although many are asymptomatic. Oestrogen preparations are used to improve vaginal thinning (atrophy). It is possible that oestrogens, alone or in conjunction with other interventions, might prevent or assist in the management of pelvic organ prolapse, for example by improving the strength of weakened supporting structures. Objectives: To determine the effects of oestrogens or drugs with oestrogenic effects alone, or in conjunction with other treatments, both for prevention and treatment of pelvic organ prolapse. Search Strategy: We searched the Cochrane Incontinence Group Specialised Register of trials (searched 6 May 2010), MEDLINE (January 1950 to April 2010) as well as reference lists of relevant articles. Selection Criteria: Randomised or quasi-randomised controlled trials that included the use of any oestrogens or drugs with oestrogenic (or anti-oestrogenic) actions for pelvic organ prolapse. 
Data Collection And Analysis: Trials were assessed and data extracted independently by two review authors. Main Results: Three trials and one meta-analysis of adverse effects of a further three trials were identified. One trial did not provide useable data. Two trials included 148 women with prolapse, one included 58 postmenopausal women and the meta-analysis reported a mixed population (women with and without prolapse) of postmenopausal women (N=6984). The meta-analysis and one other small trial investigated the effect of selective oestrogen receptor modulators (SERMs) for treatment or prevention of osteoporosis but also collected data on the effects on prolapse. Interventions included oestradiol, conjugated equine oestrogen and two SERMs, raloxifene and tamoxifen. Only one small trial addressed the primary outcome (prolapse symptoms). One small treatment trial of oestradiol for three weeks before prolapse surgery found a reduced incidence of cystitis in the first four weeks after surgery, but this unexpected finding needs to be confirmed in a larger trial. A meta-analysis of adverse effects of a SERM, raloxifene (used for treatment or prevention of osteoporosis in postmenopausal women), found a statistically significant reduction in the need for prolapse surgery at three-year follow-up (OR 0.50, 95% CI 0.31 to 0.81), but this was statistically significant only in women older than 60 years (OR 0.68, 95% CI 0.22 to 2.08) and the total number of women having prolapse surgery was small. A further small trial comparing conjugated equine oestrogen, raloxifene, tamoxifen and placebo in postmenopausal women having pelvic floor muscle training was too small to detect effects on prolapse outcomes. Authors' Conclusions: There was limited evidence from randomised controlled trials regarding the use of oestrogens for the prevention and management of pelvic organ prolapse.
The use of local oestrogen in conjunction with pelvic floor muscle training before surgery may reduce the incidence of post-operative cystitis within four weeks after surgery. Oral raloxifene may reduce the need for pelvic organ prolapse surgery in women older than 60 years, although this cannot be taken as an indication for practice. There is a need for rigorous randomised controlled trials with long-term follow-up to assess oestrogen preparations for prevention and management of pelvic organ prolapse, particularly as an adjunctive treatment for women using pessaries and also before and after prolapse surgery. abstract_id: PUBMED:19782341 Effects of phytoestrogen genistein on genioglossus function and oestrogen receptors expression in ovariectomized rats. Objective: This study was designed to investigate the effects of genistein on genioglossal muscle function and the expression of oestrogen receptors (ERs) in ovariectomized rats. Design: Fifty female Sprague-Dawley rats were randomly divided into 5 groups: the control group (SHAM), the ovariectomized group (OVX), the ovariectomized rats receiving a low genistein dosage (OVX+L), the moderate genistein dosage group (OVX+M) and the high genistein dosage group (OVX+H). Oestradiol level was detected by radioimmunoassay. The isometric twitch tension (P(t)) and tetanic tension (P(0)) of the GG muscle were measured in response to electrical field stimulation. The expression of ERs at the mRNA and protein levels was measured by real-time PCR and western blot, respectively. Results: Ovariectomy significantly decreased muscle fatigue resistance and the expression of different ERs. Genistein treatment resulted in a dose-dependent protective effect on muscle fatigability and a parallel dose-responsive increase in the expression of oestrogen receptor mRNA and protein levels in the genioglossus, with larger effects on oestrogen receptor beta vs. alpha.
In contrast to the improvements in fatigability, there was no treatment effect on isometric twitch or tetanic tensions. Conclusion: The results indicated that genistein increased muscle fatigue resistance in addition to effects on receptors, and the up-regulation of receptors expression may be a possible mechanism by which genistein improved fatigue. abstract_id: PUBMED:20405008 Estrogen regulates estrogen receptors and antioxidant gene expression in mouse skeletal muscle. Background: Estrogens are associated with the loss of skeletal muscle strength in women with age. Ovarian hormone removal by ovariectomy in mice leads to a loss of muscle strength, which is reversed with 17beta-estradiol replacement. Aging is also associated with an increase in antioxidant stress, and estrogens can improve antioxidant status via their interaction with estrogen receptors (ER) to regulate antioxidant gene expression. The purpose of this study was to determine if ER and antioxidant gene expression in skeletal muscle are responsive to changes in circulating estradiol, and if ERs regulate antioxidant gene expression in this tissue. Methodology/principal Findings: Adult C57BL/6 mice underwent ovariectomies or sham surgeries to remove circulating estrogens. These mice were implanted with placebo or 17beta-estradiol pellets acutely or chronically. A separate experiment examined mice that received weekly injections of Faslodex to chronically block ERs. Skeletal muscles were analyzed for expression of ER genes and proteins and antioxidant genes. ERalpha was the most abundant, followed by Gper and ERbeta in both soleus and EDL muscles. The loss of estrogens through ovariectomy induced ERalpha gene and protein expression in the soleus, EDL, and TA muscles at both the acute and chronic time points. Gpx3 mRNA was also induced both acutely and chronically in all 3 muscles in mice receiving 17beta-estradiol. 
When ERs were blocked using Faslodex, Gpx3 mRNA was downregulated in the soleus muscle, but not the EDL and TA muscles. Conclusions/significance: These data suggest that Gpx3 and ERalpha gene expression are sensitive to circulating estrogens in skeletal muscle. ERs may regulate Gpx3 gene expression in the soleus muscle, but skeletal muscle regulation of Gpx3 via ERs is dependent upon muscle type. Further work is needed to determine the indirect effects of estrogen and ERalpha on Gpx3 expression in skeletal muscle, and their importance in the aging process. abstract_id: PUBMED:15968433 Effects of resveratrol on delayed afterdepolarization and triggered activity induced by ouabain in guinea pig papillary muscles. The purpose of this study was to investigate the effects of resveratrol on delayed afterdepolarization (DAD) and triggered activity (TA) induced by ouabain in guinea pig papillary muscles and the underlying mechanism. Action potentials were recorded using an intracellular microelectrode technique. The results obtained are as follows: (1) DAD and TA induced by ouabain (1 micromol/L) were inhibited by pretreatment with resveratrol (30, 60, and 120 micromol/L) in a concentration-dependent manner; (2) Pretreatment with N(G)-nitro-L-arginine methyl ester (L-NAME, 1 mmol/L), a nitric oxide (NO) synthase inhibitor, failed to abolish the above effect of resveratrol (60 micromol/L); (3) 5 micromol/L 17beta-estradiol (E(2)) or 30 micromol/L resveratrol had no effects on DAD and TA; however, resveratrol combined with E(2) at the same doses exerted significant inhibitory effects on DAD and TA; (4) Pretreatment with tamoxifen (TAM, 10 micromol/L), an inhibitor of estrogen receptor, also did not block the effects of resveratrol (60 micromol/L) on DAD and TA induced by ouabain.
All these results indicated that resveratrol exerted inhibitory effects on DAD and TA induced by ouabain, possibly by reducing calcium influx, which might not be mediated by NO and estrogen receptor. The antiarrhythmic effects of resveratrol may contribute to its cardioprotective action. abstract_id: PUBMED:21790808 Regulation of choline acetyltransferase expression by 17 β-oestradiol in NSC-34 cells and in the spinal cord. Motoneurones located in the ventral horn of the spinal cord conciliate cholinergic innervation of skeletal muscles. These neurones appear to be exceedingly affected in neurodegenerative diseases such as amyotrophic lateral sclerosis. The dysfunction of motoneurones is typically accompanied by alterations of cholinergic metabolism and signalling, as demonstrated by a decrease in choline acetyltransferase (ChAT) expression. 17 β-Oestradiol (E(2)) is generally accepted as a neuroprotective factor in the brain under acute toxic and neurodegenerative conditions and also appears to exert a protective role for motoneurones. In the present study, we attempted to analyse the role of E(2) signalling on ChAT expression in the motoneurone-like cell line NSC-34 and in vivo. In a first step, we demonstrated the presence of oestrogen receptor α and β in NSC-34 cells, as well as in the cervical and lumbar parts of the male mouse spinal cord. Subsequently, we investigated the effect of E(2) treatment on ChAT expression. The application of E(2) significantly increased the transcription of ChAT in NSC-34 cells and in the cervical but not lumbar part of the spinal cord. Our results indicate that E(2) can influence the cholinergic system by increasing ChAT expression in the mouse spinal cord. This mechanism might support motoneurones, in addition to survival-promoting mechanisms, in the temporal balance of toxic or neurodegenerative challenges.
abstract_id: PUBMED:16757570 Estrogenic effect of soy isoflavones on mammary gland morphogenesis and gene expression profile. We examined the effect of 17beta-estradiol (E2) and soy isoflavones' exposure on morphogenesis and global gene expression in the murine mammary gland. Three exposure regimens were applied: isoflavones added to the diet throughout either the lactational period (via the dams) or the postweaning period and E2 administered orally during the lactational period. Whole mounts of mammary glands were evaluated both in juvenile and adult animals with respect to branching morphogenesis and terminal end bud (TEB) formation. At postnatal day (PND) 28, we observed a significant increase in branching morphogenesis in all treated groups with the most pronounced effect after E2 exposure. For the E2-treated animals there was also a significant increase in TEB formation. At PNDs 42-43 the postweaning isoflavone and the E2 groups showed a transient reduction in the number of TEBs. A similar response after isoflavone and E2 exposure was further substantiated by changes in gene expression, since the same groups of genes were up- and downregulated, particularly in the E2 and postweaning isoflavone regimen. All changes in gene expression correlated with changes in the cellular composition of the gland, i.e., more and larger TEBs and ducts. The results suggest an estrogenic response of physiological doses of isoflavones on mammary gland development at both the morphological and molecular level, which resembled that induced by puberty. Answer: The study described in abstract PUBMED:23664379 investigated the effect of multiparity on the morphometry and estrogen receptor (ER) expression in the pubococcygeus and bulbospongiosus muscles of rabbits, as well as the relevance of serum estradiol levels. 
The results showed that multiparity increased the fiber cross-sectional area and the number of peripheral nuclei per fiber in the pubococcygeus muscle, but not in the bulbospongiosus muscle. Additionally, the expression of both ERα and ERβ was higher in both muscles from multiparous rabbits. Interestingly, a rise in serum estradiol was measured at the end of the second pregnancy, but this increase was absent at the end of the fourth pregnancy. The concentration of serum estradiol was similar in nulliparous and multiparous rabbits. These findings suggest that while multiparity caused morphometric changes in the pubococcygeus muscle and affected ER expression in both muscles, serum estradiol levels did not appear to be elevated in multiparous rabbits at the end of the fourth pregnancy, indicating that serum estradiol may not be directly relevant to the changes observed in pelvic and perineal muscles associated with multiparity.
Instruction: Do topical repellents divert mosquitoes within a community? Abstracts: abstract_id: PUBMED:24376852 Do topical repellents divert mosquitoes within a community? Health equity implications of topical repellents as a mosquito bite prevention tool. Objectives: Repellents do not kill mosquitoes--they simply reduce human-vector contact. Thus it is possible that individuals who do not use repellents but dwell close to repellent users experience more bites than otherwise. The objective of this study was to measure if diversion occurs from households that use repellents to those that do not use repellents. Methods: The study was performed in three Tanzanian villages using 15%-DEET and placebo lotions. All households were given LLINs. Three coverage scenarios were investigated: complete coverage (all households were given 15%-DEET), incomplete coverage (80% of households were given 15%-DEET and 20% placebo) and no coverage (all households were given placebo). A crossover study design was used and coverage scenarios were rotated weekly over a period of ten weeks. The placebo lotion was randomly allocated to households in the incomplete coverage scenario. The level of compliance was reported to be close to 100%. Mosquito densities were measured through aspiration of resting mosquitoes. Data were analysed using negative binomial regression models. Findings: Repellent-users had consistently fewer mosquitoes in their dwellings. In villages where everybody had been given 15%-DEET, resting mosquito densities were fewer than half that of households in the no coverage scenario (Incidence Rate Ratio [IRR]=0.39 (95% confidence interval [CI]: 0.25-0.60); p<0.001). Placebo-users living in a village where 80% of the households used 15%-DEET were likely to have over four-times more mosquitoes (IRR=4.17; 95% CI: 3.08-5.65; p<0.001) resting in their dwellings in comparison to households in a village where nobody uses repellent. 
Conclusions: There is evidence that high coverage of repellent use could significantly reduce man-vector contact, but with incomplete coverage, evidence suggests that mosquitoes are diverted from households that use repellent to those that do not. Therefore, if repellents are to be considered for vector control, strategies to maximise coverage are required. abstract_id: PUBMED:696610 Dermal penetration of three topical insect repellents in dogs and rabbits. Three (14)C-labeled candidate insect repellents, cyclohexamethylene carbamide, n-butylsufonimidocyclohexamethylene and 2-hydroxyethylcyclohexane carboxylate were evaluated for skin penetration in dogs and rabbits. Absorption of the repellents was determined by monitoring excreted urine daily for seven days following topical application. Significant percutaneous absorption of all three repellents occurred within 24 hours. (14)C-cyclohexamethylene carbamide showed the greatest absorption. Measurable amounts of radioactivity persisted at the application site for both species after seven days. abstract_id: PUBMED:24767458 Use of a semi-field system to evaluate the efficacy of topical repellents under user conditions provides a disease exposure free technique comparable with field data. Background: Before topical repellents can be employed as interventions against arthropod bites, their efficacy must be established. Currently, laboratory or field tests, using human volunteers, are the main methods used for assessing the efficacy of topical repellents. However, laboratory tests are not representative of real life conditions under which repellents are used and field-testing potentially exposes human volunteers to disease. There is, therefore, a need to develop methods to test efficacy of repellents under real life conditions while minimizing volunteer exposure to disease.
Methods: A lotion-based, 15% N, N-Diethyl-3-methylbenzamide (DEET) repellent and 15% DEET in ethanol were compared to a placebo lotion in a 200 sq m (10 m × 20 m) semi-field system (SFS) against laboratory-reared Anopheles arabiensis mosquitoes and in full field settings against wild malaria vectors and nuisance-biting mosquitoes. The average percentage protection against biting mosquitoes over four hours in the SFS and field setting was determined. A Poisson regression model was then used to determine relative risk of being bitten when wearing either of these repellents compared to the placebo. Results: Average percentage protection of the lotion-based 15% DEET repellent after four hours of mosquito collection was 82.13% (95% CI 75.94-88.82) in the semi-field experiments and 85.10% (95% CI 78.97-91.70) in the field experiments. Average percentage protection of 15% DEET in ethanol after four hours was 71.29% (CI 61.77-82.28) in the semi-field system and 88.24% (84.45-92.20) in the field. Conclusions: Semi-field evaluation results were comparable to full-field evaluations, indicating that such systems could be satisfactorily used in measuring efficacy of topically applied mosquito repellents, thereby avoiding risks of exposure to mosquito-borne pathogens, associated with field testing. abstract_id: PUBMED:27997936 A review of recommendations on the safe and effective use of topical mosquito repellents. Mosquito-borne disease is an annual problem in Australia, with endemic pathogens such as Ross River virus infecting thousands of people each year. The recent emergence of Zika virus in South America and the Pacific, together with ongoing outbreaks of dengue viruses in Southeast Asia, generated great community interest in the most effective strategies to avoid mosquito bites. Large-scale mosquito control programs are not common in Australia and are limited in New South Wales (NSW). 
The use of topical insect repellents is a key recommendation by health authorities to prevent mosquito-borne disease. All products sold in Australia purporting to repel mosquitoes must be registered with the Australian Pesticides and Veterinary Medicines Authority. Despite around 100 commercial products registered as repelling mosquitoes, there are relatively few active ingredients used across these formulations. The most common are diethyltoluamide (DEET), picaridin, p-menthane-3,8-diol (PMD) and a range of plant-derived products (e.g. melaleuca, eucalyptus, citronella oils). Research has shown that each of these active ingredients varies in the duration of protection provided against biting mosquitoes. Recommendations by health authorities are informed by this research, but inconsistencies between recommendations and available repellent formulations and their concentration of active ingredients can cause confusion in the community. There are conflicts between the data resulting from scholarly research, marketing promotion by manufacturers and recommendations provided by overseas health authorities. A review was undertaken of NSW Health's current recommendations on choosing and using insect repellents, taking into consideration recent research and currently registered topical repellents. abstract_id: PUBMED:15119071 Evaluation of botanicals as repellents against mosquitoes. Repellent properties of three plant extracts--essential oil (steam distillate) of Zanthoxylum limonella (fruits), Citrus aurantifolia (leaf) and petroleum ether extract of Z. limonella (fruits) were evaluated as repellent against Aedes (S.) albopictus mosquitoes in mustard (Dhara) and coconut (Parachute) oil base under laboratory conditions. Three concentrations--10, 20 and 30% of the repellents were evaluated. Repellents in mustard oil afforded longer protection time against the bites of Aedes (S.) albopictus mosquitoes than those in coconut oil. 
At 30% concentration, 296-304 min protection time was achieved by the test repellents in mustard oil base while repellents in coconut oil exhibited 223.5-245 min protection time at the same concentration. Oil of Z. limonella gave the highest protection time against the bites of Aedes (S.) albopictus mosquitoes at all concentrations, compared with the other herbal repellents tested, in both mustard and coconut oil. abstract_id: PUBMED:26574048 Factors influencing the use of topical repellents: implications for the effectiveness of malaria elimination strategies. In Cambodia, despite an impressive decline in prevalence over the last 10 years, malaria is still a public health problem in some parts of the country. This is partly due to vectors that bite early and outdoors reducing the effectiveness of measures such as Long-Lasting Insecticidal Nets. Repellents have been suggested as an additional control measure in such settings. As part of a cluster-randomized trial on the effectiveness of topical repellents in controlling malaria infections at community level, a mixed-methods study assessed user rates and determinants of use. Repellents were made widely available and Picaridin repellent reduced 97% of mosquito bites. However, despite high acceptability, daily use was observed to be low (8%) and did not correspond to the reported use in surveys (around 70%). The levels of use aimed for by the trial were never reached as the population used it variably across place (forest, farms and villages) and time (seasons), or in alternative applications (spraying on insects, on bed nets, etc.). These findings show the key role of human behavior in the effectiveness of malaria preventive measures, questioning whether malaria in low endemic settings can be reduced substantially by introducing measures without researching and optimizing community involvement strategies. abstract_id: PUBMED:26202080 Reception of odors and repellents in mosquitoes.
Mosquitoes use their sense of smell to find hosts, nectar, and oviposition sites, and to avoid repellents. A small number of mosquito species are adapted to feed on humans and have a major impact on public health by transmitting diseases such as malaria, dengue and filariasis. The application of odorants for behavioral control has not been fully realized yet due to the complexity of the mosquito olfactory system. Recent progress in molecular and computational tools has enabled rigorous investigations of the mosquito olfactory system function and has started to reveal how specific receptors contribute to attractive and aversive behaviors. Here we discuss recent advances in linking odors to receptors and in exploiting this knowledge in finding attractants and repellents for mosquitoes. abstract_id: PUBMED:25413142 Are topical insect repellents effective against malaria in endemic populations? A systematic review and meta-analysis. Background: Recommended vector control tools against malaria, such as long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS), mainly target mosquitoes that rest and feed on human hosts indoors. However, in some malaria-endemic areas, such as Southeast Asia and South America, malaria vectors primarily bite outdoors, meaning that LLINs and IRS may be less effective. In these situations the use of topical insect repellents may reduce outdoor biting and morbidity from malaria. A systematic review and meta-analysis was conducted to assess the efficacy of topical insect repellents against malaria. Methods: Studies were identified using database searches (MEDLINE, EMBASE, Web of Science and clinical trials registers), as well as reference list searches and contact with researchers. Randomized and non-randomized controlled trials were included that assessed the effect of topical repellents (all active ingredients and concentrations) on Plasmodium falciparum or Plasmodium vivax malaria or infection in malaria-endemic populations.
Meta-analysis of clinical data was conducted in order to generate summary risk ratios. Results: Ten trials met the inclusion criteria. Studies varied in terms of repellent active ingredient and formulation, co-interventions, study population, compliance, and follow-up period. Topical repellents showed an 18% protective efficacy against P. falciparum malaria, although this was not significant (95% CI: -8%, 38%). Similarly, the average protective efficacy of topical repellents against P. vivax malaria did not reach significance (protective efficacy: 20%, 95% CI: -37%, 53%). Exclusion of non-randomized trials from the meta-analysis did not alter the findings. Conclusions: Although topical repellents can provide individual protection against mosquitoes, the results of this meta-analysis indicate that topical repellents are unlikely to provide effective protection against malaria. However, there was substantial heterogeneity between studies included and the relatively small number of studies meant that this heterogeneity could not be fully explored in the analysis. Further well-designed trials of topical repellents at appropriate doses and alternative modes of repellent delivery, such as spatial repellents and long-lasting insecticide-treated clothing, are required. abstract_id: PUBMED:18766378 Celery-based topical repellents as a potential natural alternative for personal protection against mosquitoes. Celery-based products were investigated for chemical composition, skin irritation, and mosquito repellency in comparison to commercial repellents and the standard chemical, N,N-diethyl-3-methylbenzamide (DEET), with a goal to develop a natural alternative to synthetic repellents for protection against mosquitoes. Chemical identification by gas chromatography coupled with mass spectrometry discovered that the major constituents of Apium graveolens hexane extract (AHE) were 3-n-butyl-tetrahydrophthalide (92.48%), followed by 5.10% beta-selinene and 0.68% gamma-selinene. 
Evaluation of skin irritation in 27 human volunteers revealed no irritant potential from 25% ethanolic AHE solution. Laboratory-investigated repellency against female Aedes aegypti mosquitoes demonstrated that the G10 formula, the best AHE-developed product, provided remarkable repellency with a median protection time of 4.5 h (4.5-5 h), which was greater than that of ethanolic DEET solution (25% DEET, 3.5 h) and comparable to that of the best commercial repellent, Insect Block 28 (28.5% DEET, 4.5 h). According to significantly promising results, including highly effective repellency and no potential skin irritation or other side effects, the G10 formula is a worthwhile product that has the promise of being developed for commercialized registration. This developed AHE product could be an acceptable and affordable alternative to conventional synthetic chemicals in preventing mosquito bites, and in turn, helping to interrupt mosquito-borne disease transmission. abstract_id: PUBMED:1142085 Topical mosquito repellents VII: alkyl triethylene glycol monoethers. Normal and branched-chain aliphatic monoethers of triethylene glycol are effective topical mosquito repellents. In terms of duration of protection, they are generally superior to the corresponding diethylene glycol analogs and some are superior to diethyltoluamide. The n-heptyl monoether of triethylene glycol affords double the protection time of diethyltoluamide under controlled laboratory conditions and appears to be a useful new mosquito repellent. Answer: Yes, topical repellents can divert mosquitoes within a community. A study conducted in three Tanzanian villages found that when repellents were used by some but not all households, those not using repellents experienced a higher number of mosquitoes in their dwellings.
Specifically, in scenarios where 80% of households used 15%-DEET and 20% used a placebo, the placebo-using households had over four times more mosquitoes resting in their dwellings compared to households in a village where nobody used repellent (PUBMED:24376852). This suggests that mosquitoes are diverted from households that use repellent to those that do not, highlighting the importance of maximizing coverage when considering repellents for vector control to avoid health equity issues within communities.
Instruction: Variation in small bowel length: factor in achieving total enteroscopy? Abstracts: abstract_id: PUBMED:24861190 Variation in small bowel length: factor in achieving total enteroscopy? Background And Aim: Estimation of small bowel length is of interest following the recent development of device-assisted enteroscopy. This new technology allows access to the deep small bowel, but rates of examination of the entire small bowel (total enteroscopy) differ between study populations. Variation in small bowel length could factor into this observed irregularity in total enteroscopy rates. Medical literature contains limited information regarding small bowel length in living patients and conflicting data regarding small bowel length and its relationship to height and weight. We carried out small bowel measurements on surgical patients to further define the total length of the small bowel and its relationship to height, weight and body mass index (BMI). Methods: Measurement of ileojejunal length on 91 surgical patients undergoing laparotomy for routine indications. Demographic data were collected for each subject, including height, weight and BMI. Results: Small bowel length was found to vary widely between individuals (average 998.52 cm, range 630-1510 cm). Linear regression analysis demonstrated a statistically significant relationship between small bowel length and height (regression coefficient = 0.0561, P-value = 0.0238). A linear relationship between small bowel length and weight or BMI was not observed. Conclusions: Length of the small bowel in humans is pertinent to advances in deep enteroscopy and existing surgical applications such as intestinal bypass and prevention of short gut syndrome. If average small bowel length varies with height, total enteroscopy may be easier to achieve in patients who are short in stature. abstract_id: PUBMED:27098815 Exploring the Small Bowel: Update on Deep Enteroscopy. 
Deep enteroscopy allows for the diagnosis and treatment of small bowel disorders that historically required operative intervention. There are a variety of endoscopic platforms using balloons and rotational overtubes to facilitate small bowel intubation and even allow for total enteroscopy. Obscure gastrointestinal bleeding is the most common indication for deep enteroscopy. By visualizing segments of the small bowel not possible through standard EGD or push enteroscopy, deep enteroscopy has an established high rate of identification and treatment of bleeding sources. In addition to obscure bleeding, other common indications include diagnosis and staging of Crohn's disease, evaluation of findings on capsule endoscopy and investigation of possible small bowel tumors. Large endoscopy databases have shown deep enteroscopy to be not only effective but safe. Recent research has focused on comparing the diagnostic rates, efficacy, and total enteroscopy rates of the different endoscopic platforms. abstract_id: PUBMED:33425538 Retrograde NaviAid Enteroscopy-Assisted Resection of Distal Small Bowel Hamartomatous Polyps. Hamartomatous polyps are gastrointestinal tumors that may present with small bowel obstruction requiring surgical resection, while others may present earlier as symptomatic anemia prior to becoming an obstructing mass. Video capsule endoscopy has enhanced the early detection of small bowel lesions. However, endoscopic interventions especially for distal small bowel lesions are limited due to long procedure times, technical challenges in achieving depth of insertion, and the requirement of specialized deep enteroscopy equipment with advanced endoscopy training, which are not always available. Therefore, surgical intervention is often required. 
NaviAid-assisted enteroscopy, a novel through-the-scope balloon, results in deep anterograde and retrograde intubation of the small intestine using a standard colonoscope and can be used for rapid therapeutic intervention. We present two cases of distal small bowel hamartomas which were resected via retrograde NaviAid-assisted enteroscopy, thus preventing surgery.
This brief review highlights the state-of-the-art in this arena and aims to summarise the current and potential future role of these technologies in clinical practice. abstract_id: PUBMED:32694897 Impact of enteroscopy on diagnosis and management of small bowel tumors. Small bowel tumors (SBTs) have been increasingly diagnosed in recent decades. The pathogenesis of this increment is largely unknown, but advances in radiological and endoscopic methods facilitate the improvement of the diagnosis. Capsule endoscopy (CE) and device-assisted enteroscopy (DAE) allow the clinician to assess the entire small bowel in the search for suspicious lesions, or a cause of symptoms. In this review, we discuss the role of enteroscopy, techniques and strategies in the diagnosis and management of SBTs, and a brief description of the most common tumors. abstract_id: PUBMED:32746537 Roles of Capsule Endoscopy and Balloon-Assisted Enteroscopy in the Optimal Management of Small Bowel Bleeding. The small bowel had long been considered a dark unapproachable tunnel until the invention of capsule endoscopy and doubleballoon enteroscopy in the 21st century, which revolutionized the diagnosis and management of small bowel diseases, including bleeding. Various imaging modalities such as computed tomographic enterography, angiography, capsule endoscopy, and balloonassisted enteroscopy play vital roles in the diagnosis and management of small bowel bleeding. The choice of modality to use and timing of application differ according to the availability of the modalities, patient's history, and physician's experience. Small bowel bleeding is managed using different strategies as exemplified by medical treatment, interventional radiology, endoscopic therapy, or surgical intervention. Balloon-assisted enteroscopy enables endoscopic interventions to control small bowel bleeding, including electrocautery, argon plasma coagulation, clip application, and tattooing as a prelude to surgery. 
In this article, we clarify the recent approaches to the optimal diagnosis and management of patients with small bowel bleeding. abstract_id: PUBMED:29017295 Balloon-Assisted Enteroscopy and Capsule Endoscopy in Suspected Small Bowel Crohn's Disease. Inflammatory bowel diseases are idiopathic inflammatory diseases of two main types, Crohn's disease and ulcerative colitis. Crohn's disease can affect the entire gastrointestinal tract, and the distal ileum is involved in up to 70% of patients. Moreover, in one-quarter to one-third of patients, Crohn's disease involves the small bowel in isolation. Due to the nonspecific symptoms and anatomical location of the disease, small bowel Crohn's disease is a phenotype that is particularly difficult to manage. Since the introduction of capsule endoscopy in 2000 and balloon-assisted enteroscopy in the 21st century, it is now possible to directly inspect for small bowel Crohn's disease. However, the new modalities still have limitations, such as capsule retention and invasiveness of balloon-assisted enteroscopy. The diagnostic yields of both capsule endoscopy and balloon-assisted enteroscopy are high for patients with suspected small bowel Crohn's disease. Therefore, earlier use of capsule endoscopy or balloon-assisted enteroscopy can help with the diagnosis and earlier treatment of these patients to avert possible disastrous outcomes. abstract_id: PUBMED:32999905 Small Bowel Enteroscopy - A Joint Clinical Guideline from the Spanish and Portuguese Small Bowel Study Groups. The present evidence-based guidelines are focused on the use of device-assisted enteroscopy in the management of small-bowel diseases. A panel of experts selected by the Spanish and Portuguese small bowel study groups reviewed the available evidence focusing on the main indications of this technique, its role in the management algorithm of each indication and on its diagnostic and therapeutic yields. A set of recommendations was issued accordingly.
abstract_id: PUBMED:24713796 Small bowel enteroscopy in Crohn's disease. Endoscopic assessment of the small bowel is difficult because of its long and tortuous anatomy. However, recent developments have greatly improved the insertion depth and diagnostic yield, by means of device-assisted enteroscopy (DAE). Therefore, DAE may be of specific interest in the diagnostic and therapeutic approach of patients with inflammatory bowel disease. It may be of help in the diagnostic assessment of intestinal disease extent and severity and complications, with an impact on the therapeutic management. Moreover, local treatment within the small bowel is also feasible with DAE. This review aims to provide an overview of the currently available literature data on the use of enteroscopy in inflammatory bowel disease, and Crohn's disease in particular. Answer: Variation in small bowel length is indeed a factor in achieving total enteroscopy. The length of the small bowel can vary widely between individuals, with an average length of 998.52 cm and a range from 630 to 1510 cm (PUBMED:24861190). This variability can influence the success rate of total enteroscopy, which is the examination of the entire small bowel using device-assisted enteroscopy. A statistically significant relationship has been found between small bowel length and height, suggesting that total enteroscopy may be easier to achieve in patients who are shorter in stature (PUBMED:24861190). Deep enteroscopy, which includes techniques such as double-balloon enteroscopy, single-balloon enteroscopy, and spiral enteroscopy, has been developed to allow for diagnosis and treatment of small bowel disorders and has a high rate of identification and treatment of bleeding sources (PUBMED:27098815). However, achieving total enteroscopy remains challenging due to the length and complexity of the small bowel, and the success rates can vary depending on the endoscopic platform used (PUBMED:35004760). 
The introduction of capsule endoscopy and device-assisted enteroscopy has revolutionized the imaging and minimally invasive therapy of the small bowel, but the ability to achieve total enteroscopy is still considered the "holy grail" in enteroscopy (PUBMED:28839900). The variation in small bowel length among individuals is one of the factors that can impact the ability to perform a complete examination of the small bowel, which is critical for the diagnosis and management of various conditions, including small bowel tumors, Crohn's disease, and obscure gastrointestinal bleeding (PUBMED:32694897, PUBMED:32746537, PUBMED:29017295, PUBMED:24713796).
Instruction: MR colonography with fecal tagging: do individual patient characteristics influence image quality? Abstracts: abstract_id: PUBMED:17457810 MR colonography with fecal tagging: do individual patient characteristics influence image quality? Purpose: To evaluate if different patient characteristics influence performance of fecal tagging (a new MR colonography (MRC) technique to label stool to avoid bowel cleansing) and, consequently, MR image quality. Materials And Methods: A total of 333 patients (mean age = 61 years) underwent MRC with fecal tagging. Four segments of the large bowel (ascending, transverse, descending, and sigmoid colon) were assessed for the presence of nontagged stool particles, which can impede an assessment of the colonic wall. Ratings were correlated with patients' characteristics including patient age (<55 vs. > or =55 years), body mass index (BMI) (<25 vs. > or =25), gender, and acceptance levels for fecal tagging. Statistical analysis was performed using a Mann-Whitney U-test. Results: A total of 1332 colonic segments were evaluated. Among them, 327 segments (25%) did not contain any visible stool particles. Considerably reduced image quality was found in 61 segments (5%). Best image quality was found in the sigmoid colon (mean value = 1.9), while image quality of the ascending colon turned out to be worst (mean value = 2.6). Fecal tagging effectiveness showed an inverse correlation with patient age. However, the other characteristics did not have a statistically significant influence on fecal tagging outcome. Conclusion: MRC in conjunction with barium-based fecal tagging led to diagnostic image quality in 95% of all colonic segments. Since tagging results were significantly decreased in patients > or =55 years, tagging protocols should be modified in this group, i.e., by increasing the time interval of tagging administration.
abstract_id: PUBMED:15022985 MR Colonography MR Colonography is a new noninvasive technique for colonic imaging. The first studies in man appeared in 1997-1998. MR Colonography requires an endoluminal contrast agent and combines volumetric acquisition with sophisticated image post-processing referred to as virtual endoscopy. New technical developments such as fecal tagging were recently introduced. MR Colonography now enters a phase of clinical evaluation. This article describes the available technique as well as pitfalls and artifacts with regard to our own experience. abstract_id: PUBMED:27209295 MR colonography with a fecal tagging technique and water-based enema for the assessment of inflammatory bowel disease. Purpose: To assess the usefulness of MR colonography (MRC) with a fecal tagging technique and water-based enema in patients with inflammatory bowel disease (IBD). Materials And Methods: Twenty-two patients with suspected or known IBD underwent MRC with a 1.5-T MR system (Siemens Symphony) using a phased-array coil. A fecal tagging technique was performed by oral administration of dense barium sulfate (200 ml) at major meals starting 2 days before the MRI. After a water enema (2000-2500 ml), the MR protocol was carried out, starting with HASTE and true-FISP sequences. Coronal T1w 3D VIBE (2.0 mm thick) was obtained before and 60 s after intravenous administration of Gd chelate. MR images were evaluated by consensus agreement of two observers in terms of image quality and by searching for bowel abnormalities. MRC findings were correlated with our gold standard, conventional colonoscopy (incomplete in 6/22 patients) with histopathological analysis, and surgery (performed in 8/22 subjects). Results: The MR imaging protocol was completed in all of the investigated subjects.
In terms of image quality, 128 out of 132 colon segments (97 %) were considered diagnostic on MR examinations by the two reviewers; signs of bowel inflammation were identified in 74 and 72 % of colon segments of patients with ulcerative colitis (n = 6) and Crohn's disease (n = 15), respectively. In 13/15 patients with Crohn's disease, involvement of both the large and small bowel was demonstrated on MR imaging; perianal abscesses and fistulas were also identified in 2 and 3 of these patients, respectively. In one patient with normal MRC, a diagnosis of IBD could not be confirmed. Conclusion: MRC with a fecal tagging technique and water-based enema is a promising minimally invasive technique for evaluating the bowel in patients with a suspected or established diagnosis of IBD. abstract_id: PUBMED:18562740 Image quality and patient acceptance of four regimens with different amounts of mild laxatives for CT colonography. Objective: The purpose of our study was to prospectively evaluate image quality and patient acceptance of CT colonography (CTC) with fecal tagging using different levels of catharsis. Subjects And Methods: Forty consecutive increased-risk patients were randomized. Group 1 received orally 20 mg of bisacodyl, group 2 received 30 mg of bisacodyl, group 3 received 20 mg of bisacodyl and 8.2 g of magnesium citrate, and group 4 received 30 mg of bisacodyl and 16.4 g of magnesium citrate. All patients used a 2-day low-fiber diet and received diatrizoate meglumine and barium for fecal tagging. One reviewer blindly scored subjective image quality (fecal tagging, amount of residual feces [liquid or solid], luminal distention, and image readability) on a 5- to 6-point scale using a 2D review technique. The mean and SD of attenuation of tagging were measured as well as the relative SD as a measure of homogeneity. Furthermore, patient acceptance (burden related to diarrhea, abdominal pain, flatulence, and overall burden) was evaluated. 
Ordinal regression, generalized estimating equations, and parametric and nonparametric tests were used for analysis. Results: Image readability was evaluated as good or excellent in all examinations except one in group 2 (nondiagnostic) and two in group 3 (moderate). Group 2 contained more feces than group 4 (p = 0.04). With regard to mean attenuation and homogeneity of tagging, no significant differences were observed between groups. Group 4 experienced more severe diarrhea than groups 1 and 2 and higher overall burden than groups 1 and 3 (p < 0.042). Conclusion: The mildest preparation with 20 mg of bisacodyl provided good image quality of CTC images. Increasing the amount of laxatives did not improve image quality or tagging characteristics but was associated with a lower patient acceptance. abstract_id: PUBMED:18393151 MR-colonography in hospitalized patients: feasibility and sensitivity Introduction: Besides conventional colonoscopy, CT and MR colonography offer alternate virtual imaging modalities of the colon. The sensitivity of CT colonography, which is associated with radiation exposure, has been published in prior, large studies. Regarding MR colonography, in particular dark lumen MR colonography with the rectal administration of a water enema as a contrast agent, only limited published data exist. The goal of this study was to compare MR colonography with conventional colonoscopy in the detection of colorectal polyps. In addition the feasibility and image quality in unselected hospitalised patients were assessed. Patients/methods: Included were 103 hospitalised patients who had to undergo colonoscopy for various indications. Immediately prior to conventional colonoscopy, MR colonography with rectal water enema and additional intravenous administration of contrast material was performed. Detection rates for polyps and adenomas were documented with both imaging modalities. 
Image quality and completion rates (practicability) and other (incidental) findings were also recorded. Results: In 15 of 103 patients the MR examination could not be done or was only partially completed. The detection rate of MR colonography for polyps (adenomas) was 2% (4%) for polyps (adenomas) up to 5 mm in diameter, 38% (56%) for polyps (adenomas) 6-10 mm in diameter and 89% (89%) for polyps (adenomas) of 11 mm or more in diameter. One flat carcinoma seen with conventional colonoscopy was missed on MR colonography. Conclusions: MR colonography offers the possibility of imaging the colon without exposure to radiation. Polyps and adenomas are detected, similar to the detection rate of CT colonography, with adequate sensitivity only if they are larger than 10 mm in diameter. Therefore this imaging technique is not (yet) suitable as a screening test. An additional limitation is the necessary cooperation of the patient, which can reduce practicability and image quality in selected patients. Further studies, like the recently started German multicentre trial, are needed to assess the position of MR colonography. abstract_id: PUBMED:12594545 MR colonography with fecal tagging: comparison between 2D turbo FLASH and 3D FLASH sequences. The objective of this study was to compare inversion recovery turbo 2D fast low-angle shot (FLASH) and 3D FLASH sequences for fecal-tagged MR colonography studies. Fifteen consecutive patients with indications for colonoscopy underwent MR colonography with fecal tagging. An inversion recovery turbo-FLASH sequence was applied and compared in terms of artifact presence, efficiency for masking residual stool, and colonic wall conspicuity with a fat-saturated 3D FLASH sequence. Both sequences were acquired following administration of a paramagnetic contrast agent. Contrast-to-noise ratio and relative contrast between colonic wall and lumen were calculated and compared for both sequences.
Turbo 2D FLASH provided fewer artifacts, higher efficiency for masking the residual stool, and colonic wall conspicuity equivalent to 3D FLASH. An inversion time of 10 ms provided homogeneously low signal intensity of the colonic lumen. Contrast-to-noise ratio between colonic wall and lumen was significantly higher in the 3D FLASH images, whereas differences in relative contrast were not statistically significant. An optimized inversion-recovery 2D turbo-FLASH sequence provides better fecal tagging results and should be added to the 3D FLASH sequence when designing dark-lumen MR colonography examination protocols. abstract_id: PUBMED:20083594 Current status of MR colonography. The search for an acceptable colorectal cancer screening examination has led to the development of virtual colonoscopy, which includes both computed tomographic (CT) colonography and magnetic resonance (MR) colonography. As indicated by the much larger number of published studies on CT colonography than on MR colonography, multidetector CT appears to be more suitable for colorectal screening than does MR colonography, in part reflecting the ease and speed of performing CT, as well as the increased spatial resolution, decreased cost, and wider availability of CT colonography. The main advantage of MR colonography over CT colonography is that it does not use ionizing radiation, which has important implications for colorectal cancer screening. The use of dark-lumen MR colonography to screen patients for colorectal cancer as well as other abdominopelvic disease could make it more attractive than CT. With the integration of 3.0-T MR colonography, fecal tagging, and parallel imaging into research and clinical settings, new MR colonography protocols must be optimized. Future MR colonography research should address issues such as image characteristics, presence of artifacts, management of specific absorption rate, and hardware-related modifications.
abstract_id: PUBMED:37558647 Comparison of Diatrizoate and Iohexol for Patient Acceptance and Fecal-Tagging Performance in Noncathartic CT Colonography. Objective: The aim of this study was to compare diatrizoate and iohexol regarding patient acceptance and fecal-tagging performance in noncathartic computed tomography colonography. Methods: This study enrolled 284 volunteers with fecal tagging by either diatrizoate or iohexol at an iodine concentration of 13.33 mg/mL and an iodine load of 24 g. Patient acceptance was rated on a 4-point scale of gastrointestinal discomfort. Two gastrointestinal radiologists jointly analyzed image quality, fecal-tagging density and homogeneity, and residual contrast agent in the small intestine. The results were compared by the generalized estimating equation method. Results: Patient acceptance was comparable between the 2 groups (3.95 ± 0.22 vs 3.96 ± 0.20, P = 0.777). The diatrizoate group had less residual fluid and stool than the iohexol group (P = 0.019 and P = 0.004, respectively). There was no significant difference in colorectal distention, residual fluid, and stool tagging quality between the 2 groups (all P values > 0.05). The mean 2-dimensional image quality score was 4.59 ± 0.68 with diatrizoate and 3.60 ± 1.14 with iohexol (P < 0.001). The attenuation of tagged feces was 581 ± 66 HU with diatrizoate and 1038 ± 117 HU with iohexol (P < 0.001). Residual contrast agent in the small intestine was assessed at 55.3% and 62.3% for the diatrizoate group and iohexol group, respectively (P = 0.003). Conclusions: Compared with iohexol, diatrizoate had better image quality, proper fecal-tagging density, and more homogeneous tagging along with comparable excellent patient acceptance, and might be more suitable for fecal tagging in noncathartic computed tomography colonography. abstract_id: PUBMED:11665923 Magnetic resonance colonography for colorectal cancer screening. Colorectal cancer screening has vast potential.
Beyond considerations of cost and diagnostic accuracy, the effectiveness of any colorectal screening strategy will be dependent on the degree of patient acceptance. Magnetic resonance (MR) colonography has been shown to be accurate regarding the detection of clinically relevant colonic polyps exceeding 10 mm in size, with reported sensitivity and specificity values exceeding 95%. To further enhance patient acceptance, strategies for fecal tagging have recently been developed. By modulating the signal of fecal material to be identical to the signal characteristics of the enema applied to distend the colon, fecal tagging in conjunction with MR colonography obviates the need for bowel cleansing. This article will describe the techniques underlying MR colonography and early clinical experience with fecal tagging techniques. abstract_id: PUBMED:15346285 Two-dimensional parallel acquisition technique in 3D MR colonography. Purpose: In-vitro and in-vivo examinations to investigate the influence of one-dimensional (1D) and two-dimensional (2D) parallel acquisition techniques (PAT) on contrast-to-noise ratio (CNR) and image quality for bright-lumen 3D MR colonography. Materials And Methods: In-vitro measurements were performed on a 1.5 T whole-body MR scanner (MAGNETOM AVANTO(R), Siemens AG, Erlangen) with a standard spoiled 3D gradient-echo (3D GRE) sequence and a volume interpolated 3D GRE (VIBE) sequence using a home-built colon phantom. The relative CNR was determined and image quality evaluated for different acceleration factors (PAT factors). A bright-lumen 3D MR colonography with PAT factors 2, 3, 4 and 6 was performed on a 39-year-old volunteer. The 3D data sets were compared with regard to image quality. Results: The mean relative CNR values for the in-vitro measurements were 1, 0.81, 0.73, 0.52 and 0.4 (3D GRE) and 1, 0.8, 0.65, 0.45 and 0.3 (VIBE) for the PAT factors 1, 2, 3, 4 and 6, respectively.
Residual aliasing artifacts, which did not distinctly affect image quality, were visible for 2D PAT, especially for the VIBE sequence. Increasing the PAT factor up to 6 and decreasing the acquisition time down to 10 seconds for the bright-lumen 3D MR colonography could achieve adequate image quality with significantly reduced image artifacts caused by peristalsis and pulsations. Discussion: Even for high PAT factors up to 6, 2D PAT only leads to a moderate CNR loss. For a 3D MR colonography, distinctly shorter acquisition times can be achieved with identical resolution. Answer: Yes, individual patient characteristics can influence the image quality of MR colonography with fecal tagging. A study evaluating the performance of fecal tagging in MR colonography (MRC) across 333 patients found that patient age had a statistically significant impact on the effectiveness of fecal tagging. Specifically, the effectiveness of fecal tagging showed an inverse correlation with patient age, with significantly decreased results in patients aged 55 years and older. This suggests that tagging protocols may need to be modified for this age group, potentially by increasing the time interval of tagging administration (PUBMED:17457810).
Instruction: Can face-to-face patient education be replaced by computer-based patient education? Abstracts: abstract_id: PUBMED:17448621 Can face-to-face patient education be replaced by computer-based patient education? A randomised trial. Objective: Computer-based patient education is an effective (therapeutic) tool. More and more studies prove that it can be an effective additive for face-to-face education, but it is still unclear if it is possible to educate patients by only using a computer. Our objective was to compare knowledge levels and satisfaction scores after computer-based patient education versus doctor-based patient education. Methods: We started a randomised and stratified trial in a large plastic surgery clinic in the Netherlands. One hundred thirteen patients with carpal tunnel syndrome (CTS), scheduled for operation were included. One group received CTS education by a doctor using a checklist (group A). The other group received the same CTS education by specially designed interactive computer program (group B). After 2 weeks, knowledge levels and satisfaction scores were tested by using a questionnaire. Results: One hundred thirteen patients were included, 96 patients finished the questionnaire. With respect to knowledge levels the doctor group (A) scored 20.2 points (max 40 points), and the computer group (B) scored 23.5 points. After correction the difference was 2.8 points (p=0.001). The total satisfaction score in group A was 7.6 and in group B 7.5; a difference of -0.16 (p=0.585). Conclusions: The results suggest that educating patients can result in a higher knowledge levels by using a computer than by using a doctor. Satisfaction seems equal after both methods of education. Practice Implications: Knowing that you can educate patients without seeing a doctor or nurse seems promising. Patient can be educated by computer-based education programs leaving more time in a consultation for asking questions and discussing treatment options. 
The discussion will be at a more equal level of knowledge. Possibly it can save time and money resulting in a more efficient patient care, without dissatisfying our patients. abstract_id: PUBMED:37397948 The COVID-19 pandemic triggered a change in continuing education in nursing: From face-to-face to online education. Prior to and up to 2019, the Training Center for Nursing Development, National College of Nursing, Japan (NCNJ) had provided education for nurses working at policy-based medical facilities in a face-to-face format. Since 2020 however, all on-campus courses have been cancelled due to the coronavirus disease (COVID-19) pandemic. The nursing directors of all participating facilities were subsequently surveyed; based on their responses, online education was implemented on a trial basis. As a result, all training since 2021 has been provided as online education. Online education has many advantages, such as there is no risk of contracting COVID-19 or other infectious diseases, no need for commuting or accommodations, courses can be attended remotely, and time can be used effectively. That said, there are some disadvantages. Potential improvements should be identified in the future. abstract_id: PUBMED:19502896 Comparison of Web-based and face-to-face training concerning patient education within a hospital system. The authors examined education records of 1,661 registered nurses from four affiliated hospitals who attended either a 1-hr face-to-face instructor-led training program, an instructor-facilitated Web-based training program, or an independent Web-based training program to fulfill a training requirement concerning patient education. The authors compared post-test results, course evaluation results, and costs and also developed a demographic profile of nurses' preferences for Web-based and face-to-face learning options. 
There was no significant difference in course effectiveness or satisfaction between the training methods studied; however, because of the large number of nurses who selected Web-based training, it was the most cost-effective alternative. abstract_id: PUBMED:38132003 An Exploration of Nurses' Experience Following a Face-to-Face or Web-Based Intervention on Patient Deterioration. A web-based clinical simulation program, known as FIRST2ACT (Feedback Incorporating Review and Simulation Techniques to Act on Clinical Trends), was designed to increase the efficacy of clinicians' actions in the recognition and immediate response to a patient's deterioration. This study, which was nested in a larger mixed method project, used ten focus groups (n = 65) of graduate, enrolled, registered nurses, associate nurse unit managers, and general managers/educators/coordinators from four different institutions to investigate whether nurses felt their practice was influenced by participating in either a face-to-face or web-based simulation educational programme about patient deterioration. The results indicate that individuals who were less "tech-savvy" appreciated the flexibility of web-based learning, which increased their confidence. Face-to-face students appreciated self-reflection through performance evaluation. While face-to-face simulations were unable to completely duplicate symptoms, they showed nurses' adaptability. Both interventions enhanced clinical practice by improving documentation and replies while also boosting confidence and competence. Web learners initially experienced tech-related anxiety, which gradually subsided, demonstrating healthcare professionals' resilience to new learning approaches. Overall, the study highlighted the advantages and challenges of web-based and face-to-face education in clinical practice, emphasising the importance of adaptability and reflective learning for healthcare professionals. 
Further exploration of specific topics is required to improve practice, encourage knowledge sharing among colleagues, and improve early detection of patient deterioration. abstract_id: PUBMED:25679246 Individual face-to-face tutorials: the value in nurse education. The individual, face-to-face tutorial is one means by which academic support can be provided to students in higher education. This mode of tutorial support has been deemed effective but it can be considered labour-intensive, which is a concern in the UK with the recession currently impacting on higher education institutions. Nevertheless, with increasing student fees come higher student expectations. With all UK pre-registration nursing study programmes at degree level and with the emphasis on widening access, students may require additional academic support to ensure success. This study aimed to examine the value of individual, face-to-face tutorials for academic support in nurse education. A descriptive and exploratory design was used, mixing qualitative and quantitative methods. The survey approach employed a web-based, self-completion questionnaire, which was distributed to lecturers and pre-registration student nurses in a UK university. Following analysis of the questionnaire results, students were invited to attend a group interview. Findings highlighted the importance of individual, face-to-face tutorials with qualitative data supplying detailed accounts regarding their value. abstract_id: PUBMED:3906726 Computer-based patient education. The increased exposure of health professionals to computers and the decreased cost of this technology make computer-based instruction an increasingly attractive alternative for patient education. This article presents an overview of the present status of computer-based patient education and suggests guidelines for the development and use of such educational materials. 
It emphasizes general techniques applicable to a wide range of patients and conditions and includes a list of resources available to the interested health professional. abstract_id: PUBMED:8134319 The potential of computers in patient education. Typical computer programs for patient education are didactic and fail to tailor information to an individual's specific needs. New technology greatly enhances the potential of computers in patient education. Computer-assisted instruction programs can now elicit information from users before leading them through problem-solving exercises. New authoring systems enable health professionals to develop their own programs. The capacity to elicit and report back information about factors that influence patients' health behaviors give the newest computer programs one of the strengths of face-to-face patient counseling: the ability to tailor an educational message for an individual patient. These programs are not intended to replace but rather to enhance personal interaction between providers and patients. This article describes the advantages of using computers for individualizing patient education and assessing trends across groups of patients. Innovative programs and features to look for in programs and equipment selection are also described. abstract_id: PUBMED:31101427 Blended face-to-face and online/computer-based education approaches in chronic disease self-management: A critical interpretive synthesis. Objective: To review the literature on chronic disease self-management programs that blend face-to-face and online/computer-based education design and delivery. Methods: A critical interpretive synthesis was conducted. Studies that described blended chronic disease self-management programs were reviewed. Two reviewers performed independent database searches, eligibility screening, and data extraction. Findings were synthesized using a conceptual mapping process. Results: Twelve articles were included in the final review. 
Studies focused on patients with diagnoses of diabetes, asthma, and chronic obstructive pulmonary disease. Factors influencing the design and delivery of programs focused on patient characteristics (such as disease prognosis, time since diagnosis, social determinants of health, health literacy, and proficiency with online/computer-based technologies). Conclusions: Blended learning self-management programs should consider the suitability of programs in light of health conditions and patient characteristics. Individual patient needs can be identified through clinician-driven assessments, including collaborative goal setting and the selection of pertinent educational tools. Practice Implications: When considering the design and delivery of chronic disease self-management education programs, healthcare providers should consider three factors: 1) patient characteristics, 2) learning perspectives, and 3) design technology options that best meet patient abilities, clinician expertise, and administrative capacity. abstract_id: PUBMED:22812156 Patient education in France Patient education is an increasingly important component of therapeutic strategies, especially for chronic illnesses, which currently affect about 12% of the French population and will undoubtedly increase in coming years. Patient education aims to enhance patients' personal responsibility and participation in their therapeutic management and quality of life. Article 84 of French health legislation passed in 2009 inscribes patient education in the Public Health Code for the first time. It distinguishes personalized therapeutic education, patient accompaniment, and learning programs. Direct links between patients and drug companies are prohibited. However, the notion of patient accompaniment remains to be defined, along with the evaluation of patient education, funding sources and practical modalities.
abstract_id: PUBMED:37481920 The effect of online and face-to-face active learning methods on learning attitudes. Aim: This study aims to compare the effects of online and face-to-face education models using active learning methods on students' learning attitudes. Methods: This retrospective and quasi-experimental study included a total of 203 third-grade nursing students who took the Obstetrics and Gynecological Health Nursing course face-to-face and online in the fall semesters of 2019-2020 and 2020-2021. The Active Learning Methods Questionnaire and the Scale of Attitude Towards Learning (SATL) were used. Results: The mean score of active learning methods of the students receiving online education (77.35 ± 18.63) was higher than the face-to-face education group (67.00 ± 20.67). Although there was no difference between the face-to-face and online education groups in terms of the effort to learn and caring for learning, online students had a lower attitude towards learning avoidance (t: 6.540, p: 0.000). There was a negative and low-level significant correlation between the evaluation of active learning methods and the total score of SATL in the online education group (r = -0.200; p = .043) and no correlation in the face-to-face group (r: 0.004; p: 0.963). Conclusion: Online education, which uses active learning methods, positively affected students' attitudes towards learning and decreased their avoidance of learning more than face-to-face education. This study has shown that active learning methods allow online students to question theoretical knowledge, convey it to patient care, contribute to clinical knowledge, and facilitate their learning and analytical thinking, based on the more positive evaluation by online students. Answer: Based on the provided abstracts, it appears that computer-based patient education can be an effective alternative to face-to-face patient education in certain contexts.
A randomized trial comparing knowledge levels and satisfaction scores after computer-based versus doctor-based education for patients with carpal tunnel syndrome found that the computer group had higher knowledge levels and similar satisfaction scores compared to the doctor group (PUBMED:17448621). This suggests that computer-based education can result in higher knowledge levels and maintain patient satisfaction. Additionally, the COVID-19 pandemic has triggered a shift from face-to-face to online education in nursing, highlighting the advantages of online education, such as no risk of contracting infectious diseases, no need for commuting, and the ability to attend courses remotely (PUBMED:37397948). A comparison of Web-based and face-to-face training concerning patient education within a hospital system found no significant difference in course effectiveness or satisfaction between the methods, with Web-based training being the most cost-effective alternative (PUBMED:19502896). Furthermore, a study exploring nurses' experiences following a face-to-face or web-based intervention on patient deterioration indicated that both interventions enhanced clinical practice, improved confidence and competence, and that web learners' initial tech-related anxiety subsided over time (PUBMED:38132003). The value of individual face-to-face tutorials in nurse education was highlighted in another study, but it also acknowledged the labor-intensive nature of this approach (PUBMED:25679246). The potential of computers in patient education is also emphasized, with newer technology allowing for more tailored and interactive educational experiences (PUBMED:8134319). Blended learning approaches that combine face-to-face and online/computer-based education are suggested to consider patient characteristics and individual needs (PUBMED:31101427). 
In summary, while face-to-face patient education has its unique advantages and is valued in certain educational contexts, the evidence from the provided abstracts supports the notion that computer-based patient education can be a viable replacement in many cases, offering similar or even superior outcomes in terms of knowledge acquisition and satisfaction, as well as additional benefits such as cost-effectiveness and flexibility (PUBMED:17448621, PUBMED:37397948, PUBMED:19502896, PUBMED:38132003, PUBMED:25679246, PUBMED:8134319, PUBMED:31101427).
Instruction: Supracristal ventricular septal defect in adults: Is it time for a paradigm shift? Abstracts: abstract_id: PUBMED:33150104 Repair of a Supracristal Ventricular Septal Defect in an Adult. Ventricular septal defects (VSDs) are the most common congenital cardiac abnormalities, occurring in five out of every 1000 births. Supracristal VSDs (located above the crista supraventricularis) are very rare and comprise only 2%-3% of all VSDs. Many VSDs close spontaneously during childhood; however, a substantial portion do not and persist into adulthood with a myriad of symptoms. We describe the management of a complex case of an adult patient with a supracristal VSD and resultant severe aortic insufficiency (AI) in the perioperative setting. abstract_id: PUBMED:26149330 Supracristal ventricular septal defect in adults: Is it time for a paradigm shift? Background: To determine the risk and predictors of progression of aortic regurgitation (AR) and other adverse events (AE) in adults with supracristal ventricular septal defect (VSD). Methods: Retrospective observational study of 62 adults with supracristal VSD followed at the Mayo Clinic from 1994 to 2013. Freedom from AR progression was compared by age and racial groups (Asian vs non-Asian). Predictors of AR progression were determined using a Cox proportional hazards model. The composite AE endpoint was defined as AR progression, endocarditis, aneurysm of sinus of Valsalva (aSOV), and rupture of aSOV. Risk of AE was compared between group 1 (no surgical intervention) and group 2 (surgical intervention). Results: Sixty-two patients aged 47 (SD 12) years were followed for 13 (SD 5) years. Group 1 = 42/62 and group 2 = 20/62. The endpoints of AR progression and aSOV occurred in 13% and 20%, respectively. Freedom from AR progression for the entire cohort was 88%, 86% and 86% at 5, 10 and 15 years, respectively.
Freedom from AR progression was significantly lower in patients aged ≤40 years (p=0.008) but similar between Asians and non-Asians (p=0.57). Age ≤40 years was an independent predictor of AR progression (hazard ratio [HR] 3.5, 95% CI 2.3-5.4, p=0.001). The composite AE endpoint occurred in 33% and 40% of the group 1 and group 2 cohorts, respectively (p=0.69). Conclusion: Adults with supracristal VSD are at lower risk for AR progression but higher risk for aSOV formation compared with historical pediatric cohorts. Younger age was a predictor of AR progression. Surgical intervention had no protective effect on the complication rate. abstract_id: PUBMED:25109598 Bad company: supracristal VSD presenting with ruptured sinus of Valsalva aneurysm. A case presentation with echocardiographic depiction and an analysis of contemporary literature. Supracristal ventricular septal defect (SCVSD), a defect of the infundibular portion of the interventricular septum just below the right aortic cusp, occurs more frequently in Eastern Asian populations. SCVSD may be complicated by right sinus of Valsalva aneurysm (SoVA). We present the case of a 26-year-old male of Korean descent with a history of a childhood murmur who was referred to our institution for progressive heart failure symptoms. He was diagnosed with SCVSD and ruptured right SoVA based on history, physical exam, and echocardiography including three-dimensional transesophageal echocardiography with reconstructed surgical views. The patient underwent SCVSD closure, SoVA excision, and valve-sparing aortic root replacement. We reviewed the echocardiography literature regarding SCVSD and SoVA, and analyzed contemporary literature of SoVA and its relationship with SCVSD. We conclude that a higher prevalence of ruptured SoVA in Eastern Asians is likely related to a higher prevalence of underlying SCVSD in this population. abstract_id: PUBMED:403794 Supracristal ventricular septal defects: spectrum of associated lesions and complications.
The clinical records and radiographs of 46 patients with a supracristal ventricular septal defect are reviewed to illustrate the spectrum of associated lesions and complications seen with this defect. The presence of a supracristal ventricular septal defect compromises the normal conal muscular support of the aortic valve, usually in the region of the right sinus of Valsalva, and, consequently, complications involving the aortic valve are frequently noted. Decreased sinus support may lead progressively to prolapse of the right aortic cusp through the supracristal ventricular septal defect into the right ventricular infundibulum, sequentially producing aortic regurgitation, right ventricular outflow obstruction, and ultimately a right sinus of Valsalva-right ventricular fistula. This potential progression is an indication for early recognition and repair of the supracristal ventricular septal defect. A supracristal ventricular septal defect is also an integral component of the Taussig-Bing malformation, is present in the majority of patients with aortic arch interruption, and may be seen in as many as 25% of Oriental patients with tetralogy of Fallot. abstract_id: PUBMED:26504442 Unruptured Sinus of Valsalva Aneurysm with Right Ventricular Outflow Tract Obstruction and Supracristal Ventricular Septal Defect: A Rare Case. An unruptured right sinus of Valsalva aneurysm that causes severe obstruction of the right ventricular outflow tract is extremely rare. We describe the case of a 47-year-old woman who presented with exertional dyspnea. Upon investigation, we discovered an unruptured right sinus of Valsalva aneurysm with associated right ventricular outflow tract obstruction and a supracristal ventricular septal defect. To our knowledge, only 2 such cases have previously been reported in the medical literature.
Although treatment of unruptured sinus of Valsalva aneurysm remains debatable, surgery should be considered for extremely large aneurysms or for progressive enlargement of the aneurysm on serial evaluation. Surgery was undertaken in our patient because there was clear evidence of right ventricular outflow tract obstruction, right-sided heart dilation, and associated exertional dyspnea. abstract_id: PUBMED:3227324 Surgical treatment of supracristal type of ventricular septal defect. Surgically treated ventricular septal defect (VSD) was of supracristal type in 120 of 389 Chinese patients; 93 of the 120 were younger than 15 years. Concomitant aortic anomalies were present in 58 of the patients (regurgitation in 23 and cusp prolapse in 35). Corresponding figures among the 93 patients of the under-15 group were 40 (9 + 31). Direct suture or patch closure of the supracristal VSD and replacement or plication of anomalous aortic valves were the methods used. The results of direct closure were equal to those of patch closure. One patient died of subacute bacterial endocarditis, which had been present preoperatively. There were no other deaths. The postoperative observation period was 6 months to 7 years. As the incidence of associated valvulopathy increases with patient age, early operation for supracristal VSD, regardless of shunt volume, is advocated. abstract_id: PUBMED:10050799 MRI of supracristal ventricular septal defects. The MR features of two cases of supracristal ventricular septal defect are described. In both patients, axial SE T1-weighted images demonstrated a defect between the base of the aorta and the upper posterior aspect of the right ventricular infundibulum. Cine MRI in the two cases showed left-to-right shunting with a flow jet in the distal right ventricular outflow tract that propagated into the main pulmonary artery. Both patients had prolapse of the right sinus of Valsalva, and one had aortic insufficiency. 
abstract_id: PUBMED:1987722 Usefulness of color-flow Doppler in diagnosing and in differentiating supracristal ventricular septal defect from right ventricular outflow tract obstruction. Color-flow Doppler is used for spatial localization of cardiac flows. The transseptal jet in supracristal ventricular septal defect (VSD) is directed toward the pulmonary valve and is often difficult to distinguish from the right ventricular (RV) outflow tract flow. Pulsed- and continuous-wave Doppler can misread the VSD jet as a stenotic lesion in the RV outflow tract. This study describes the color-flow Doppler characteristics of supracristal VSD and determines whether color-flow Doppler can differentiate supracristal VSD jets from RV outflow tract flow. The study group comprised 28 patients ranging in age from 3 days to 23 years (mean 6.4 years), with catheter-diagnosed supracristal VSD in 14, isolated pulmonary valve stenosis in 10 and RV infundibular obstruction in 4. The echocardiographic Doppler and color-flow Doppler data of all patients were reviewed without knowledge of the catheterization diagnosis. In all patients with supracristal VSD, color-flow Doppler revealed an abnormal transseptal jet directed toward the pulmonary valve that occurred 5 to 10 ms before RV outflow tract flow was identified. The VSD and RV outflow tract jets could be differentiated by pulsed- and continuous-wave Doppler alone in 10 of 14 patients (71%). In 4 patients, differentiation was possible only with color-flow Doppler and electrocardiographic gating. Associated aortic regurgitation, or a fistula, or both, were detected in 6 patients. Pulsed- and continuous-wave Doppler with or without color-flow Doppler correctly identified the level of obstruction in all patients with RV outflow tract lesions (14 of 14).
The supracristal VSD jet occurs in early systole and is directed toward the pulmonary valve. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:22204223 Supracristal ventricular septal defect with severe right coronary cusp prolapse. The case of a 20-year-old male, diagnosed with a supracristal ventricular septal defect (VSD) for the last 6 years, is presented. He presented to the emergency department with decompensated congestive cardiac failure. After initial stabilization, he underwent transthoracic echocardiography, which showed a large supracristal VSD, a severely prolapsing right coronary cusp, severe aortic regurgitation and severe pulmonary hypertension. Right heart catheterization was performed and documented reversible pulmonary vascular resistance after high-flow oxygen inhalation. He underwent VSD repair; the right coronary cusp was excised and the aortic valve was replaced with a mechanical prosthesis. Postoperative recovery was uneventful. He was discharged home in one week. abstract_id: PUBMED:19359138 Double chamber right ventricle in a patient with supracristal ventricular septal defect and prolapsing right coronary cusp: role of intraoperative transesophageal echocardiography. A supracristal ventricular septal defect (VSD), an outlet VSD situated in the subaortic and subpulmonic regions, has a strong association with prolapse of the aortic valve cusp. The authors report the case of a patient operated on for a supracristal VSD with prolapse of the right coronary cusp. The VSD was found to be in the subaortic position on preoperative transthoracic echocardiography, which failed to detect the presence of a double-chamber right ventricle. Intraoperative transesophageal echocardiography correctly recognized the supracristal nature of the VSD and identified the double-chamber right ventricle, subsequently altering the course of surgery.
Answer: Supracristal ventricular septal defects (VSDs) are a rare form of VSD, comprising only 2%-3% of all VSDs, and are located above the crista supraventricularis (PUBMED:33150104). In adults, the management and outcomes of supracristal VSDs have been a subject of discussion, particularly regarding the progression of aortic regurgitation (AR) and the formation of aneurysms of the sinus of Valsalva (aSOV). A retrospective observational study at the Mayo Clinic followed 62 adults with supracristal VSD and found that younger age (≤40 years) was an independent predictor of AR progression and that surgical intervention had no protective effect on the complication rate (PUBMED:26149330). This suggests that the traditional approach of early surgical intervention may not necessarily prevent complications in adults with supracristal VSD. Furthermore, supracristal VSDs can be associated with complications such as right sinus of Valsalva aneurysm (SoVA), which may rupture, particularly in Eastern Asian populations where SCVSD occurs more frequently (PUBMED:25109598). The presence of a supracristal VSD compromises the normal conal muscular support of the aortic valve, leading to a range of complications involving the aortic valve and the right sinus of Valsalva (PUBMED:403794). In light of these findings, it may be time for a paradigm shift in the management of supracristal VSD in adults. The traditional approach of early surgical intervention, regardless of shunt volume, may need to be reconsidered, especially given that surgery has not shown a protective effect on complication rates in adults (PUBMED:26149330). Instead, a more individualized approach that takes into account factors such as age, the presence of AR, and the risk of aSOV formation may be warranted. This approach would likely involve careful monitoring and timely intervention based on the progression of complications rather than preemptive surgery for all cases.
Instruction: Medical forensic examination of detained immigrants: is the Istanbul Protocol followed? Abstracts: abstract_id: PUBMED:32094109 Evaluation of medical examination of forensic medicine specialists during / after detention within the scope of Istanbul Protocol. Introduction: The Istanbul Protocol is a document prepared by many scientists from different countries and accepted by the United Nations, addressing the medical documentation of torture. Normally, forensic specialists are responsible for the medical examination of torture victims, but this study looks at forensic specialists who were themselves subjected to torture and mistreatment. It aims to analyze and evaluate, especially with regard to the Istanbul Protocol, the medical examinations performed on them while they were in detention. Material/method: This study includes experts in forensic science who were detained during and after the state of emergency that was declared in 2016 in Turkey. Participants were asked questions regarding the examination steps specified in the Istanbul Protocol. The Google Forms system (Google Inc., CA/USA) was used for the surveys, which were sent to the participants via text message. Results: Twenty-two forensic scientists who were detained during and after the state of emergency in Turkey participated in this study. 45.5% of the participants were not asked for identification before their medical examination in detention. 36.4% of the participants stated that police were present in the examination room. Only 13.6% of participants found the privacy conditions during the examination appropriate. 90.9% stated that the time allocated for the examination was insufficient. The medical examinations lasted from 10 s to 10 min. Most participants were examined in less than 5 min.
When asked if they were abused, 22.7% stated that they were not tortured, and 77.3% stated that they were subjected to torture or mistreatment practices such as beatings, inappropriate use of handcuffs, being forced to wait or being rushed unnecessarily, and being abandoned in isolating or hostile environments. In addition, the questions required to assess the psychological indications of torture and maltreatment were never asked in 40.9% of the cases. Discussion And Conclusion: Although torture is prohibited by international and humanitarian law, it has not completely disappeared. Medical personnel play an important role in torture assessment and prevention. The testimony of the forensic experts who were fired and detained revealed that the medical examinations during and after the state of emergency in Turkey did not comply with the Istanbul Protocol and ethical rules. Medical examinations not carried out in accordance with the Istanbul Protocol lead to the denial of many rights and to health problems. Worldwide awareness and sensitivity are needed to solve this problem. abstract_id: PUBMED:22875793 Medical forensic examination of detained immigrants: is the Istanbul Protocol followed? Introduction: Maltreatment and torture during custody are still an important problem in many countries. Both national and international regulations and the Istanbul Protocol are of great importance in terms of the elimination of maltreatment and torture. In this study, we evaluated whether examinations for custody of immigrants were performed in accordance with the Istanbul Protocol. Method: We retrospectively evaluated reports of forensic examinations for custody of 100 immigrants in Van-Çaldıran Government Hospital in 2009. Data about nationality, age, gender, examination date, referring department, examination findings and the departments to which the reports were submitted were collected. Results: Fifty percent of the immigrants were Iranian, and they were aged between 11 and 62 years.
Ninety-nine immigrants were men and one was a woman. Data about the history of arrest were missing from the forensic reports of all immigrants. Data about signs of trauma were present for only seven immigrants. Ninety-three immigrants were reported to have no signs of trauma. None of the immigrants underwent psychiatric examination. All forensic reports were found to have been submitted to the police in person. Whether the immigrants were examined before or after custody was not clear in the reports. Conclusion: Evaluation of the data showed that none of the forensic examinations were performed in accordance with the Istanbul Protocol. It can be recommended that the protocol be incorporated into the curricula of undergraduate, graduate and in-service training programmes in order to improve relevant forensic practices. abstract_id: PUBMED:38093432 History of forensic medical examination service in Belarus in the Soviet period (1918-1991) The article presents a part of the historical path (1918-1991) of forensic medical examination in the Republic of Belarus, and the history of the origin, formation and development of the forensic medical service and the academic departments of forensic medicine in the BSSR. The creation date of the state forensic medical service on the territory of Belarus can be considered to be 3 July 1918. Since then, the service has passed from the judicial and administrative subdivision of the department of public healthcare of the Executive Committee of the Vitebsk Council of Workers', Peasants' and Red Army Deputies (1918), the department of forensic medicine of the PCH of the BSSR (1919), and the forensic medical department at the Central Chemical and Biological Station of the PCH of the BSSR (1921) to the Institute of Scientific and Forensic Examination of the BSSR (1929), the Central Forensic Medical Laboratory of the PCH of the BSSR (1931) and the Republican Bureau of Forensic Medical Examination under the Ministry of Healthcare of the BSSR (1952).
The historical stages of the existence of the forensic medical service and its state organizations in the BSSR were as follows: 1) the stage of origin (1918-1922); 2) the stage of formation (1921-1941, 1946-1952); 3) the stage of development (from 1952). The main forensic medical experts who made a significant contribution to the origin, formation and development of forensic medical examination in the territory of Belarus were V.F. Chervakov (1930-1939), S.A. Prilutskiy (1939-1941, 1946-1960), V.K. Steshits (1961-1977) and S.S. Maksimov (from 1977). abstract_id: PUBMED:36881047 Forensic Medical Examination After Sexual Violence: Implications Based on Victims' Perceptions. Background/aim: The objective of this study was to assess the perception of the forensic medical examination (FME) by victims of sexual violence. Based on patient-related outcomes gained in terms of personnel, chronological and spatial parameters, an additional aim was to derive improved examination procedures. Patients And Methods: A total of 49 sexually assaulted women were enrolled in this study. After a standardized FME by a forensic doctor followed by a gynecologist, the women were asked to complete a questionnaire addressing their general perception, preferences regarding the attending staff's sex, and the sequence and time frame of the examinations performed. The attending gynecologist also completed a questionnaire addressing demographic and medical parameters of the patient as well as assault-related information. Results: The examination setting in general was evaluated positively. Nevertheless, 52% of examined victims perceived the FME as an additional psychological burden. Overall, 85% of the affected women preferred a female forensic physician and 76% a female gynecologist to perform the examination. When women said they experienced a violation of their privacy during the gynecological examination, a male was more often present (60% vs. 35%, p=0.0866).
Regarding the sequence of the examination components, 65% of the victims preferred to start with their medical history, followed by the forensic and then the gynecological examination. Conclusion: Forensic medical and gynecological examination after sexual assault is an essential procedure, yet it is a potentially further traumatizing experience for the victim. The identified patient preferences should be taken into account in order to diminish further trauma. abstract_id: PUBMED:36719317 75 years in the service of Karelia: to the anniversary of the State Medical Institution of the Republic of Karelia Bureau of Forensic Medical Examination The article is dedicated to the 75th anniversary of the State Budgetary Healthcare Institution of the Republic of Karelia «Bureau of Forensic Medical Examination». The history of the formation of the forensic medicine service in the Republic of Karelia is briefly described. Data are presented on the heads of the service and well-known forensic experts, the organizers of forensic medical examination in Karelia, the course of forensic medicine at Petrozavodsk State University, and the role of its employees in the work of the State Medical Institution of the Republic of Karelia «Bureau of Forensic Medical Examination». In addition to available literature sources, archival materials of employees and veterans of the forensic medical service of Karelia were used. abstract_id: PUBMED:36719305 Mathematical modeling using finite element analysis in forensic medical examination The purpose of the work is to develop methods of mathematical modeling using finite element analysis in forensic medical examination.
The stages of the methodology for solving problems of deformable-body mechanics in forensic medicine are considered. This methodology makes it possible to reliably establish the possibility of formation and the morphology of damage under specific conditions and circumstances, and it focuses the researcher's attention on problem points when creating and evaluating the model. The use of simplified models of the human body makes the expert's conclusion better substantiated, which increases the confidence of law enforcement agencies in the activities of the forensic medical expert service and allows a fresh look at solving the problems of forensic medicine and forensic medical examination. abstract_id: PUBMED:32124792 Independent forensic medical examination as a mean of proving the facts of a torture usage. Introduction: In most countries, detainees are often subjected to physical and mental abuse by law enforcement officials, but victims of torture can only rarely prove the guilt of these offenders, owing to the poor quality of the findings of state forensic medical and forensic psychiatric experts and the inconsistency of their findings with international guidelines (the Istanbul Protocol). The aim: To determine the role of forensic medical examination in the investigation of torture crimes, and to provide arguments for the necessity of securing the victim's right to collect evidence independently, including through the use of the special knowledge of independent forensic medical and forensic psychiatric experts in criminal proceedings. Materials and methods: The authors used European Court of Human Rights (ECHR) decisions on complaints by victims of torture, international and Ukrainian human rights legal acts, and the results of numerous torture investigations conducted by medical and criminalistics scientists. The research was carried out on the basis of a harmonious combination of philosophical approaches and general and special scientific methods.
Conclusions: In order to exercise the rights of victims of torture in accordance with the Convention on Human Rights and Fundamental Freedoms, the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment, and other international and Ukrainian legal acts on human rights, arguments are given for the need to enable the victim to engage independent forensic medical and forensic psychiatric experts to provide their conclusions in torture-related criminal proceedings. abstract_id: PUBMED:31175760 Current legal issues of conducting a forensic medical examination of newborns' corpses. Introduction: Forensic medical examination is a mandatory investigative action in determining the causes of death of newborns. It is especially significant and occupies a key place in proving the corpus delicti. The aim is to study the current legal issues of a forensic medical examination of newborns' corpses. Materials and methods: A study of legal and literary sources was carried out. The library-semantic and content analysis methods were applied. Conclusions: Forensic medical examination should also include questions regarding the mother of the newborn: whether she was healthy at the time of delivery; whether the medical staff caused any harm to her or her child; whether she had the necessary help before and during the childbirth, etc. An examination should be appointed if one of the parents so desires, which must be expressed in writing in the form of a motion. abstract_id: PUBMED:37796453 Role and importance of forensic medical re-examination of defects in health care delivery The article is devoted to the problem of conducting and legally evaluating repeated commission (complex) forensic medical examinations based on the materials of «medical» cases related to adverse outcomes of health care delivery, in criminal and civil proceedings.
The reasons for assigning re-examinations, as well as the theoretical and applied aspects of this procedural institution in evidence in cases of iatrogenic crimes and unqualified health care delivery, are considered. The authors pay particular attention to the deficiencies of the legal framework and of practical approaches to assigning re-examinations and evaluating their results, and suggest ways to improve forensic medical and law enforcement activity in this category of cases. Examples from practice illustrating the content and nature of the re-examination problem in «medical» cases are given, ways to improve the quality of expert work and judicial practice are emphasized, and measures that will improve the situation in the short term are identified. abstract_id: PUBMED:33409560 Forensic evaluation of alleged wrist restraint/handcuff injuries in survivors of torture utilizing the Istanbul Protocol. Although wrist restraint is a common form of abuse, there is a paucity of literature describing shackling and wrist restraint injuries among survivors of torture. Forensic evaluation of alleged wrist restraint/handcuff injuries in survivors of torture presents challenges to the evaluator, especially if the injuries are remote and leave neither lasting marks nor neurologic deficits. Thorough history-taking and physical examination are critical to effective forensic documentation. Guidance is provided in The Manual on Effective Investigation and Documentation of Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (Istanbul Protocol), the gold standard for the medicolegal documentation of torture. This guidance relies primarily on physical findings, with less direction provided on how to interpret historical evidence or on when historical evidence provided by the patient can be interpreted as highly consistent with the alleged injury in the absence of current physical findings.
Through a case-based review, we present diagnostic strategies for the evaluation of alleged abuse involving wrist restraints/handcuffs, focusing on skin, neurologic, and osseous injuries. We highlight key findings from both the history and physical examination that will allow the evaluator to improve the accuracy of their expert medical opinion on the degree to which medical findings correlate with the patient's allegations of wrist restraint injuries. Answer: Based on the abstract provided (PUBMED:22875793), the evaluation of forensic examinations for the custody of immigrants in Van-Çaldıran Government Hospital in 2009 indicated that none of the forensic examinations were performed in accordance with the Istanbul Protocol. The study found that data about the history of arrest were missing in all forensic reports about the immigrants, and signs of trauma were only reported for seven immigrants. Additionally, none of the immigrants underwent psychiatric examination, and all forensic reports were submitted to the police personally. The timing of the examinations, whether before or after custody, was not clear in the reports. The conclusion drawn from this data was that the examinations did not follow the guidelines set by the Istanbul Protocol, and the study recommended that the protocol should be incorporated into curricula for training programs to improve forensic practices.
Instruction: Containing costs and containing bugs: are they mutually exclusive? Abstracts: abstract_id: PUBMED:19236136 Containing costs and containing bugs: are they mutually exclusive? Background: The overall health care costs for managing patients with community-acquired pneumonia (CAP) in U.S. hospitals are burdensome. While pharmacy costs comprise only a minor proportion of these costs, hospital length of stay (LOS) is the greatest contributor. Infections due to antimicrobial-resistant pathogens are also associated with increased overall health care costs. Therefore, strategies that aim to minimize antimicrobial resistance and reduce hospital LOS may have the greatest impact in reducing the overall health care costs of managing patients with CAP. Objective: To evaluate how antimicrobial resistance can impact the health care costs associated with CAP and to review strategies that minimize the risk of resistance development while promoting appropriate antimicrobial therapy (including optimized dosing) and decreasing hospital LOS. Summary: Antimicrobial resistance can increase the risk of clinical failure and result in higher overall health care costs. Further development of antimicrobial resistance during therapy should, therefore, be minimized. This can be achieved through optimized antimicrobial dosing strategies - using a higher dose of concentration-dependent agents or prolonged infusion of time-dependent agents - that increase the probability of attaining pharmacokinetic-pharmacodynamic targets for eradication of the pathogen and hence successful clinical outcomes. Decreasing LOS must be a priority when attempting to reduce hospital costs. Active intravenous-to-oral switch therapy has been shown to effectively reduce LOS.
Appropriate short-course regimens may also offer the opportunity for effective treatment while reducing or eliminating unnecessary antimicrobial exposure, which not only reduces the potential for drug-related adverse events but may also minimize the selection of resistant organisms. Conclusion: Clinical failure and antimicrobial resistance can significantly increase the cost of managing patients with CAP, primarily by increasing LOS. Therefore, strategies should be employed to minimize the risk of resistance development and reduce LOS. These include early appropriate therapy, optimized dosing based on pharmacodynamic principles, and efficient IV-to-PO switch therapy when appropriate. abstract_id: PUBMED:38005690 Foliar Application of Sulfur-Containing Compounds-Pros and Cons. Sulfate is taken up from the soil solution by the root system; inside the plant, it is assimilated to hydrogen sulfide, which in turn is converted to cysteine. Sulfate is also taken up by the leaves when foliage is sprayed with solutions containing sulfate fertilizers. Moreover, several other sulfur (S)-containing compounds are provided through foliar application, including the S metabolites hydrogen sulfide, glutathione, cysteine, methionine, S-methylmethionine, and lipoic acid. However, S compounds that are not metabolites, such as thiourea and lignosulfonates, along with dimethyl sulfoxide and S-containing adjuvants, are also provided by foliar application; these are the S-containing agrochemicals. In this review, we elaborate on the fate of these compounds after spraying foliage and on the rationale and the efficiency of such foliar applications. The foliar application of S-compounds in various combinations is an emerging area of agricultural usefulness. In agricultural practice, the S-containing compounds are not applied alone in spray solutions, and the need for proper combinations is of prime importance.
abstract_id: PUBMED:29661631 Chlorhexidine-containing dressings in the prevention of central venous catheter-related bloodstream infections: A cost and resource utilization analysis. Background: A recent study reported a reduction in probable/definite central venous catheter (CVC)-related bloodstream infections (CRBSIs) in neutropenic high-risk patients using CVC dressings with a chlorhexidine-containing gel pad. Methods: Based on published data, a health-economic analysis was performed to analyze the economic effect of using CVC dressings with a chlorhexidine-containing gel pad compared to non-chlorhexidine control dressings. A micro-costing approach was used to determine CRBSI-related direct treatment cost factors. Results: Between February 2012 and September 2014, 356 patients (178 patients in each group) were analyzed. Distributions of probable and definite CRBSI in the chlorhexidine group and control group were 12 (7%) vs 18 (10%) and 9 (5%) vs 21 (12%), respectively (P = .011). Median overall length of stay (25 vs 27.5 days; P = .630) and days on treatment with antibacterials (10 vs 12 days; P = .140) were similar between the chlorhexidine and control groups. The most important cost driver in both groups was treatment on the general ward (€4275 [US$ 5173], interquartile range [IQR]: €592 - €6504 [US$ 716 - US$ 7871] vs €4560 [US$ 5518], IQR: €1227 - €8567 [US$ 1485 - US$ 10,367]; P = .120), resulting in median overall direct treatment costs of €13,881 (US$ 16,798) [IQR: €10,922 - €25,457 (US$ 13,217 - US$ 30,807)] vs €13,929 (US$ 16,856) [IQR: €11,295 - €23,561 (US$ 13,669 - US$ 28,512); P = .640]. Conclusion: Our study shows similar overall direct treatment costs in both groups, meaning that the higher acquisition costs of chlorhexidine-containing dressings did not translate into higher costs: they were primarily offset by a lower rate of probable/definite CRBSI and the associated cost reduction.
abstract_id: PUBMED:32575665 Biosensors Based on Advanced Sulfur-Containing Nanomaterials. In recent years, sulfur-containing nanomaterials and their derivatives/composites have attracted much attention because of their important role in the fields of biosensing, biolabeling, drug delivery and diagnostic imaging technology, which inspired us to compile this review. To focus on the relationships between advanced biomaterials and biosensors, this review describes the applications of various types of sulfur-containing nanomaterials in biosensors. We discuss two types of sulfur-containing nanomaterials, metallic sulfide nanomaterials and sulfur-containing quantum dots, and summarize their possibilities and applications as biosensors. Finally, future perspectives and challenges of biosensors based on sulfur-containing nanomaterials are briefly outlined. abstract_id: PUBMED:30235584 Process Investigation of a Novel Compaction Technique With Pellet-Containing Granules. Objective: The purpose of this study was to investigate the influence of the preparation process on the properties of pellet-containing granules and tablets. Methods: Coated pellets were granulated by centrifugal granulation, and the obtained pellet-containing granules were mixed with cushioning granules and compressed into tablets. Tablets were characterized for drug release rate and compared with the original coated pellets. Results: The surface roughness and the angle of repose of pellet-containing granules increased with the granulating ratio. Weight and drug content variations in tablets were diminished by granulation, and great improvement in tablet uniformity was achieved even for large pellets. Granulation protected the coated films under different compression forces and even at a low content of cushioning granules.
Conclusions: The uniformity of tablets prepared from pellet-containing granules could be significantly improved by the granulation process at a proper granulating ratio. The granulation process could protect the coated pellets during compaction even under high compression forces and with a low content of cushioning granules. abstract_id: PUBMED:32186925 The cost-effectiveness of a bedaquiline-containing short-course regimen for the treatment of multidrug-resistant tuberculosis in South Africa. Background: Bedaquiline-containing regimens have demonstrated improved outcomes over injectable-containing regimens in the long-term treatment of multidrug-resistant tuberculosis (MDR-TB). Recently, the World Health Organization (WHO) recommended replacing injectables in the standard short-course regimen (SCR) with a bedaquiline-containing regimen. The South African national TB program similarly recommends a bedaquiline-containing regimen. Here, we investigated the cost-effectiveness of a bedaquiline-containing SCR versus an injectable-containing SCR for the treatment of MDR-TB in South Africa.Methods: A Markov model was adapted to simulate the incidence of active patients with MDR-TB. Patients could transition through eight health states: active MDR-TB, culture conversion, cure, follow-up loss, secondary MDR-TB, extensively DR-TB, end-of-life care, and death. A 5% discount was assumed on costs and outcomes. Health outcomes were expressed as disability-adjusted life years (DALYs).Results: Over a 10-year time horizon, a bedaquiline-containing SCR dominated an injectable-containing SCR, with an incremental saving of US $982 per DALY averted. 
A bedaquiline-containing SCR was associated with lower total costs versus an injectable-containing SCR (US $597 versus $657 million), of which US $3.2 versus $21.9 million was attributed to adverse event management. Conclusions: Replacing an injectable-containing SCR with a bedaquiline-containing SCR is cost-effective, offering a cost-saving alternative with improved patient outcomes for MDR-TB. abstract_id: PUBMED:1778427 B-lymphocyte mitogenicity and adjuvanticity of an ornithine-containing lipid or a serine-containing lipid. An ornithine-containing lipid (Orn-L) or a serine-containing lipid (Ser-L) from Flavobacterium meningosepticum exhibited strong mitogenicity for the splenocytes from both LPS-responder C3H/HeSlc and LPS-low-responder C3H/HeJ mice. The potency of the lipoamino acids was the same as that of LPS for responder mice. The lipoamino acids were B-lymphocyte mitogens. Furthermore, Orn-L or Ser-L exhibited strong adjuvanticity. Compared with the adjuvanticity of LPS, the activity of Orn-L was rather high. Based on these data, together with the previously reported data on macrophage activation, we propose that the lipoamino acids are non-toxic, potent immunoactivators. abstract_id: PUBMED:31061812 Gas containing renal stone - case report. Gas-containing stones are extremely rare. The formation of renal stones, including the presence of free gas within the stone, is overall poorly understood. The presence of free gas in the collecting system of the urinary tract indicates emphysematous pyelonephritis, in the absence of prior surgical intervention, gastrointestinal fistula, or urinary bladder catheterization. Here, we present the tenth case of gas-containing renal stones, as detected by computed tomography, and discuss the main diagnostic features to differentiate gas-containing stones from emphysematous pyelonephritis. abstract_id: PUBMED:32864606 Development and Clinical Application of Phosphorus-Containing Drugs.
Phosphorus-containing drugs belong to an important class of therapeutic agents and are widely applied in daily clinical practice. Structurally, the phosphorus-containing drugs can be classified into phosphotriesters, phosphonates, phosphinates, phosphine oxides, phosphoric amides, bisphosphonates, phosphoric anhydrides, and others; functionally, they are often designed as prodrugs with improved selectivity and bioavailability, reduced side effects and toxicity, or as biomolecule analogues of endogenous materials and antagonistic endoenzyme supplements. This review summarizes the phosphorus-containing drugs currently on the market as well as a few promising molecules in clinical studies, with particular emphasis on their structural features, biological mechanisms, and indications. abstract_id: PUBMED:36429186 Dietary Heme-Containing Proteins: Structures, Applications, and Challenges. Heme-containing proteins, commonly abundant in red meat and blood, are considered promising dietary sources for iron supplementation and fortification, with higher bioavailability and fewer side effects. As the precise structures and bioactivity mechanisms of various heme-containing proteins (hemoglobin, myoglobin, cytochrome, etc.) are determined, many methods have been explored for iron fortification. Based on their physicochemical and biological functions, heme-containing proteins and their hydrolyzed peptides have also been widely utilized as food ingredients and antibacterial agents in recent years. In this review, we summarized the structural characterization of hemoglobin, myoglobin, and other heme proteins in detail, and highlighted recent advances in applications of naturally occurring heme-containing proteins as dietary iron sources in the field of food science and nutrition. The regulation of absorption rate, the auto-oxidation process, and dietary consumption of heme-containing proteins are then discussed.
Future outlooks are also highlighted, with the aim of suggesting a research line for further studies. Answer: Containing costs and containing bugs, such as antimicrobial-resistant pathogens, are not necessarily mutually exclusive. Strategies can be employed to minimize the risk of resistance development while also reducing healthcare costs, particularly in the management of community-acquired pneumonia (CAP). Antimicrobial resistance can lead to clinical failure and higher overall healthcare costs, primarily by increasing the length of hospital stay (LOS), which is a significant contributor to hospital costs (PUBMED:19236136). To address this, optimized antimicrobial dosing strategies can be used, such as higher doses of concentration-dependent agents or prolonged infusion of time-dependent agents, to increase the likelihood of eradicating the pathogen and achieving successful clinical outcomes. Additionally, active intravenous-to-oral switch therapy and appropriate short-course regimens can effectively reduce LOS and minimize unnecessary antimicrobial exposure, which in turn reduces the potential for drug-related adverse events and the selection of resistant organisms (PUBMED:19236136). In the context of central venous catheter-related bloodstream infections (CRBSIs), the use of chlorhexidine-containing dressings has been shown to reduce the rate of probable/definite CRBSIs without translating into higher costs: the higher acquisition costs of these dressings were primarily offset by the lower rate of infections and the associated cost reduction (PUBMED:29661631). Similarly, in the treatment of multidrug-resistant tuberculosis (MDR-TB), a bedaquiline-containing short-course regimen has been found to be cost-effective compared to an injectable-containing regimen, offering cost savings with improved patient outcomes (PUBMED:32186925).
Therefore, while there are costs associated with implementing measures to contain bugs, such as antimicrobial resistance or infections, these measures can be designed in a way that they do not necessarily lead to increased healthcare costs and may even result in cost savings while improving patient outcomes.
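The cost-effectiveness reasoning cited above (PUBMED:32186925) reduces to a simple incremental comparison: a strategy "dominates" its comparator when it both costs less and averts more DALYs. A minimal Python sketch follows; the total costs are the figures reported in the abstract, but the DALY totals are hypothetical placeholders chosen only so that the illustration lands on the reported saving of about US $982 per DALY averted:

```python
def incremental_result(cost_a, daly_a, cost_b, daly_b):
    """Compare strategy A against comparator B.

    Returns (dominates, saving_per_daly_averted): A dominates B when it is
    both cheaper and accrues fewer DALYs; the second value is the money
    saved per DALY averted by choosing A.
    """
    delta_cost = cost_b - cost_a   # money saved by choosing A
    delta_daly = daly_b - daly_a   # DALYs averted by choosing A
    dominates = delta_cost > 0 and delta_daly > 0
    saving_per_daly = delta_cost / delta_daly if delta_daly else float("nan")
    return dominates, saving_per_daly

# Total costs from the abstract: US $597M (bedaquiline SCR) vs US $657M
# (injectable SCR). The DALY totals below are HYPOTHETICAL, chosen only
# to make the arithmetic concrete.
dominates, saving = incremental_result(
    cost_a=597_000_000, daly_a=1_000_000,    # hypothetical DALY burden, arm A
    cost_b=657_000_000, daly_b=1_061_100,    # hypothetical DALY burden, arm B
)
print(dominates, round(saving))
```

With real model outputs, the same ratio is the conventional incremental cost-effectiveness ratio when neither strategy dominates.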
Instruction: Does mechanical bowel preparation have a role in preventing postoperative complications in elective colorectal surgery? Abstracts: abstract_id: PUBMED:17853636 The use of mechanical bowel preparation in elective colorectal surgery. Background: Mechanical bowel preparation (MBP) prior to elective colorectal surgery has been in use for many years. It is considered important in preventing post-operative infectious complications after colorectal surgery. The evidence to support these claims is lacking within the medical literature and yet this still remains standard practice in many hospitals. A literature search was undertaken to ascertain the evidence available regarding the use of MBP in elective colorectal surgery. Methods: The search included the databases PubMed, Medline and Embase using the keywords "mechanical bowel preparation", "bowel cleansing" and "elective colorectal surgery", a search of recent issues of relevant journals including Diseases of the Colon and Rectum and British Journal of Surgery, and backward chaining from articles obtained. Results And Conclusion: Most authors recommend that colorectal surgery is safe without pre-operative MBP but that there may be some situations in which it may be beneficial (e.g. if there is a small tumour or the possible need for intra-operative colonoscopy). The implication for clinical practice in this situation is that there is not enough strength of evidence at present to recommend a change in practice. There is a need for further higher powered trials to try to answer this question definitively. abstract_id: PUBMED:32098461 Role of Mechanical Bowel Preparation for Elective Colorectal Surgery The presence of bowel contents during colorectal surgery has theoretically been related to surgical site infections (SSI), anastomotic leakage (AL) and postoperative complications.
Mechanical bowel preparation (MBP) for elective colorectal surgery aims to reduce fecal material and bacterial counts with the objective of decreasing the SSI rate, including AL. Based on extensive observational data, meta-analyses, and multicenter randomized controlled trials (RCTs), non-MBP did not increase AL rates, SSI, or other complications in colon and even rectal surgery. The 2011 Cochrane review found no significant benefit of MBP compared with non-MBP in colon surgery, and no benefit of MBP compared with rectal enemas in rectal surgery. However, from surgeons' perspectives, MBP remains in widespread surgical practice despite the discomfort it causes patients, and general targeting of the colon microflora with antibiotics continues to gain popularity despite the limited understanding of the role of the microbiome in anastomotic healing. Recently, much evidence has suggested that MBP plus oral antibiotics (OA) may become the gold standard for colorectal surgery. However, RCTs are rare and there is still no solid evidence on OA preparation, so further studies are needed on both MBP plus OA and OA alone for colorectal surgery. MBP studies in patients undergoing minimally invasive (MIS; laparoscopic or robotic) colorectal surgery are also still warranted, as are further RCTs on patients undergoing elective left-sided colon and rectal surgery with primary anastomosis, including sphincter-saving surgery without MBP, in this MIS and microbiome era. abstract_id: PUBMED:27324152 Benefit of mechanical bowel preparation prior to elective colorectal surgery: current insights. Background: Back in the 1970s, the administration of oral antibiotics combined with mechanical bowel preparation prior to colorectal surgery was considered standard procedure and adopted widely. Subsequent evidence suggested that bowel cleansing was unnecessary, even harmful, and hence was abandoned.
Most recent evidence, however, suggests that full preparation significantly improves a spectrum of colectomy-specific postoperative outcomes. Purpose: The purpose of this review was to describe existing literature regarding the optimal bowel preparation regimen prior to elective colorectal resection. In addition, we evaluate the available evidence on each component-oral antibiotics, mechanical bowel preparation and intravenous antibiotics, with regard to short-term postoperative outcomes. Conclusions: Current best evidence suggests that colorectal resection should be preceded by a combination of oral antibiotics, mechanical bowel preparation and intravenous antibiotics at induction. Further randomized controlled trials are required due to a paucity of level 1 evidence. abstract_id: PUBMED:27011543 An Open-Label Prospective Randomized Controlled Trial of Mechanical Bowel Preparation vs Nonmechanical Bowel Preparation in Elective Colorectal Surgery: Personal Experience. Over the last two decades, preoperative mechanical bowel preparation for elective colorectal surgery has been criticized. Yet, many surgeons are still in favor of its use simply because of the belief that it achieves better clearance of the colonic fecal load. The objective of this study is to compare the outcome with regard to patient compliance and postoperative complications following elective colorectal surgery between two groups of patients, one with bowel prepared mechanically and the other by nonmechanical means. This open-label prospective randomized controlled trial was conducted in a high-volume tertiary government referral hospital of Kolkata over a period of 3 years. It included 71 patients, divided into two groups, admitted for elective colorectal resection procedures in one surgical unit. Both methods of bowel preparation were equally well tolerated, and there was no statistically significant difference in the incidence of postoperative complications or mortality between the two groups. 
abstract_id: PUBMED:29398873 Impact of mechanical bowel preparation in elective colorectal surgery: A meta-analysis. Aim: To analyse the effect of mechanical bowel preparation vs no mechanical bowel preparation on outcome in patients undergoing elective colorectal surgery. Methods: Meta-analysis of randomised controlled trials and observational studies comparing adult patients receiving mechanical bowel preparation with those receiving no mechanical bowel preparation, subdivided into those receiving a single rectal enema and those who received no preparation at all prior to elective colorectal surgery. Results: A total of 36 studies (23 randomised controlled trials and 13 observational studies) including 21568 patients undergoing elective colorectal surgery were included. When all studies were considered, mechanical bowel preparation was not associated with any significant difference in anastomotic leak rates (OR = 0.90, 95%CI: 0.74 to 1.10, P = 0.32), surgical site infection (OR = 0.99, 95%CI: 0.80 to 1.24, P = 0.96), intra-abdominal collection (OR = 0.86, 95%CI: 0.63 to 1.17, P = 0.34), mortality (OR = 0.85, 95%CI: 0.57 to 1.27, P = 0.43), reoperation (OR = 0.91, 95%CI: 0.75 to 1.12, P = 0.38) or hospital length of stay (overall mean difference 0.11 d, 95%CI: -0.51 to 0.73, P = 0.72), when compared with no mechanical bowel preparation, nor when evidence from just randomized controlled trials was analysed. A sub-analysis of mechanical bowel preparation vs absolutely no preparation or a single rectal enema similarly revealed no differences in clinical outcome measures. Conclusion: In the most comprehensive meta-analysis of mechanical bowel preparation in elective colorectal surgery to date, this study has suggested that the use of mechanical bowel preparation does not affect the incidence of postoperative complications when compared with no preparation. Hence, mechanical bowel preparation should not be administered routinely prior to elective colorectal surgery. 
abstract_id: PUBMED:31123559 Preoperative bowel preparation does not favor the management of colorectal anastomotic leak. Background: Controversy exists regarding the impact of preoperative bowel preparation on patients undergoing colorectal surgery. This is due to previous research studies, which fail to demonstrate protective effects of mechanical bowel preparation against postoperative complications. However, in recent studies, combination therapy with oral antibiotics (OAB) and mechanical bowel preparation seems to be beneficial for patients undergoing an elective colorectal operation. Aim: To determine the association between preoperative bowel preparation and postoperative anastomotic leak management (surgical vs non-surgical). Methods: Patients with anastomotic leak after colorectal surgery were identified from the 2013 and 2014 Colectomy Targeted American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database and were employed for analysis. Every patient was assigned to one of the following three groups based on the type of preoperative bowel preparation: first group - mechanical bowel preparation in combination with OAB, second group - mechanical bowel preparation alone, and third group - no preparation. Results: A total of 652 patients had anastomotic leak after a colectomy from January 1, 2013 through December 31, 2014. Baseline characteristics were assessed, and there were no statistically significant differences between the three groups in terms of age, gender, American Society of Anesthesiologists score, and other preoperative characteristics. A χ2 test of homogeneity was conducted and there was no statistically/clinically significant difference between the three categories of bowel preparation in terms of reoperation.
Conclusion: The implementation of mechanical bowel preparation and antibiotic use in patients who are going to undergo a colon resection does not influence the treatment of any possible anastomotic leakage. abstract_id: PUBMED:18156032 Mechanical bowel preparation for elective colorectal surgery: a multicentre randomised trial. Background: Mechanical bowel preparation is a common practice before elective colorectal surgery. We aimed to compare the rate of anastomotic leakage after elective colorectal resections and primary anastomoses between patients who did or did not have mechanical bowel preparation. Methods: We did a multicentre randomised non-inferiority study at 13 hospitals. We randomly assigned 1431 patients who were going to have elective colorectal surgery to either receive mechanical bowel preparation or not. Patients who did not have mechanical bowel preparation had a normal meal on the day before the operation. Those who did were given a fluid diet, and mechanical bowel preparation with either polyethylene glycol or sodium phosphate. The primary endpoint was anastomotic leakage, and the study was designed to test the hypothesis that patients who are given mechanical bowel preparation before colorectal surgery do not have a lower risk of anastomotic leakage than those who are not. The median follow-up was 24 days (IQR 17-34). We analysed patients who were treated as per protocol. This study is registered with ClinicalTrials.gov, number NCT00288496. Findings: 77 patients were excluded: 46 who did not have a bowel resection; 21 because of missing outcome data; and 10 who withdrew, cancelled, or were excluded for other reasons. The rate of anastomotic leakage did not differ between both groups: 32/670 (4.8%) patients who had mechanical bowel preparation and 37/684 (5.4%) in those who did not (difference 0.6%, 95% CI -1.7% to 2.9%, p=0.69). 
Patients who had mechanical bowel preparation had fewer abscesses after anastomotic leakage than those who did not (2/670 [0.3%] vs 17/684 [2.5%], p=0.001). Other septic complications, fascia dehiscence, and mortality did not differ between groups. Interpretation: We advise that mechanical bowel preparation before elective colorectal surgery can safely be abandoned. abstract_id: PUBMED:12804412 Mechanical bowel preparation for elective colorectal surgery. Background: For more than a century the presence of bowel content during surgery has been related to anastomotic leakage. Mechanical bowel preparation has been considered an efficient agent against leakage and infectious complications. This dogma is not based on solid evidence, but more on observational data and experts' opinions. Objectives: To determine the safety and effectiveness of prophylactic mechanical bowel preparation for morbidity and mortality rates in colorectal surgery. The following hypothesis was tested: "The use of mechanical bowel preparation before elective colorectal surgery reduces the incidence of postoperative complications". Search Strategy: All publications describing mechanical bowel preparation before elective colorectal surgery were sought through computerized searches of EMBASE, LILACS, MEDLINE, and the Cochrane Library; by hand-searching in relevant medical journals and from major gastroenterological congresses, without limitation for date and language, using the search strategy described by the Colorectal Cancer Review Group. In addition, randomised clinical trials were searched through personal communication with colleagues and from conference proceedings. Selection Criteria: Studies: All randomised clinical trials that were performed in order to answer the hypothesis. Participants: Patients submitted to elective colorectal surgery. Interventions: Any strategy in mechanical bowel preparation compared with no mechanical bowel preparation. Primary Outcome Measures: 1.
Anastomotic leakage, stratified for rectum and colon; 2. Overall anastomotic leakage. Secondary Outcome Measures: 3. Mortality; 4. Peritonitis; 5. Reoperation; 6. Wound infection; 7. Infectious extra-abdominal complications; 8. Non-infectious extra-abdominal complications; 9. Overall surgical site infections. Data Collection And Analysis: Data were independently extracted by two reviewers and cross-checked. The methodological quality of each trial was assessed by the same two reviewers. Details of the randomisation (generation and concealment), blinding, whether an intention-to-treat analysis was done, and the number of patients lost to follow-up were recorded. The results of each RCT were summarised in 2 x 2 tables for each outcome. For analysis the Peto odds ratio was used as default (no statistical heterogeneity was observed). Main Results: Of the 1159 patients with anastomosis (6 RCTs), 576 were allocated to mechanical bowel preparation (group 1) and 583 to no preparation (group 2) before elective colorectal surgery. Of the 1204 patients totally enrolled, 595 were in group 1 and 609 in group 2.
Primary Outcomes: 1) Anastomotic leakage - stratified: A) Low anterior resection: 12.5% (6 of 48 patients in 1) compared with 12% (6 of 50 patients in 2); Peto OR 1.17, 95% CI: 0.35 - 3.96 (non-significant). B) Colonic surgery: 1.16% (2 of 172 patients in 1) compared with 0.6% (1 of 166 patients in 2); Peto OR 1.75, 95% CI: 0.18 - 17.02. 2) Overall anastomotic leakage: 5.5% (32 of 576 patients in 1) compared with 2.9% (17 of 583 patients in 2); Peto OR 1.94, 95% CI: 1.09 - 3.43 (P=0.02). SECONDARY OUTCOMES: 3) Mortality: 0.6% (2 of 329 patients in 1) compared with 0% (0 of 326 patients in 2); Peto OR 7.95, 95% CI: 0.49 - 128.34 (non-significant). 4) Peritonitis: 5.1% (13 of 254 patients in 1) compared with 2.8% (7 of 252 patients in 2); Peto OR 1.90, 95% CI: 0.78 - 4.64 (non-significant). 5) Reoperation: 3.3% (11 of 329 patients) compared with 2.5% (8 of 326 patients); Peto OR 1.40, 95% CI: 0.56 - 3.49 (non-significant). 6) Wound infection: 7.4% (44 of 595 patients in 1) compared with 5.7% (35 of 609 patients in 2); Peto OR 1.34, 95% CI: 0.85 - 2.13 (non-significant). 7) Infectious extra-abdominal complications: 8.3% (14 of 168 patients in 1) compared with 9.4% (15 of 159 patients in 2); Peto OR 0.87, 95% CI: 0.41 - 1.87. 8) Non-infectious extra-abdominal complications: 8.0% (20 of 250 patients in 1) compared with 7.0% (17 of 246 patients in 2); Peto OR 1.19, 95% CI: 0.61 - 2.32 (non-significant). 9) Surgical site infection: 9.8% (31 of 325 patients in 1) compared with 8.3% (27 of 322 patients in 2); Peto OR 1.20, 95% CI: 0.70 - 2.05 (non-significant). Reviewer's Conclusions: The results failed to support the hypothesis that bowel preparation reduces anastomotic leak rates and other complications. There was no a priori hypothesis that bowel preparation may increase anastomotic leak rates, so this was not stated. Thus, the routine use of mechanical bowel preparation in patients undergoing elective colorectal surgery is questioned.
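The pooled leakage result above can be checked directly: the review's default effect measure is the Peto (one-step) odds ratio, computed for a single 2 x 2 table from the observed-minus-expected events and the hypergeometric variance. A minimal Python sketch, using the overall anastomotic leakage counts reported above (32 of 576 with preparation vs 17 of 583 without); small differences from the published 1.94 (95% CI: 1.09 - 3.43) reflect rounding and stratified pooling:

```python
from math import exp, sqrt

def peto_odds_ratio(events_t, n_t, events_c, n_c):
    """Peto (one-step) odds ratio with a 95% CI for a single 2x2 table."""
    n = n_t + n_c                                    # total patients
    m = events_t + events_c                          # total events
    expected = n_t * m / n                           # expected events, treated arm
    v = n_t * n_c * m * (n - m) / (n**2 * (n - 1))   # hypergeometric variance
    log_or = (events_t - expected) / v               # one-step log odds ratio
    half_width = 1.96 / sqrt(v)
    return exp(log_or), exp(log_or - half_width), exp(log_or + half_width)

# Overall anastomotic leakage: 32/576 (preparation) vs 17/583 (no preparation).
or_, lo, hi = peto_odds_ratio(32, 576, 17, 583)
print(f"Peto OR {or_:.2f} (95% CI {lo:.2f} - {hi:.2f})")
```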
abstract_id: PUBMED:25048485 The impact of mechanical bowel preparation in elective colorectal surgery: a propensity score matching analysis. Purpose: To evaluate the influence of preoperative mechanical bowel preparation (MBP) on the occurrence of anastomotic leakage and surgical site infection (SSI), and on the severity of surgical complications, in elective colorectal surgery. Materials And Methods: MBP and non-MBP patients were matched using propensity scores. The outcomes were evaluated according to tumor location: right-sided colon (n=84), left-sided colon (n=50), and rectum (n=100). In the non-MBP group, patients with right-sided colon cancer did not receive any preparation, and patients with both left-sided colon and rectal cancers were given one rectal enema before surgery. Results: In right-sided colon surgery, there was no anastomotic leakage. SSI occurred in 2 (4.8%) and 4 patients (9.5%) in the non-MBP and MBP groups, respectively. In left-sided colon cancer surgery, there was one anastomotic leakage (4.0%) in each group. SSI occurred in none in the rectal enema group and in 2 patients (8.0%) in the MBP group. In rectal cancer surgery, there were 5 anastomotic leakages (10.0%) in the rectal enema group and 2 (4.0%) in the MBP group. SSI occurred in 3 patients (6.0%) in each group. Severe surgical complications (Grade III, IV, or V) based on the Dindo-Clavien classification occurred in 7 patients (14.0%) in the rectal enema group and 1 patient (2.0%) in the MBP group (p=0.03). Conclusion: Right- and left-sided colon cancer surgery can be performed safely without MBP. In rectal cancer surgery, a rectal enema only before surgery seems to be dangerous because of the higher rate of severe postoperative complications. abstract_id: PUBMED:38058372 Does mechanical bowel preparation really prevent complications after colorectal surgery depending on the lesion localization? A myth or fact?
Objectives: Despite being routinely used before elective colorectal surgery in most surgical clinics, mechanical bowel preparation (MBP) remains controversial. This study aimed to investigate postoperative complications and outcomes of right, left, or rectosigmoid resection without MBP. Material And Methods: Patients who underwent elective colorectal surgery without mechanical bowel preparation and oral antibiotics between January 2011 and December 2021 were included in the study. Patients were categorized according to the side of resection, and these subgroups were compared for anastomotic leakage, surgical site infections (SSI), and overall morbidity measured using the Clavien-Dindo complication grade. Results: Data of 422 patients were analyzed. Overall anastomotic leakage was found in 14 patients (3.3%), SSI in 46 (10.9%), collection in 14 (3.3%), mortality in 18 (4.3%), and reoperation in 17 (4%) patients. When the groups were evaluated separately, anastomotic leakage was observed in six patients (3.9%) after right colectomy, two (1.9%) after left colectomy, and six (3.7%) after rectosigmoid resection. There was no statistical difference between the groups (p= 0.630). Furthermore, there was no statistical difference between the groups regarding collection and reoperation (p= 0.31 and p= 0.251, respectively). Conclusion: Study results showed that anastomotic leakage, surgical site infection, intra-abdominal collection, reoperation, and mortality rates were similar to those reported in the current literature for studies with mechanical bowel preparation. In addition, these results were similar across resection sites. Answer: The role of mechanical bowel preparation (MBP) in preventing postoperative complications in elective colorectal surgery has been a subject of debate.
Some studies and reviews suggest that MBP does not significantly reduce the incidence of postoperative complications such as anastomotic leak rates, surgical site infection (SSI), intra-abdominal collection, mortality, reoperation, or hospital length of stay when compared with no preparation (PUBMED:29398873). Furthermore, a multicentre randomised trial found no difference in the rate of anastomotic leakage between patients who did or did not have MBP, although fewer abscesses were noted after anastomotic leakage in patients who had MBP (PUBMED:18156032). However, other evidence suggests that a combination of oral antibiotics, MBP, and intravenous antibiotics at induction may improve a spectrum of colectomy-specific postoperative outcomes (PUBMED:27324152). Additionally, some studies indicate that MBP combined with oral antibiotics (OAB) may be beneficial, although the evidence is not yet solid, and further randomized controlled trials (RCTs) are needed (PUBMED:32098461, PUBMED:31123559). A meta-analysis including 36 studies found no significant difference in postoperative complications between patients receiving MBP and those receiving no MBP or just a single rectal enema (PUBMED:29398873). Another study concluded that MBP should not be administered routinely prior to elective colorectal surgery (PUBMED:18156032). Similarly, a review of randomized clinical trials did not support the hypothesis that MBP reduces anastomotic leak rates or other complications (PUBMED:12804412). An open-label prospective randomized controlled trial found no statistically significant difference in postoperative complications or mortality between patients prepared with MBP and those by nonmechanical means (PUBMED:27011543). 
A propensity score matching analysis suggested that right- and left-sided colon cancer surgery could be performed safely without MBP, but cautioned that rectal cancer surgery without MBP might lead to a higher rate of severe postoperative complications (PUBMED:25048485). In summary, while MBP is still in widespread surgical practice, the current evidence does not conclusively support its routine use in preventing postoperative complications in elective colorectal surgery. Further high-quality RCTs are required to definitively answer this question (PUBMED:17853636).
Instruction: Primary gastrointestinal cancers in the Western Region of Saudi Arabia. Is the pattern changing? Abstracts: abstract_id: PUBMED:11423884 Primary gastrointestinal cancers in the Western Region of Saudi Arabia. Is the pattern changing? Objective: To determine the age, sex and relative frequencies of various gastrointestinal malignancies in patients registered with the National Cancer Registry from the Western Region of Saudi Arabia from January 1994 to December 1997, and compare these data with previous hospital-based studies about the pattern of these malignancies in Saudi Arabia. Methods: A National Cancer Registry was established in Saudi Arabia in 1992, and since 1st January 1994 all cancer cases in Saudi Arabia have been registered with the National Cancer Registry. All National Cancer Registry data on patients with primary gastrointestinal cancers from the Western Region of Saudi Arabia from January 1994 to December 1997 were retrieved and analyzed according to ethnic origin, site, age, sex and relative frequencies of various tumors. Results: Out of a total of 1833 cases with primary gastrointestinal malignancies, 1207 (66%) were Saudis, while the rest were Non-Saudis. Colorectal cancer was the most common malignancy found in both population groups, accounting for 28.5% of cases in Saudis and 36% in Non-Saudis. Malignancies of the liver, stomach and esophagus followed in decreasing frequencies. The mean age of the Saudi population was 58 ± 16 years (standard deviation), with a male to female ratio of 1.67:1. About 80% of the patients were above 40 years of age and the peak of onset for most of the tumors was between 50 and 70 years of age. Conclusions: This study highlights that colorectal cancer is the most common gastrointestinal malignancy seen in the Western Region of Saudi Arabia, followed by hepatocellular carcinoma.
This is in sharp contrast to the previous hospital-based studies from Saudi Arabia and national trends as seen in the cancer incidence report from the National Cancer Registry of Saudi Arabia. The factors behind this changing pattern of gastrointestinal malignancy remain to be determined. abstract_id: PUBMED:498028 Malignant neoplasms in Saudi Arabia. In a sample of 1000 consecutive malignant neoplasms in Saudis resident in the Western Region of Saudi Arabia, malignant lymphoma was the commonest of the life-threatening malignancies. The differences between malignant lymphoma in this sample and Western series include the greater frequency of lymphoma; the tendency for reticulum cell and poorly differentiated lymphomas to present as abdominal lesions; the earlier peak of prevalence of Hodgkin's disease, and the dissimilar proportions of its subtypes. The distribution of cancers in the gastrointestinal tract in our sample is almost the reverse of that encountered in the West, in that cancers of the mouth and esophagus were more common than cancer of the lower intestinal tract. Lung cancer was relatively uncommon. The smoking habit is not so prevalent in Saudi Arabia as in the West and there is a need to maintain this situation by discouraging smoking. Cancer of the breast was by far the commonest major malignancy in the female, although most Saudi women have their first child early in their reproductive life. Skin cancers proved to be the most prevalent malignancy, and of these squamous cell carcinoma was the most common. The biases that affect studies such as ours in Saudi Arabia are stressed. abstract_id: PUBMED:22471466 Clinico-pathological pattern of extranodal non-Hodgkin's lymphoma in Saudi Arabia. Objectives: The incidence of primary extranodal non-Hodgkin's lymphoma (NHL) is variable in different regions of the world but there is a paucity of literature on various demographic aspects of extranodal NHL as a group.
This study was conducted to evaluate the clinico-pathological pattern of extranodal NHL in Saudi patients. Methods: We retrospectively studied a cohort of 855 NHL patients in four tertiary care centres in Riyadh, Saudi Arabia over a period of 5 years. Results: Extranodal NHL constituted 41.4% of the total. The mean age of affected patients was 55 ± 18 years, with a male to female ratio of 1.5:1.0. Most presented in the age range of 41-60 years. NHL of the gastro-intestinal (GI) tract was the most common extranodal variety, followed by head and neck NHL (18%), primary cutaneous lymphoma (14.4%), primary CNS lymphoma (5.6%), and primary bone, thyroid and soft tissue lymphoma (4.5% each). In the GI tract, the stomach was the most common site involved, accounting for more than 81% of GI NHL. Diffuse large B-cell lymphoma was the most common histologic type, comprising around 72% of all extranodal lymphomas. Conclusions: Extranodal NHL is common in Saudi Arabia and diffuse large B-cell lymphoma is the most common histologic subtype. We found significant differences in the pattern of extranodal NHL in Saudi patients compared to those reported from other parts of the world. Further studies focused on the risk factors and treatment outcome are needed to better understand the biology of a disease common in this population. abstract_id: PUBMED:3814236 Alimentary malignancies in Arabs in the Eastern Province of Saudi Arabia. A study was made of 89 cases of alimentary malignancies in Arab patients in the Eastern Province of Saudi Arabia. The pattern of alimentary involvement observed was colo-rectum 43%, stomach 34%, oesophagus 19% and small intestine 4.5%. This finding was in contrast to previous studies in Saudi Arabia which had reported the upper alimentary tract, particularly the oesophagus, as the site most frequently affected. Of the patients 80% were Saudis. The male:female ratio was 3.9:1.
A striking feature was the youthfulness of several patients, 44% being under fifty years of age. There were only three cases of alimentary lymphoma. Although colo-rectal schistosomiasis was a moderately frequent finding in the population studied, there was no pathological evidence to support an aetiological association between bilharziasis and large bowel malignancy. abstract_id: PUBMED:22227543 GIST in Saudi Arabia: multicentric histopathological genetic study of 75 surgically excised cases. The availability of CD117 (cKit) immunohistochemical testing in most hospital laboratories has facilitated the diagnosis of GIST in cKit-positive cases. The aim of this study was to describe the histopathological variations of GIST in Saudi Arabia. The study involved analysis of paraffin blocks of 75 surgically excised GIST and GIST-like stromal tumors obtained from four major referral hospitals in Saudi Arabia (Riyadh Military Hospital, King Abdulaziz Medical City, King Faisal Specialist Hospital & Research Center and Aramco's hospital) between 1998 and 2009. Patient sex and age were noted in addition to the following tumor variables: size, site, histological grade and type, immunohistochemical profile and genetic analysis of the cKit mutation. The stomach was the commonest site for tumors and the spindle cell type was the most frequently seen variant. Further subclassification of the histopathological type was made in order to recognize the morphological pattern of the tumor. The most common tumor grades seen were the low-risk and high-risk groups. There was one case of familial GIST-paraganglioma syndrome, and the youngest subject was an 8-year-old girl with high-risk spindle cell gastric GIST. Genetic study in 34 cases revealed exon 11 mutations in all of the cKit genes. abstract_id: PUBMED:37621805 Anemia Profiles in Cancer Patients: Prevalence, Contributing Factors, and Insights From a Retrospective Study at a Single Cancer Center in Saudi Arabia.
Background: Anemia, a common complication of cancer and its treatments, significantly affects cancer patients' survival and quality of life. Nevertheless, there is limited research conducted in the southern region of Saudi Arabia regarding its effects. This study aims to assess the prevalence of anemia, as well as its associated factors, among cancer patients undergoing active chemotherapy treatment. Method: This retrospective study analyzed adult cancer patients who underwent chemotherapy at King Khaled Hospital's oncology department in Najran, Saudi Arabia, between 2017 and 2022. We aimed to determine the prevalence and contributing factors of anemia through comprehensive demographic and clinical assessment. Univariate analysis was performed to assess factors necessitating blood transfusion. Result: A total of 95 cancer patients received chemotherapy, with a mean age of 52.2 ± 16.5 years. The majority were females (65.3%) aged between 18 and 64 years (74.7%). Gastrointestinal (42.1%) and breast (17.9%) cancers were the most prevalent malignancies. Most patients (56.8%) were in locally advanced stages. Anemia was present at admission in 48 (50.5%) patients with a higher prevalence among colorectal and genitourinary tract cancer patients. The mean hemoglobin (Hb) drop during treatment was 9.1 ± 2.1 g/dL. Anemia severity was stratified as follows: life-threatening (7.4%), severe (33%), moderate (31%), and lower limited (29%). Blood transfusions were required in 79% of cases. Advanced age, increased chemotherapy cycles, and anemia of chronic disease (ACD) were significantly associated with increased anemia severity (p<0.05). Increasing chemotherapy cycles also correlated with an increased need for blood transfusion (p<0.001). 
Older patients (≥65 years) had higher rates of anemia at admission, poorer Eastern Cooperative Oncology Group (ECOG) performance status, a greater Hb decrease during treatment, and an increased need for blood transfusions (p<0.05) compared to younger patients (<65 years). Conclusion: The study noted a high prevalence of anemia (50.5%) in patients receiving active cancer treatment, specifically in the context of genitourinary and gastrointestinal tract cancers. Advanced age, frequent chemotherapy cycles, and ACD were associated with increased severity of anemia. Furthermore, older patients displayed a higher frequency of anemia, poorer performance status, and an increased requirement for transfusions with an escalating number of chemotherapy cycles. abstract_id: PUBMED:34988907 Antibacterial Activity of Some Medicinal Plants in Al Baha Region, Saudi Arabia, Against Carcinogenic Bacteria Related to Gastrointestinal Cancers. Background: Gastrointestinal cancers are among the most dangerous cancers all over the world. Gut microbiota dysbiosis increases the risk of GI cancers and induces the host's susceptibility to carcinogenic bacteria. Antibiotic resistance is rising in these bacteria. Thus, discovering new safe and effective antibacterial agents is a worldwide concern. This study evaluates the antibacterial activity of six wild medicinal plants from the Al Baha region in Saudi Arabia. Methods: Aerial parts of Cissus quadrangularis, Aloe castellorum, Psiadia punctulata, Aloe pseudorubroviolacea, Barbeya oleoides, and Teucrium yemense were collected and dried for extraction with ethanol. The minimum inhibitory concentrations (MIC) of these ethanolic extracts against the carcinogenic bacteria Bacteroides fragilis, Clostridium ssp., Cutibacterium acnes, Escherichia coli, Fusobacterium nucleatum, Helicobacter pylori, Mycoplasma spp., Neisseria gonorrhoeae, Porphyromonas gingivalis, Salmonella enterica, and Treponema pallidum were evaluated to determine their antibacterial activity.
Results: All extracts showed antibacterial activity with MICs lower than 1 mg/ml. Psiadia punctulata showed the highest antibacterial activity, while the Aloe species showed the lowest. Conclusion: The studied plant extracts showed high antibacterial effectiveness against the carcinogenic bacteria related to gastrointestinal cancers due to their high content of pharmaceutical components. These plants could be explored further for the development of new antibacterial products against these carcinogenic bacteria. abstract_id: PUBMED:37533611 Cancer Therapy-Related Cardiotoxicity: A Comprehensive Retrospective Analysis at Najran Cancer Center, Saudi Arabia. Background Cardiotoxicity, produced as an adverse effect of anticancer therapy, is a common issue during cancer treatment. Acute coronary syndrome, myocarditis, arrhythmias, or heart failure can all be symptoms of this issue. Little is known about its occurrence among Saudi Arabian cancer patients. This study aims to investigate factors linked to anticancer therapy-related cardiotoxicity. Methods A retrospective study was conducted from April 2020 to May 2022 at the King Khalid Hospital, Najran, Saudi Arabia. The study included adult cancer patients receiving anticancer therapy, regardless of their cardiovascular disease history. Univariate analysis was used to investigate factors associated with the occurrence of cardiotoxicity related to anticancer therapy. Results Of 78 patients receiving anticancer therapy, cardiotoxicity occurred in 12 (15.4%) patients. The mean age was 56.5 ± 13.4 years, with 33.3% aged over 65 years. Comorbidities included hypertension (44; 56.4%), diabetes (41; 52.6%), dyslipidemia (13; 16.7%), smoking (16; 20.5%), heart disease (6; 7.7%), trastuzumab use (9; 11.5%), and chronic kidney disease (2; 2.6%). The most common cancers were breast cancer and gastrointestinal cancer (27.6% each).
Monoclonal anticancer agents (35; 46.1%) and alkylating agents (29; 38.2%) were the most commonly used chemotherapies. Cardiac protective agents were used in 16 (21.1%) of patients, with angiotensin-converting enzyme (ACE) inhibitors (15; 19.7%) and statins (13; 17.1%) being the most prescribed. Baseline ejection fraction (EF) was normal in 69 (90.8%) of cases. The follow-up duration was 1.93 ± 1.90 years. A drop in EF occurred in five (6.6%) of cases. Dyslipidemia (OR: 0.12; 95% CI: 0.03-0.47, p=0.002), previous heart disease (OR: 0.14; 95% CI: 0.02-0.81, p=0.029), and impaired baseline EF (p=0.029) were associated with increased risk of cardiotoxicity. Statins (OR: 0.22; 95% CI: 0.05 to 0.84, p=0.028) and antiplatelet agents (OR: 0.19; 95% CI: 0.03 to 1.01, p=0.051) were protective against cardiotoxicity. Conclusion Effective anti-cancer therapy may be accompanied by an increased risk of cardiotoxicity. In this study, a history of prior heart disease, dyslipidemia, low baseline ejection fraction, and the administration of multiple anticancer therapy agents were associated with cardiotoxicity. Proactive management strategies aimed at mitigating the potential cardiotoxic effects of anti-cancer therapies are crucial. abstract_id: PUBMED:17588047 Pattern of primary gastrointestinal tract malignancy among Saudi nationals: A retrospective study. The histopathological reports from the central laboratory of Riyadh Central Hospital from 1404 to 1406H (1984 to 1986) were reviewed retrospectively to determine the pattern of primary cancer of the gastrointestinal tract among Saudi nationals. During the study period, a total of 32,990 histopathological examinations were conducted. Neoplasms were identified in 4683 cases (14.2%); 1772 (37.8%) were malignant and 2911 (62.2%) were benign. The stomach (31%), liver (20%), and esophagus (19%) were the most commonly involved organs.
The proportional frequencies of gastrointestinal malignancies differ from those seen in Western countries, although the age and sex distribution are similar. The possible etiological factors are discussed and the need for a cancer registry emphasized. abstract_id: PUBMED:1925505 Pathology and clinical features of gastro-intestinal lymphoma in Saudi Arabia. The histopathology and clinical presentation of 19 cases of primary gastrointestinal lymphoma is described. Our patients are similar to others in the Middle East but have a lower incidence of diarrhoea and malabsorption. All revealed a widespread chronic inflammatory background. Four patients with primary gastric lymphoma had endoscopic biopsies from the duodenum; these biopsies were completely free from tumour but showed a moderate to severe diffuse lymphoplasmacytic infiltrate. Electron microscopy shows that tumour cells penetrate basement membranes and invade the epithelium which becomes thin and attenuated. Substances which inhibit lymphocyte tropism might be useful in preventing intestinal ulceration. The MALT concept has been found useful in classification of the tumours. Two patients with unusual mesenteric node histology are described and it is thought that the appearances may indicate a substantial capacity for differentiation in the group of tumours. Answer: The pattern of primary gastrointestinal cancers in the Western Region of Saudi Arabia appears to be changing. Historically, hospital-based studies in Saudi Arabia reported that the upper alimentary tract, particularly the esophagus, was the site most frequently affected by cancer (PUBMED:3814236). However, more recent data from the National Cancer Registry indicates that colorectal cancer has become the most common gastrointestinal malignancy in the Western Region of Saudi Arabia, followed by hepatocellular carcinoma (PUBMED:11423884). 
This shift is in contrast to previous trends and suggests a changing pattern of gastrointestinal malignancy in the region. The study from the National Cancer Registry, which analyzed data from January 1994 till December 1997, found that colorectal cancer accounted for 28.5% of cases in Saudis and 36% in Non-Saudis, with liver, stomach, and esophagus malignancies following in decreasing frequencies. The mean age of the Saudi population with these cancers was 58 years, with a male to female ratio of 1.67:1, and about 80% of the patients were above 40 years of age (PUBMED:11423884). In contrast, an earlier study of 1000 consecutive malignant neoplasms found that malignant lymphoma was the most common life-threatening malignancy, with a greater frequency of lymphoma presenting as abdominal lesions, and cancers of the mouth and esophagus were more common than those of the lower intestinal tract (PUBMED:498028). The changing pattern of gastrointestinal cancers in Saudi Arabia may be influenced by various factors, including lifestyle changes, dietary habits, and possibly genetic factors, although the specific reasons for this shift remain to be determined (PUBMED:11423884). Further research focused on risk factors and treatment outcomes is needed to better understand the biology of these diseases common in the Saudi population (PUBMED:22471466).
Instruction: Can color doppler predict the uniformity of HIFU-induced prostate tissue destruction? Abstracts: abstract_id: PUBMED:15264239 Can color doppler predict the uniformity of HIFU-induced prostate tissue destruction? Background: Tissue blood perfusion influences the results of some hyperthermia and thermotherapy procedures, but its role in the outcome of prostate cancer treatment by high-intensity focused ultrasound (HIFU) has not been evaluated yet. We evaluated preoperative prostate color Doppler as a predictor of the efficacy of HIFU treatment. Methods: Thirty-five patients underwent pre- and post-contrast color Doppler examination of the prostate before HIFU treatment. Specific software was used to calculate, on color Doppler images, the color pixel density (CPD), and the specific flow (SF, i.e., mean velocity x CPD) in different regions of interest. Post-treatment sextant biopsies were obtained in 31 patients, 5.8 +/- 2.8 months after HIFU treatment. Results: No significant correlation was found between the uniformity of HIFU-induced tissue destruction observed on control biopsies and the pre-treatment CPD/SF values in any region of interest, either before or after contrast injection. On the other hand, history of radiation therapy was significantly associated with homogeneous tissue destruction and history of hormone therapy was significantly associated with incomplete tissue destruction. Conclusions: Color Doppler cannot predict the uniformity of HIFU-induced tissue destruction. History of radiation therapy was found to be a factor of favorable prognosis and history of hormone therapy was found to be a factor of poor prognosis in our population. abstract_id: PUBMED:16439055 Evaluation of color Doppler in guiding prostate biopsy after HIFU ablation. Objective: Transrectal ultrasound cannot accurately depict early cancer recurrences after prostate high-intensity focused ultrasound (HIFU) ablation. 
We evaluated transrectal color Doppler (CD) in guiding post-HIFU prostate biopsy. Methods: Prostate CD-guided sextant biopsies were obtained in 82 patients who had undergone prostate HIFU ablation for cancer, 24 of whom had received hormone therapy before the treatment. At the time of biopsy, a subjective CD score was given to all biopsy sites (0=no flow; 1=minimal flow; 2=suspicious flow pattern). CD findings were compared with biopsy results. Results: CD was a significant predictor of biopsy findings, according to univariate and multivariate site-by-site analysis. However, only 36 of 94 sites with residual cancer had positive CD findings, and thus, negative CD findings should not preclude random biopsy. There was a significant interaction between CD diagnostic capability and a history of hormone therapy before HIFU treatment. CD was a significant and independent predictor of biopsy findings in patients who had not received hormone therapy (odds ratio: 4.4; 95%CI: 2.5-7.9; p<0.0001), but not in those who had (odds ratio: 1.3; 95%CI: 0.5-3.4; p>0.5). Conclusion: Biopsies taken from CD-positive sites were 4.4 times more likely to contain cancer in patients who did not receive hormone therapy. CD could not reliably depict cancer recurrence in patients with a history of hormone therapy. abstract_id: PUBMED:9422445 Color Doppler sonography of the prostate. The digital rectal exam (DRE), serum prostate-specific antigen (PSA) level and transrectal ultrasound (TRUS) serve as the primary means of early detection of prostate cancer, but all have known limitations. Because of the low predictive value for the gray-scale detection of prostate cancer with TRUS, attempts have been made to improve the diagnostic accuracy of TRUS by incorporating color Doppler imaging (CDI) into the standard examination. With CDI, the sonographer has the ability to investigate both normal and abnormal flow within the prostate gland.
Recent studies have supported the ability of CDI to detect vascularity in tumors that otherwise were not detectable with conventional gray-scale TRUS, which suggests the potential for increased tumor detection. In the following article, the rationale behind the use of color Doppler, the normal Doppler features of the prostate gland, and the results of published studies on the diagnostic accuracy of color Doppler in prostate cancer are analyzed. In addition, areas of future research interest are reviewed. abstract_id: PUBMED:7687359 Color-Doppler in transrectal echography of the prostate. Preliminary results The authors report their experience with the color Doppler study of the prostate in a series of 1,075 transrectal US exams performed September 1991 to June 1992. US-guided biopsy was performed in 82 patients; histopathology confirmed the diagnosis in 32 cases. Color-Doppler US provided accurate blood vessel mapping in the normal gland. As for benign prostate conditions, color-Doppler features definitely helped a diagnosis to be made, especially in nodular hyperplasia. Carcinomas usually exhibited a marked and inhomogeneous increase in the vascularization of peri- and intralesional areas. This feature was especially valuable in isoechoic carcinomas, where US is known to exhibit limitations. The authors suggest that the integration of clinical, laboratory and US findings with color-Doppler findings allows higher diagnostic accuracy. abstract_id: PUBMED:8912444 Color Doppler sonography of the prostate. Digital rectal examination, prostate-specific antigen and transrectal sonography (TRS) are the main diagnostic tools for evaluating men at risk for carcinoma of the prostate (PCa). PCa has a hypoechoic echo pattern via TRS in 60-85% of cases. Hypoechoic areas are observed, however, in benign prostatic hyperplasia (BPH) as well. For this reason, TRS yields a low specificity and sensitivity in detecting PCa in screening programs.
Additional data, independent of the grey scale information available via TRS, can be obtained by using the innovative technology of color Doppler sonography. This imaging modality allows simultaneous real-time sonographic visualization and evaluation of vascular structures. The present study analyzes to what extent the analysis of blood flow perfusion of prostatic parenchyma can provide further information concerning the pathologic alterations occurring in abnormal prostate tissue. Blood flow phenomena were reproducibly demonstrated in the prostate and in the periprostatic tissue with the color-coded Doppler sonography (CD-TRS). 31 patients with the diagnosis of BPH could be separated into two different groups according to the findings with TRS and CD-TRS. In contrast, patients with histologically proven PCa (n = 16) had subjectively increased perfusion rates. Blood flow phenomena were analyzed using computer-calculated resistive index and pulsatility index as evaluations of flow and frequency. A preliminary analysis of the wave-form characteristics shows that flow in the prostate gland is associated with a high resistive index in both PCa and BPH. The mean frequency demonstrated in the pulsatility index appears to be high in PCa, but low in BPH cases. Further studies are needed to verify the described factors' abilities to differentiate prostate tissue as either PCa or BPH. abstract_id: PUBMED:29329687 Bubble-Induced Color Doppler Feedback Correlates with Histotripsy-Induced Destruction of Structural Components in Liver Tissue. Bubble-induced color Doppler (BCD) is a histotripsy-therapy monitoring technique that uses Doppler ultrasound to track the motion of residual cavitation nuclei that persist after the collapse of the histotripsy bubble cloud. 
In this study, BCD is used to monitor tissue fractionation during histotripsy tissue therapy, and the BCD signal is correlated with the destruction of structural and non-structural components identified histologically to further understand how BCD monitors the extent of treatment. A 500-kHz, 112-element phased histotripsy array is used to generate approximately 6- × 6- × 7-mm lesions within ex vivo bovine liver tissue by scanning more than 219 locations with 30-1000 pulses per location. A 128-element L7-4 imaging probe is used to acquire BCD signals during all treatments. The BCD signal is then quantitatively analyzed using the time-to-peak rebound velocity (tprv) metric. Using the Pearson correlation coefficient, the tprv is compared with histologic analytics of lesions generated by various numbers of pulses using a significance level of 0.001. Histologic analytics in this study include viable cell count, reticulin-stained type III collagen area and trichrome-stained type I collagen area. It is found that the tprv metric has a statistically significant correlation with the change in reticulin-stained type III collagen area with a Pearson correlation coefficient of -0.94 (p <0.001), indicating that changes in BCD are more likely because of destruction of the structural components of tissue. abstract_id: PUBMED:1603991 Artifacts in echo-Doppler and color-Doppler It is well known that artifacts can be observed during US examinations; the same is true for Doppler and color-Doppler images of blood flow. Recognizing these artifacts is important to avoid image misinterpretations and, when possible, to overcome them by modifying either techniques or unit settings, or both. This work was aimed at presenting the various artifacts that can be observed during Doppler investigations, classifying them, and attempting to understand the physical and/or technical principles underlying their origin.
Doppler and color-Doppler artifacts can be divided into four large groups: 1) artifacts regarding evaluation of the presence of flow; 2) artifacts regarding evaluation of the direction of flow; 3) artifacts regarding determination of the velocity of flow; 4) artifacts affecting spatial location, on the screen, of the examined vessel. Each of the above can cause severe diagnostic misinterpretations, if not correctly recognized and interpreted. It must be kept in mind that an accurate analysis of unit settings during scanning, and the meticulous evaluation of the obtained color images are of the utmost importance for the proper use of this valuable but difficult diagnostic technique. abstract_id: PUBMED:7617860 Normal prostate gland: examination with color Doppler US. Purpose: To evaluate the vascular anatomy of the prostate gland with transrectal color Doppler ultrasound (US). Materials And Methods: Thirty-five normal prostate glands were studied. Imaging findings were interpreted on the basis of findings from previous anatomic studies. Results: Distribution of flow signals within the gland was differentiated into two parenchymal groups of arteries, urethral and capsular. The urethral group entered the prostate at the bladder neck and sent forth branches to supply the periurethral zone and the inner portion of the gland. The capsular group coursed along the anterolateral surface of the prostate and sent forth perforating branches to the outer portion of the gland. Intraprostatic venous signals were seen mainly within the periurethral zone. Conclusion: Transrectal color Doppler US can depict the vascular anatomy of the prostate gland. Knowledge of the normal vascular appearance of the gland should aid in the differentiation of normal from abnormal findings. 
abstract_id: PUBMED:9524593 Potential uses of color Doppler in periskeletal soft tissue neoplasms Integrated imaging plays a fundamental role in the study of periskeletal soft tissue tumors, for both diagnosis and treatment planning. The steady and progressive technologic progress of color Doppler US equipment now permits the integration of conventional morphostructural parameters with the biofunctional data of lesion flow patterns and relative qualitative features. To assess color Doppler capabilities in differentiating benign from malignant soft tissue tumors, we reviewed the B-mode and color Doppler findings of 43 consecutive patients with a palpable periskeletal soft tissue mass. All patients were examined with a real time unit (Ultramark 9 HDI), with a broadband (5-10 MHz) linear transducer operating at 6.5 MHz for Doppler measurements. The PRF was set at 1500 to 800 Hz with 70% color gain; a 100 Hz wall filter was used. We kept the color box in the area of interest as small as possible to keep the frame rate high; pulsed Doppler studies were performed with a small sample volume and 2000 Hz PRF. The following signs were considered: morphostructural features; the presence/absence of color signals; the (peripheral/internal) site of vascular branches, their caliber and course; the number of afferent vascular poles; resistance index. As a rule, malignant masses tend to differ from benign masses for the presence of multiple vascular afferent branches, especially if they have an irregular pattern and caliber, and for the variability of the resistance index measured in different parts of the same mass. 
Further examinations, performed with second level imaging (CT and MRI) and microhistologic tests, respectively after biopsy and surgical resection, confirmed the high predictive value of color Doppler US, with only 1 false negative and 2 false positives; color Doppler sensitivity and specificity were 94.7% and 91.6%, respectively, which are higher values than those obtained with US alone (63% and 66.6%). Therefore, we believe that color Doppler US can be systematically applied to the study of periskeletal soft tissue masses, integrating conventional US for the correct selection of the patients to be submitted to second level investigations. abstract_id: PUBMED:7597229 Color Doppler echography in the tissue characterization of renal masses To investigate color-Doppler US capabilities in tissue characterization, 42 renal masses were studied from November, 1993, to July, 1994. B-mode morphologic patterns were studied first and then integrated with color flow patterns; color areas and blood flow distribution were assessed for each lesion. Color signals were used as a guide to obtain arterial and venous Doppler spectra and to calculate flow velocities and pulsatility index (PI). Based on the us morphologic appearance, the 42 lesions were divided into 3 groups: A) lesions with morphologic and volumetric patterns of malignancy; B) small lesions (< 3 cm, > 3 cm < 5 cm); C) complex cysts. Data were organized and analyzed statistically; some findings were found to be typical of malignancy: hypervascularity, intralesional arterial signals with high flow velocities (systolic peak cut-off: 0.3-0.4 m/sec), high PI values (> 1) and, finally, color signals inside complex cysts. Our results suggest that color-Doppler US is a useful tool to assess renal masses vascularity, with the color-Doppler technique increasing US diagnostic accuracy. Nevertheless, small lesions need more careful study because their features are poorly demonstrated even with CT and MRI. 
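The color Doppler sensitivity and specificity reported above (94.7% and 91.6%, with only 1 false negative and 2 false positives among the 43 masses) can be reproduced from a standard 2 × 2 confusion table. A minimal sketch, assuming back-calculated counts TP = 18, FN = 1, TN = 22, FP = 2 (the abstract does not give the raw table, so this split into 19 malignant and 24 benign masses is an inference):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary diagnostic test."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Counts back-calculated from the reported figures (hypothetical: the paper
# only states 1 false negative and 2 false positives among 43 masses).
sens, spec, ppv, npv = diagnostic_metrics(tp=18, fp=2, tn=22, fn=1)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```

With these counts, sensitivity is 18/19 ≈ 94.7% and specificity 22/24 ≈ 91.7%, matching the reported figures up to rounding.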
Answer: No, color Doppler cannot predict the uniformity of HIFU-induced prostate tissue destruction. A study evaluating preoperative prostate color Doppler as a predictor of the efficacy of HIFU treatment found no significant correlation between the uniformity of HIFU-induced tissue destruction observed on control biopsies and the pre-treatment color pixel density (CPD) or specific flow (SF) values in any region of interest, either before or after contrast injection. Instead, a history of radiation therapy was significantly associated with homogeneous tissue destruction, and a history of hormone therapy was significantly associated with incomplete tissue destruction (PUBMED:15264239).
Instruction: Recommendations for adnexal cysts: have the Society of Radiologists in Ultrasound consensus conference guidelines affected utilization of ultrasound? Abstracts: abstract_id: PUBMED:23358210 Recommendations for adnexal cysts: have the Society of Radiologists in Ultrasound consensus conference guidelines affected utilization of ultrasound? Purpose: In 2009, the Society of Radiologists in Ultrasound (SRU) convened a consensus conference to discuss the management of adnexal cysts in asymptomatic women and reach consensus on when follow-up imaging was required. The purpose of this quality assurance project was to assess the impact of using the resulting guidelines in our institution during 1-month periods before and after publication of the guidelines. Methods: We retrospectively reviewed all pelvic ultrasound reports for a 1-month period before the conference (February 2009) and 2 years later (February 2011) after publication of the SRU guidelines. Incidence of recommendations for simple cysts and classic hemorrhagic cysts 5 cm or less in premenopausal women and simple cysts less than 1 cm in postmenopausal women was compared. Fisher exact test was used to compare proportions. Results: Over the time period evaluated, the number of pelvic sonograms performed in our department decreased by 27%. The overall number of cysts with recommendations for follow-up decreased from February 2009 (132 studies with recommendations for follow-up in 870 pelvic ultrasound examinations, 15%) to February 2011 (71 recommendations for follow-up in 639 examinations, 11%; P = 0.02). The percentage of premenopausal simple cysts and classic hemorrhagic cysts described as less than 5 cm where follow-up was recommended decreased from 39/48 (89%) to 2/29 (7%, P < 0.0001). Conclusions: Use of the SRU guidelines on the management of adnexal cysts has, in our practice, dramatically decreased radiologist recommendations for follow-up of benign appearing cysts.
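The abstract above compares follow-up-recommendation proportions (132/870 in 2009 vs 71/639 in 2011) with a Fisher exact test. A stdlib-only sketch of that test in its two-sided hypergeometric form (illustrative only, not the authors' code):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables as or less likely than the observed one."""
    r1, r2 = a + b, c + d          # row totals
    c1, n = a + c, a + b + c + d   # first column total, grand total
    denom = comb(n, c1)

    def p_table(x):  # hypergeometric probability of the table with top-left cell x
        return comb(r1, x) * comb(r2, c1 - x) / denom

    p_obs = p_table(a)
    return sum(p_table(x)
               for x in range(max(0, c1 - r2), min(r1, c1) + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))  # tolerance for float ties

# Proportions from the abstract: 132/870 recommendations in 2009 vs 71/639 in 2011
# (the abstract reports P = 0.02 for this comparison).
p = fisher_exact_two_sided(132, 870 - 132, 71, 639 - 71)
print(f"p = {p:.3f}")
```

In practice one would reach for `scipy.stats.fisher_exact`, but the explicit sum makes clear which tables contribute to the two-sided p-value.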
abstract_id: PUBMED:31770022 Comparison of International Ovarian Tumor Analysis Simple Rules to Society of Radiologists in Ultrasound Guidelines for Detection of Malignancy in Adnexal Cysts. OBJECTIVE. The purpose of this study was to evaluate the International Ovarian Tumor Analysis (IOTA) simple rules and the Society of Radiologists in Ultrasound (SRU) guidelines for detecting ovarian malignancy in a general population of women presenting to radiology departments with adnexal cystic lesions. MATERIALS AND METHODS. A retrospective multicenter study of ultrasound-detected adnexal cystic lesions with appropriate follow-up was conducted. Lesions were classified into benign, indeterminate, or malignant categories according to criteria based on the IOTA simple rules and the SRU guidelines. The prevalence of nonneoplastic cysts, neoplasms, and malignant tumors was calculated. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy were calculated, and ROC analysis for the diagnosis of malignancy was performed. RESULTS. A total of 697 women with 764 cystic lesions were included; 85.2% (651/764) of the lesions were nonneoplastic, 12.2% (93/764) were benign neoplasms, and 2.6% (20/764) were malignant neoplasms. Nearly all malignancies were classified into indeterminate and malignant categories. The prevalence of malignancy in the indeterminate category was 4.8% (7/145) (SRU) to 10.7% (7/65) (IOTA) and in the malignant category was 18.1% (13/72) (SRU) to 34.3% (12/35) (IOTA). Only one malignancy was misclassified as benign by the IOTA simple rules. The sensitivity of the IOTA simple rules for malignancy was 90.0%; specificity, 96.5%; PPV, 29.0%; NPV, 99.8%; and accuracy, 96.4%. The corresponding values for the SRU guidelines were 100%, 89.6%, 14.9%, 100%, and 89.8%. In ROC analysis, the IOTA simple rules were slightly more accurate than the SRU guidelines (AUC, 0.9805 versus 0.9713; p = 0.0003). CONCLUSION. 
Both imaging characterization methods were sensitive for identifying ovarian malignancies, but the PPV was low among women presenting to radiology departments, and the indeterminate classification harbored one-third of the total malignancies. Exploration of varied clinical settings and inclusion of secondary tests may help to refine these systems. abstract_id: PUBMED:24475836 US of incidental adnexal cysts: adherence of radiologists to the 2010 Society of Radiologists in Ultrasound guidelines. Purpose: To retrospectively evaluate the adherence of radiologists to the 2010 Society of Radiologists in Ultrasound (SRU) guidelines for management of adnexal cysts. Materials And Methods: One radiologist reviewed the interpreting radiologists' reports for 398 adnexal cysts detected at ultrasonography in 398 patients (mean age ± standard deviation, 41.1 years ± 14.3; range, 18-87 years) and classified the management recommendations according to adherence to SRU guidelines. Recommendations that were not adherent were classified as overmanagement, undermanagement, or incomplete. The effects of menopausal status, cyst size and other features, and the individual interpreting radiologist on adherence were assessed by using χ(2) tests, and the association of these factors with over- and undermanagement was identified. Results: Overall adherence was 59%; overmanagement, 27%; undermanagement, 9%; and incomplete, 6%. Menopausal status, cyst size, and other cyst features significantly affected adherence (P < .001). For evaluation of premenopausal patients, adherence was 63% (overmanagement, 30%) and of postmenopausal patients, 42% (undermanagement, 26%). For evaluation of cysts 1-3 cm in size, adherence was 65% (overmanagement, 21%); for cysts 3-5 cm, 33% (overmanagement, 49%); for cysts 5-7 cm, 47% (overmanagement, 20%); for cysts larger than 7 cm, 77% (over- and undermanagement, 12% each).
Recommendations were adherent in most instances for corpora lutea (88%), simple cysts (56%), and cysts suggestive of benign diagnosis (60%), or with a nodule or multiple thin or thick septations (60%-100%). Cysts with recommendations for overmanagement in most instances were hemorrhagic (59%) and those with a single thin septation (67%); 31% of cysts measuring 1-7 cm in postmenopausal patients were undermanaged. Adherence varied significantly among radiologists (40%-75% adherence among radiologists who interpreted 20 or more examinations; P = .004). Conclusion: Adherence to SRU guidelines for management of adnexal cysts at our institution was 59%. Overmanagement generally occurred for physiologic cysts in premenopausal patients; undermanagement was observed for simple cysts in postmenopausal patients. abstract_id: PUBMED:29728324 Variation in Downstream Relative Costs Associated With Incidental Ovarian Cysts on Ultrasound. Purpose: To explore variation in downstream relative costs associated with ovarian cysts incidentally detected on ultrasound. Methods: For 200 consecutive incidental ovarian cysts on ultrasound, ultrasound reports were classified in terms of presence of a radiologist recommendation for additional imaging. All downstream events (imaging, office visits, and surgery) associated with the cysts were identified from the electronic health record. Medical costs associated with these downstream events were estimated using national Medicare rates. Average cost per cyst was stratified by various factors; cost ratios were computed among subgroups. Results: Average costs per cyst were 1.9 times greater in postmenopausal than premenopausal women.
Relative to when follow-up imaging was neither recommended nor obtained, costs were 1.1 times greater when follow-up imaging was recommended but not obtained, 5.1 times greater when follow-up imaging was both recommended and obtained, and 8.1 times greater when follow-up imaging was obtained despite not being recommended. Costs were 2.5 times greater when the radiologist underrecommended follow-up compared with Society of Radiologists in Ultrasound (SRU) guidelines for management of ovarian cysts, 3.0 times greater when the ordering physician overmanaged compared with the radiologist's recommendation, as well as 1.7 times and 3.8 times greater when the ordering physician undermanaged and overmanaged compared with SRU guidelines, respectively. Four ovarian neoplasms, although no ovarian malignancy, were diagnosed in the cohort. Conclusion: Follow-up costs for incidental ovarian cysts are highly variable based on a range of factors. Radiologist recommendations may contribute to lower costs among patients receiving follow-up imaging. Such recommendations should reflect best practices and support the follow-up that will be of likely greatest value for patient care. abstract_id: PUBMED:20505067 Management of asymptomatic ovarian and other adnexal cysts imaged at US: Society of Radiologists in Ultrasound Consensus Conference Statement. The Society of Radiologists in Ultrasound convened a panel of specialists from gynecology, radiology, and pathology to arrive at a consensus regarding the management of ovarian and other adnexal cysts imaged sonographically in asymptomatic women. The panel met in Chicago, Ill, on October 27-28, 2009, and drafted this consensus statement. The recommendations in this statement are based on analysis of current literature and common practice strategies, and are thought to represent a reasonable approach to asymptomatic ovarian and other adnexal cysts imaged at ultrasonography. 
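The cost multiples reported in the downstream-cost study above are ratios of mean cost per cyst between management subgroups. A minimal illustration with made-up per-cyst costs (the study's actual Medicare-rate figures are not given in the abstract):

```python
from statistics import mean

# Hypothetical downstream costs per cyst (arbitrary units) for two subgroups;
# the real study estimated these from electronic-health-record events priced
# at national Medicare rates.
followup_recommended_and_obtained = [510, 480, 530]
no_followup_recommended_or_obtained = [100, 95, 105]

cost_ratio = (mean(followup_recommended_and_obtained)
              / mean(no_followup_recommended_or_obtained))
print(f"cost ratio: {cost_ratio:.1f}x")
```

The same division, applied across menopausal status, radiologist recommendation, and ordering-physician behavior, yields the 1.1x to 8.1x spread the study reports.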
abstract_id: PUBMED:20823748 Management of asymptomatic ovarian and other adnexal cysts imaged at US Society of Radiologists in Ultrasound consensus conference statement. The Society of Radiologists in Ultrasound (SRU) convened a panel of specialists from gynecology, radiology, and pathology to arrive at a consensus regarding the management of ovarian and other adnexal cysts imaged sonographically in asymptomatic women. The panel met in Chicago, IL, on October 27-28, 2009, and drafted this consensus statement. The recommendations in this statement are based on analysis of current literature and common practice strategies, and are thought to represent a reasonable approach to asymptomatic ovarian and other adnexal cysts imaged at ultrasonography. abstract_id: PUBMED:28266033 First International Consensus Report on Adnexal Masses: Management Recommendations. The First International Consensus Conference on Adnexal Masses was convened to thoroughly examine the state of the science and to formulate recommendations for clinical assessment and management. The panel included representatives of societies in the fields of gynecology, gynecologic oncology, radiology, and pathology and clinicians from Europe, Canada, and the United States. In the United States, there are approximately 9.1 surgeries per malignancy compared to the European International Ovarian Tumor Analysis center trials, with only 2.3 (oncology centers) and 5.9 (other centers) reported surgeries per malignancy, suggesting that there is room to improve our preoperative assessments. The American College of Obstetricians and Gynecologists Practice Bulletin on "Management of Adnexal Masses," reaffirmed in 2015 (Obstet Gynecol 2007; 110:201-214), still states, "With the exception of simple cysts on a transvaginal ultrasound finding, most pelvic masses in postmenopausal women will require surgical intervention." 
The panel concluded that patients would benefit not only from a more conservative approach to many benign adnexal masses but also from optimization of physician referral patterns to a gynecologic oncologist in cases of suspected ovarian malignancies. A number of next-step options were offered to aid in management of cases with sonographically indeterminate adnexal masses. This process would provide an opportunity to improve risk stratification for indeterminate masses via the provision of alternatives, including but not limited to evidence-based risk-assessment algorithms and referral to an "expert sonologist" or to a gynecologic oncologist. The panel believed that these efforts to improve clinical management and preoperative triage patterns would ultimately improve patient care. abstract_id: PUBMED:28609360 Pancreatic Cysts - Part 1: Using the American Gastroenterological Association Guidelines for the Management of Pancreatic Cysts-A Practical Approach. The recent publication of evidence-based guidelines by the American Gastroenterological Association for the management of the incidental pancreatic cyst has stimulated considerable discussion among clinicians and researchers. The guidelines examined the evidence in an accompanying technical review defining current diagnostic approaches, as well as cyst natural history. Recognizing that the risk of malignant degeneration of these cystic neoplasms has been overestimated, the guidelines boldly increased thresholds for endoscopic ultrasound imaging and cyst fluid sampling, as well as proposing a reduced frequency of surveillance compared with previous expert consensus recommendations. A practical approach to using these guidelines in daily clinical practice is discussed, emphasizing that an individualized approach is often required for this diverse and common clinical problem. 
abstract_id: PUBMED:28727500 Risk Stratification of Adnexal Cysts and Cystic Masses: Clinical Performance of Society of Radiologists in Ultrasound Guidelines. Purpose To evaluate the performance of the 2010 Society of Radiologists in Ultrasound (SRU) consensus guidelines in the risk stratification of symptomatic and asymptomatic adnexal cysts. Materials and Methods An institutional review board-approved retrospective review was performed, including adnexal cysts detected with ultrasonography (US) with surgical diagnosis or at least 2 years of imaging or clinical follow-up, from January to June 2011. SRU management recommendations were scored as 0, no follow-up; 1, US follow-up; 2, magnetic resonance (MR) imaging follow-up; and 3, surgical evaluation. Distribution of outcomes (nonneoplastic cyst, benign neoplasm, malignant neoplasm) was compared in each rating group by using the Cochran-Armitage trend test. Where SRU guidelines allow more than one management option, they were classified as being interpreted in either a "surgically focused" environment, with limited MR imaging availability, or an "MR-capable" center, where MR imaging is selected whenever it is an option. Predictors of neoplasms and malignancy were evaluated by using multivariate logistic regression. Results A total of 570 cysts in 500 women aged 18-90 years (mean, 42 years) were included. There were 475 (83.3%) nonneoplastic cysts, 77 (13.5%) benign neoplasms, and 18 (3.2%) malignant neoplasms. Of the 500 women, 161 (32.2%) were asymptomatic. In the surgically focused interpretation of guidelines, proportions of any neoplasm and malignant neoplasm, respectively, were 1% and 0% in SRU 0, 17% and 1% in SRU 1, 48% and 0% in SRU 2, and 48% and 16% in SRU 3 (P < .0001 for both trends).
In the interpretation of SRU guidelines with MR imaging when it was an option, proportions of any neoplasm and malignant neoplasm, respectively, were 1% and 0% in SRU 0, 17% and 1% in SRU 1, 38% and 5% in SRU 2, and 81% and 52% in SRU 3 (P < .0001, both trends) and 82 (89.1%) fewer benign cysts would have gone directly to surgical evaluation. In multivariate regression, SRU rating predicted both any neoplasm (odds ratio, 2.58; P < .0001) and malignant neoplasm (odds ratio, 4.94; P = .005). Conclusion SRU consensus guidelines effectively stratified the risk of neoplasia and malignancy. Selecting MR imaging when it is an option in the guidelines would have reduced the number of benign cysts sent for surgical evaluation. © RSNA, 2017. abstract_id: PUBMED:31709639 Utility of endoscopic ultrasound and endoscopic ultrasound-guided fine-needle aspiration for the diagnosis and management of pancreatic cystic lesions: Differences between the guidelines. Recent advances and frequent use of cross-sectional imaging studies have increased opportunities for incidental diagnoses of pancreatic cystic lesions (PCL). In the management of PCL, distinguishing between mucinous versus non-mucinous and malignant versus benign cysts is important to diagnose pancreatic cancer in its early stage. For this reason, there have been several guidelines to manage PCL. Endoscopic ultrasound (EUS) and EUS-guided fine-needle aspiration (FNA) play important roles in the guidelines, although there are some differences in their roles. In this review, we aimed to evaluate the current status of EUS and EUS-FNA in the management of PCL and the status of these procedures in the guidelines. Answer: The Society of Radiologists in Ultrasound (SRU) consensus conference guidelines have indeed affected the utilization of ultrasound for adnexal cysts.
A quality assurance project assessed the impact of these guidelines in one institution by reviewing pelvic ultrasound reports from a 1-month period before the conference (February 2009) and 2 years later (February 2011) after the publication of the SRU guidelines. The study found that the overall number of cysts with recommendations for follow-up decreased from 15% in February 2009 to 11% in February 2011 (P = 0.02). Specifically, the percentage of premenopausal simple cysts and classic hemorrhagic cysts described as less than 5 cm where follow-up was recommended decreased dramatically from 89% to 7% (P < 0.0001) (PUBMED:23358210). Additionally, adherence to the SRU guidelines for the management of adnexal cysts was evaluated in another study, which found that overall adherence was 59%. Overmanagement generally occurred for physiologic cysts in premenopausal patients, while undermanagement was observed for simple cysts in postmenopausal patients. The study also noted that adherence varied significantly among radiologists (PUBMED:24475836). Furthermore, the SRU guidelines were compared with the International Ovarian Tumor Analysis (IOTA) simple rules for detecting ovarian malignancy. Both methods were sensitive for identifying ovarian malignancies, but the positive predictive value was low among women presenting to radiology departments. The IOTA simple rules were slightly more accurate than the SRU guidelines (PUBMED:31770022). In terms of cost implications, a study exploring variation in downstream relative costs associated with incidental ovarian cysts on ultrasound found that radiologist recommendations may contribute to lower costs among patients receiving follow-up imaging. The study emphasized that such recommendations should reflect best practices and support the follow-up that will be of likely greatest value for patient care (PUBMED:29728324). 
Overall, the SRU consensus conference guidelines have led to a decrease in the number of follow-up recommendations for benign appearing cysts and have influenced the management and cost implications associated with adnexal cysts.
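The answer's point that both classification systems were sensitive yet had low positive predictive value follows directly from Bayes' rule at low disease prevalence (malignancy was only about 2.6% of lesions in the IOTA/SRU comparison study). The sketch below is illustrative only: it derives predictive values from the reported sensitivity, specificity, and prevalence, and will not exactly match the paper's count-based PPV, which was computed on categorized lesions.

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Predictive values from test characteristics and prevalence (Bayes' rule)."""
    tp = sensitivity * prevalence              # true-positive probability mass
    fp = (1 - specificity) * (1 - prevalence)  # false-positive probability mass
    tn = specificity * (1 - prevalence)        # true-negative probability mass
    fn = (1 - sensitivity) * prevalence        # false-negative probability mass
    return tp / (tp + fp), tn / (tn + fn)

# Reported IOTA simple-rules characteristics and the ~2.6% malignancy prevalence:
ppv, npv = ppv_npv(sensitivity=0.90, specificity=0.965, prevalence=0.026)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV stays low despite 90% sensitivity
```

Even a highly specific test produces many false positives when fewer than 3 in 100 lesions are malignant, which is why both guideline systems show near-perfect NPV but modest PPV in this population.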
Instruction: Doing voluntary extra work? Abstracts: abstract_id: PUBMED:36254320 Situational Strength as a Lens to Understand the Strain Implications of Extra-Normative Work. Employees must often perform work outside of the time and/or space requirements that typically define their job role (e.g., working after-hours, teleworking), especially during the COVID-19 pandemic. We introduce the concept of extra-normative work to capture this idea and draw on situational strength theory to test the seemingly paradoxical hypotheses that (1) the effects of extra-normative work are more harmful to employee strain when this work represents a stronger situation (i.e., one that unambiguously prescribes expected behavior), relative to when this work represents a weaker situation (i.e., one that allows for greater personal choice and behavioral latitude), but that (2) this strain is diminished when situational strength is achieved by maximizing the clarity and consistency of extra-normative work, while this strain is enhanced when situational strength is achieved by imposing greater constraints and consequences surrounding extra-normative work. These predictions were supported in an experimental vignette study, a survey focused on after-hours work experiences, and an investigation of telework in response to COVID-19. We discuss the theoretical implications of viewing extra-normative work through the lens of situational strength, while also outlining how our findings inform best practices surrounding how to communicate about and frame extra-normative work to employees. abstract_id: PUBMED:37128681 Types of Voluntary Work and Influence of Participation for Older Volunteers: A Systematic Review of Qualitative Studies. This systematic review synthesized findings on socio-demographic characteristics of older adults who engaged in formal volunteering, types of voluntary work for engagement, and the outcomes resulting from different types of voluntary work participation. 
Studies published in peer-reviewed journals were identified from six electronic databases. Studies were included if they: (1) involved a study sample of adults aged 65 and older or had a mean age of 65, (2) reported any type of formal voluntary work at any setting, and (3) reported at least one influence of voluntary work on volunteers. Older adults who were female, married, retired, and had a higher education, fair health, and more volunteering experience participated in voluntary work. The majority of older volunteers preferred to volunteer in a community setting, and they most commonly participated in healthcare or social care related voluntary work. The older volunteers perceived positive influences mostly related to health-related outcomes or helping knowledge and skills. abstract_id: PUBMED:33962171 Liminal space and the negotiation of care work in extra-care housing. This is a qualitative case study of care work in a liminal space, specifically the case of an extra-care housing residence, which is an innovative housing alternative for elderly people in need of care in Sweden. The study is an exploration of social care workers' perceptions about their workplaces and their understandings of themselves, which are shaped by their embeddedness in architectural space. The extra-care housing residence appeared as a liminal space in which two dominant spaces - home care services and residential care - underpinned the staff's perceptions of an unclear workplace and their identity work. abstract_id: PUBMED:24271149 Voluntary work in social and health care: a critical review of assessment instruments Background: Due to an increasing need for voluntary workers in the provision of care for care-dependent people, sustainable recruiting, retention, and quality management strategies are required.
The aims of this critical review are to identify AIs in voluntary work and to analyze and critically appraise their domains, target groups, and psychometric properties. Methods: A systematic search with an interdisciplinary emphasis was conducted in Medline, CINAHL, PsycINFO and other socioscientific databases. After the selection of identified articles, 12 research papers remained for analysis and were summarized in a critical review. Results: In total, 15 AIs, 13 domains (e.g., motivation, satisfaction), and 4 target groups could be identified. The domain "motivation" and the target group "voluntary workers in hospice/palliative care" were most frequently represented. Currently, there are no instruments available which cover a broad range of domains and target groups. Conclusion: To support sustainability and quality of care in volunteerism, the development of new AIs is necessary. abstract_id: PUBMED:38041419 Woman's work: The moderating effects of gender role orientations between the relationships of work-family conflict with voluntary turnover and being valued by one's spouse. Gender role attitudes determine the importance of work-family domains to individuals and how they are influenced by work-family conflict (WFC). In this study, we draw on gender role and social role theories to hypothesize that working mothers' gender role perceptions moderate the relationships between WFC and two outcomes: voluntary turnover (work domain) and feeling valued by one's spouse (family domain). We tested our hypotheses with 14-month time-lagged survey data from 731 working mothers in Japan. The results of our regression analyses suggest that working mothers scoring high on WFC and egalitarian gender roles have a lower propensity to voluntary turnover and a higher propensity to feel valued by their spouses than working mothers low on WFC and egalitarian gender roles. 
Our study highlights the importance of considering within-gender differences in research on WFC and moderating effects of gender role attitudes between WFC and outcomes in work and family domains. abstract_id: PUBMED:35034350 "How you keep going": Voluntary sector practitioners' story-lines as emotion work. The voluntary sector acts as the last line of defense for some of the most marginalized people in societies around the world, yet its capacities are significantly reduced by chronic resource shortages and dynamic political obstacles. Existing research has scarcely examined what it is like for voluntary sector practitioners working amidst these conditions. In this paper, we explore how penal voluntary sector practitioners across England and Scotland marshaled their personal and professional resources to "keep going" amidst significant challenges. Our analysis combines symbolic interactionism with the concept of story-lines. We illuminate the narratives that practitioners mobilized to understand and motivate their efforts amidst the significant barriers, chronic limitations, and difficult emotions brought forth by their work. We position practitioners' story-lines as a form of emotion work that mitigated their experiences of anger, frustration, overwhelm, sadness, and disappointment, enabling them to move forward and continue to support criminalized individuals. Our analysis details three story-lines - resignation, strategy, and refuge - and examines their consequences for practitioners and their capacities to intervene in wicked social problems. abstract_id: PUBMED:27069517 Linking Quality of Work in Midlife to Volunteering During Retirement: a European Study. There is increasing recognition that disadvantaged work and employment conditions over the life course, including psychosocial stress at work, have negative long-term effects on health at older ages.
Yet, the question whether stress at work additionally influences the likelihood of participating in voluntary work during retirement still needs to be explored. This paper studies long-term influences of stressful work during adulthood (as defined by low control and low reward at work) on participation in voluntary work in older ages. Analyses are based on the Survey of Health Ageing and Retirement in Europe (SHARE), with detailed retrospective information on individual life courses collected among 11,751 retired men and women in 13 European countries. Results of multivariable analyses demonstrate that people who experienced stressful work are also less likely to participate in voluntary work during retirement. Importantly, these associations remain significant after controlling for important factors, including disability in older ages and disadvantaged socioeconomic circumstances. In conclusion, findings suggest that promoting good working conditions may not only increase health and well-being, but also encourage participation in productive activities after labour market exit. abstract_id: PUBMED:32926522 The moderating effect of core self-evaluations between the relationships of work-family conflict and voluntary turnover, job promotions and physical health. Despite strong evidence that individuals process stressor-strain relationships differently, little attention in work-family conflict research has been given to moderating effects of core self-evaluations (CSE). Integrating conservation of resources theory with work-family conflict and CSE research, we predicted that CSE has moderating effects between the relationships of work-to-family conflict (WFC) and voluntary turnover, job promotions, and physical health. We tested our predictions at two time points over a 14-month period with a sample of 731 working mothers in Japan. 
Results confirmed that CSE moderated the relationships between WFC and voluntary turnover, job promotions, and physical health, such that respondents with higher CSE had lower degrees of voluntary turnover, higher degrees of job promotions, and lower degrees of health problems. This study helps clarify the inconsistent effects of WFC on voluntary turnover in previous research, expands on the limited research examining WFC and job promotion, and provides consistent evidence that CSE acts as a moderator between WFC and outcomes. abstract_id: PUBMED:33833703 Linking Self-Control to Voluntary Behaviors at Workplace: The Mediating Role of Job Satisfaction. Voluntary work behavior (VWB) refers to spontaneous workplace behaviors that extend beyond role norms, including extra-role behaviors that benefit the organization (i.e., organizational citizenship behavior, OCB) and negative behaviors that may harm the organization (i.e., counterproductive work behavior, CWB). This study examined the relationship between self-control and VWB and the mediating role of job satisfaction. A total of 1,101 full-time employees from China completed a battery of self-report measures online. The results show that self-control positively predicts employees' OCB and negatively predicts employees' CWB. Moreover, job satisfaction significantly mediates the relationship between self-control and OCB/CWB. The results confirm that employees with high self-control are more public-spirited, which previous studies have described as being "highly committed" (high OCB) or "less harmful" (low CWB). This finding closely relates to the observation that employees with high self-control tend to have more satisfying work outcomes or higher workplace status than those with low self-control.
Aim: This study intends to investigate whether, how and when service-oriented high-performance work systems (SHPWSs) drive nurses' extra-role service behaviour. Design: This was a quantitative cross-sectional study conducted with matched nurse-patient participants. Method: We tested hypotheses using data from 284 nurses and their matched 566 patients. The data were collected in 2019. We conducted a set of hierarchical regression analyses to test our hypotheses. Results: The results showed that SHPWSs have a positive impact on job crafting, which, in turn, mediates the link between SHPWSs and extra-role service behaviours. Additionally, the influence of professional identification moderates these relationships. Specifically, SHPWSs are significantly and positively associated with job crafting among highly professionally identified nurses. The indirect effect is significantly positive when nurses strongly identify with their profession but not significant when their professional identification is low. Conclusion: The results indicated that SHPWSs can elicit job crafting among higher professional identifiers, which further increases extra-role service behaviours towards patients. Impact: Our research emphasizes the significance of HRM themes in the healthcare service industry and their direct impact on healthcare personnel. Shifting from a management-centric to an individual-centric perspective, we focus on the proactive role of nurses. Furthermore, this study enhances the understanding of the boundary conditions for the effectiveness of SHPWSs. Patient Or Public Contribution: Nurses and their matched patients from a Chinese hospital contributed to this study by completing the survey. Answer: The concept of voluntary extra work encompasses a variety of activities and behaviors that extend beyond the formal requirements of an individual's job role.
This can include work that is performed outside of typical working hours or in non-traditional workspaces, such as teleworking or after-hours work, especially highlighted during the COVID-19 pandemic (PUBMED:36254320). Voluntary work can also refer to formal volunteering activities, particularly among older adults, who often engage in community-based, healthcare, or social care-related voluntary work, with positive influences on health-related outcomes and knowledge and skills (PUBMED:37128681). In the context of care work, voluntary extra work can occur in liminal spaces like extra-care housing residences, where social care workers navigate unclear workplace identities and roles (PUBMED:33962171). The voluntary sector itself is characterized by practitioners who perform emotion work to "keep going" amidst resource shortages and political challenges, using personal and professional narratives to motivate their efforts (PUBMED:35034350). Quality of work during midlife has been linked to the likelihood of engaging in voluntary work during retirement, with stressful work conditions potentially reducing the propensity to volunteer (PUBMED:27069517). Additionally, individual differences, such as gender role orientations and core self-evaluations, can moderate the relationships between work-family conflict and outcomes like voluntary turnover, job promotions, and physical health (PUBMED:38041419; PUBMED:32926522). Voluntary work behavior (VWB) in the workplace, which includes organizational citizenship behavior (OCB) and counterproductive work behavior (CWB), has been associated with self-control and job satisfaction, suggesting that employees with higher self-control are more likely to engage in positive VWB (PUBMED:33833703). Furthermore, service-oriented high-performance work systems can drive nurses' job crafting and extra-role service behavior, particularly among those with strong professional identification (PUBMED:38415959). 
Overall, voluntary extra work can be seen as a multifaceted concept that encompasses a range of behaviors and activities, influenced by individual characteristics, workplace conditions, and broader social and organizational contexts.
Instruction: Laparoscopic nephrectomy for renal diseases in children: is there a learning curve? Abstracts: abstract_id: PUBMED:8156470 Laparoscopic nephrectomy: the learning-curve experience. Objective: To report the learning-curve experience with laparoscopic nephrectomy. Design: Case series. Setting: A tertiary-care referral centre serving southern Saskatchewan. Patients: Sixteen patients (7 men, 9 women), selected to undergo laparoscopic nephrectomy. They ranged in age from 19 to 83 years. Indications for surgery were: recurrent atrophic pyelonephritis with pain (three patients), obstruction at the ureteropelvic junction with pain (three), small ectopic kidney with pain (one), renovascular hypertension (two), a solid renal mass confirmed by computed tomography (four), Staghorn calculus (one), transitional cell tumour of upper ureter (one) and pyonephrosis with a nonfunctioning kidney (one). Interventions: Laparoscopic nephrectomy. Main Outcome Measures: Postoperative morbidity, complications of the procedure and duration of postoperative hospitalization. Results: Kidneys were removed laparoscopically in 13 patients. Open nephrectomy was necessary in three patients, owing to a lack of experience in patient selection in two cases and to intraoperative hemorrhage in the third. All patients resumed oral intake on the 1st postoperative day, and most did not require analgesics for relief of pain beyond 36 hours postoperatively. Complications of laparoscopic nephrectomy included pneumonia (one patient), low-grade fever (two patients), need for blood transfusion (three patients) and transient ischemic attack (one patient). The mean postoperative hospital stay was 4.3 days. Conclusions: When patients are properly selected, laparoscopic nephrectomy provides decreased postoperative morbidity, a shorter convalescence, and thus cost savings, compared with open nephrectomy. 
abstract_id: PUBMED:32897030 Complications of open, laparoscopic and retroperitoneoscopic nephrectomy in various kidney diseases. The results of the analysis of domestic and foreign literature on complications of various approaches for nephrectomy are presented in the review. Along with open nephrectomy, complications of various minimally-invasive approaches are described, including laparoscopic, retroperitoneoscopic and robot-assisted nephrectomy. Recently, a large number of publications have been dedicated to donor nephrectomy, which is associated with the growing trend for these procedures in many clinics throughout the world using different approaches. Most studies show that complications are more common for open nephrectomy (up to 30.4%) compared to laparoscopic (5.0-25.8%), retroperitoneoscopic (up to 17.1%) and robot-assisted (0-15%) nephrectomy. Unlike the open procedure, minimally-invasive approaches have specific complications; however, most of the complications are identical for various methods. Retroperitoneoscopic access is associated with a minimal risk of damage to the abdominal organs. abstract_id: PUBMED:16563130 Complications and the learning curve for a laparoscopic nephrectomy at a single institution. Background: We assessed our experiences in performing a laparoscopic nephrectomy, with regard to complications and the learning curve, during a 4-year period. Methods: Between November 2000 and October 2004, a total of 78 laparoscopic nephrectomies were performed at our institution (37 radical nephrectomies, 30 nephroureterectomies and 11 simple nephrectomies). The patient charts were retrospectively reviewed to identify any operative and postoperative complications, and also to evaluate the operating time. Results: A total of eleven complications (14.1%) occurred in our series (nine operative and two postoperative complications). All operative complications were due to vascular injuries (n=9), five (2.6%) of which required an open conversion.
The operating time and the rates of complications decreased significantly as the surgeons' experiences increased. Conclusion: A laparoscopic nephrectomy could be performed as safely as previously reported. In addition, the learning curve for a laparoscopic nephrectomy appeared to be good over the initial 50 procedures at our institution. abstract_id: PUBMED:38156691 Minimizing the number of trocars during laparoscopic partial nephrectomy. Surgical technique Introduction: During last 20 years in urology there has been a number of significant advancements, which were due to the introduction into practice and improvement of minimally invasive techniques. Development of laparoscopic surgery allowed to actively introduce these procedures in various kidney disorders, including renal tumors. Laparoscopic partial nephrectomy is also undergoing changes in order to improve the technique. Standard technique requires four or more trocars, where fourth (additional) trocar is put for the assistant. However, there is an opinion that in most cases it is possible to perform partial nephrectomy without an assistant trocar, while maintaining the safety and efficiency of the procedure and improving some perioperative outcomes. The aim of our study was to compare the safety and efficiency of the three-trocar and four-trocar techniques during transperitoneal partial nephrectomy. This article also presents the technical features of laparoscopic partial nephrectomy. Materials And Methods: Between 2021 and 2023, a total of 200 patients were included in the study comparing three- and four-trocar partial nephrectomy. Results: There was no difference in the rate of achieving renal trifecta between the two groups. In the three-trocar group, 94 cases of renal trifecta were found, while in the four-trocar group, there were 95 patients with renal trifecta. Conclusions: The three-trocar technique is not inferior in safety and efficiency to the standard four-trocar technique. 
The main advantages of the three-trocar technique are less pain, lower cost and less post-operative scarring. abstract_id: PUBMED:33633798 Comparison of laparoscopic and hand-assisted laparoscopic nephrectomy for inflammatory renal disease: which is the preferred approach? Aims: Management of inflammatory renal disease (IRD) can still be technically challenging for laparoscopic procedures. The aim of the present study was to compare the safety and feasibility of laparoscopic and hand-assisted laparoscopic nephrectomy in patients with IRD. Patients And Methods: We retrospectively analyzed the data of 107 patients who underwent laparoscopic nephrectomy (LN) and hand-assisted laparoscopic nephrectomy (HALN) for IRD from January 2008 to March 2020, including pyonephrosis, renal tuberculosis, hydronephrosis, and xanthogranulomatous pyelonephritis. Patient demographics, operative outcomes, and postoperative recovery and complications were compared between the LN and HALN groups. Multivariable logistic regression analysis was conducted to identify the independent predictors of adverse outcomes. Results: Fifty-five subjects in the LN group and 52 subjects in the HALN group were enrolled in this study. In the LN group, laparoscopic nephrectomy was successfully performed in 50 patients (90.9%), while four (7.3%) patients were converted to HALN and one (1.8%) case was converted to an open procedure. In the HALN group, operations were completed in 51 (98.1%) patients and conversion to open surgery was necessary in one patient (1.9%). The LN group had a shorter median incision length (5 cm versus 7 cm, p < 0.01) but a longer median operative duration (140 min versus 105 min, p < 0.01) than the HALN group. There was no significant difference in blood loss, intraoperative complication rate, postoperative complication rate, recovery of bowel function, and hospital stay between the two groups.
Multivariable logistic regression revealed that severe perinephric adhesions were an independent predictor of adverse outcomes. Conclusion: Both LN and HALN appear to be safe and feasible for IRD. As a still minimally invasive approach, HALN provides an alternative for IRD or for cases in which conversion is needed during LN. abstract_id: PUBMED:20465431 Retroperitoneal laparoscopic nephrectomy: the effect of the learning curve, and concentrating expertise, on operating times. Introduction: Laparoscopic nephrectomy is an accepted alternative to open nephrectomy. We analyzed our first 80 procedures of laparoscopic nephrectomy to evaluate the effect of experience and configuration of service on operative times. Materials And Methods: A retrospective review of 80 consecutive children who underwent retroperitoneal laparoscopic nephrectomy or heminephrectomy during an 11-year period from 1997 at Christchurch Hospital (Christchurch, New Zealand) was conducted. Operative times, in relation to the experience of the surgeon for this procedure, were analyzed. Results: Four surgeons, assisted by an annually rotating trainee registrar, performed the procedure in 26 girls and 54 boys (range, 8 months to 15 years). Operating times ranged from 38 to 225 minutes (mean, 104). The average operative time fell from 105 to 90 minutes. One surgeon performed 40% of the procedures and assisted with a further 55%. The operative times for all surgeons showed a tendency to reduce, but this was not marked. Conclusions: Most procedures were performed by two surgeons working together, although one surgeon was involved in the majority of cases. The lead surgeon is often assisted by a fellow consultant colleague. Operative times were influenced by experience, but not markedly so. The shorter operative times and minimal "learning curve," compared with other reported series, may, in part, be due to the involvement of two surgeons experienced in laparoscopy for the majority of cases.
abstract_id: PUBMED:9516560 Laparoscopic nephrectomy in children. Using the techniques for laparoscopic nephrectomy developed in adults, pediatric nephrectomy results in equivalent surgical outcomes compared with open surgery. To date, both transperitoneal and retroperitoneal laparoscopic approaches to nephrectomy for benign renal disease have been described. Controversy exists among pediatric urologists, however, regarding the advantages of laparoscopic nephrectomy in children because convalescence from an open nephrectomy is also rapid. This article reviews the current indications and techniques for laparoscopic nephrectomy in children as well as other surgical applications of laparoscopy with respect to the pediatric patient. abstract_id: PUBMED:23730096 Current status of robot-assisted laparoscopic partial nephrectomy. With the increased incidence of diagnosis of small renal masses, partial nephrectomy has been preferred over radical nephrectomy as the surgical treatment of choice. The transition from open to laparoscopic partial nephrectomy has been challenging for many urologists. Robotic-assisted laparoscopic partial nephrectomy (RLPN) is increasingly used to facilitate this transition. In this review, we examine the recent technical advances and clinical outcomes in RLPN. Many series have successfully reported the feasibility of using the da Vinci Surgical System (Intuitive Surgical Inc., Sunnyvale, CA) in laparoscopic partial nephrectomy. Recent advances have focused on reducing the risk of renal damage by shortening the warm ischaemia time. These techniques include unclamped excision, selective arterial clamping and improved renorrhaphy methods. Operative times and warm ischaemia times have also improved once the learning curve, which is less steep than that of conventional laparoscopy, is overcome. With longer follow-up and more widespread experience, the outcome of RLPN could be favourable compared to conventional laparoscopy.
Improved techniques have made this surgery a safe and efficacious treatment option for small renal masses. abstract_id: PUBMED:8239735 Laparoscopic nephrectomy: current status. In this manuscript we describe our initial experience with 22 patients who underwent laparoscopic nephrectomy at our institution. Of the 22 patients, 16 patients had benign disease, and 6 patients had malignant disease. Of the 6 patients with malignancy, 3 patients underwent laparoscopic nephroureterectomy for transitional cell cancer of the upper urinary tract. In 21 patients, a transabdominal approach was utilized, whereas 1 patient underwent laparoscopic nephrectomy using an entirely retroperitoneal approach. Laparoscopic nephrectomy showed lengthy operative times, but resulted in reduced postoperative discomfort, shortened hospital stay, and rapid convalescence. As laparoscopic nephrectomy has been performed with reproducible success by various other groups worldwide, further refinements in surgical technique, paired with advances in laparoscopic instruments designed for stapling and suturing, will most likely result in an increasing application of the principles of laparoscopy to diseases of the kidney, renal pelvis and ureter. abstract_id: PUBMED:29416273 This challenging procedure has successful outcomes: Laparoscopic nephrectomy in inflammatory renal diseases. In a prospective study from November 2011 to November 2013, we performed 44 laparoscopic nephrectomies for benign non-functioning kidney diseases. Twenty-eight patients underwent laparoscopic transperitoneal nephrectomies (63.6%), ten were laparoscopic assisted (22.7%) and six (13.6%) were converted to open. Patients' age, gender, laterality and etiology of renal failure were noted. Outcomes were measured as operative time, intraoperative and postoperative complications, blood loss, pain score and hospital stay. Patients were followed up at one, three and 12 weeks and 6 monthly thereafter.
Of the 44, ten (22.7%) were 15-24 years old, 32 (72%) between 25-50 years and two were more than 50 years old. Females were 54.6%. 22 patients had either right or left nephrectomy. Pelviureteric-junction (PUJ) obstruction was the commonest cause, 26 cases (59.0%). Operative time: less than two hours in 30 (68.2%) patients, more than two hours in 14 cases. Blood loss: less than 100 ml in 12 (27.3%), 100-200 ml in 20 (45.4%) and more than 200 ml in 12 (27.3%) patients. All four patients with major complications were converted to open surgery: two had injury to mesocolic veins and two had vascular stapler malfunction. Post-operative complications: surgical site infection (SSI), paralytic ileus and mild grade fever in six cases each and non-infected benign intra-abdominal collection in two cases. Maximum pain score on POD-1: four in 20 cases (45.7%), two in 24 (54%). Two had pain score between 3-4 three weeks after surgery. Oral intake started by POD-2 in 30 (68.2%) and by POD-4 in 100% of cases. 22 (50%) patients were ambulating by POD-2, 16 (36.7%) by POD-4. Our study and randomized and non-randomized published literature report acceptable complication and conversion rates. In conclusion, laparoscopic nephrectomy for a benign non-functioning kidney is a better alternative to open nephrectomy. Answer: Yes, there is a learning curve associated with laparoscopic nephrectomy for renal diseases in children. The learning curve for laparoscopic nephrectomy is evident in the reduction of operative times and complication rates as surgeons gain more experience with the procedure. For instance, a study assessing the experiences of performing laparoscopic nephrectomy over a 4-year period at a single institution found that the operating time and the rates of complications decreased significantly as the surgeons' experience increased, suggesting that the learning curve for a laparoscopic nephrectomy appeared to be good over the initial 50 procedures (PUBMED:16563130).
Another study analyzing the first 80 procedures of laparoscopic nephrectomy to evaluate the effect of experience on operative times showed that operative times were influenced by experience, but not markedly so, which may be due to the involvement of two surgeons experienced in laparoscopy for the majority of cases (PUBMED:20465431). Additionally, the use of laparoscopic techniques for nephrectomy in children, which have been developed from adult procedures, results in equivalent surgical outcomes compared with open surgery, but the learning curve and the advantages of laparoscopic nephrectomy in children are still subjects of discussion among pediatric urologists (PUBMED:9516560).
Instruction: Can transient elastography, Fib-4, Forns Index, and Lok Score predict esophageal varices in HCV-related cirrhotic patients? Abstracts: abstract_id: PUBMED:24365388 Can transient elastography, Fib-4, Forns Index, and Lok Score predict esophageal varices in HCV-related cirrhotic patients? Background: Gastroesophageal varices are present in approximately 50% of patients with liver cirrhosis. The aim of this study was to evaluate liver stiffness measurement (LSM), Fib-4, Forns Index and Lok Score as noninvasive predictors of esophageal varices (EV). Methods: This prospective study included 65 patients with HCV-related liver cirrhosis. All patients underwent routine laboratory tests, transient elastography (TE) and esophagogastroduodenoscopy. FIB-4, Forns Index and Lok Score were calculated. The diagnostic performances of these methods were assessed using sensitivity, specificity, positive predictive value, negative predictive value, accuracy and receiver operating characteristic curves. Results: All predictors (LSM, FIB-4, Forns Index and Lok Score) demonstrated statistically significant correlation with the presence and the grade of EV. TE could diagnose EV at a cutoff value of 18.2 kPa. Fib-4, Forns Index, and Lok Score could diagnose EV at cutoff values of 2.8, 6.61 and 0.63, respectively. For prediction of large varices (grade 2, 3), LSM showed the highest accuracy (80%) with a cutoff of 22.4 kPa and AUROC of 0.801. Its sensitivity was 84%, specificity 72%, PPV 84% and NPV 72%. The diagnostic accuracies of FIB-4, Forns Index and Lok Score were 70%, 70% and 76%, respectively, at cutoffs of 3.3, 6.9 and 0.7, respectively. For diagnosis of large esophageal varices, adding TE to each of the other diagnostic indices (serum fibrosis scores) increased their sensitivities with little decrease in their specificities. Moreover, this combination decreased the LR- in all tests. Conclusion: Noninvasive predictors can restrict endoscopic screening.
This is very important as noninvasiveness is now a major goal in hepatology. abstract_id: PUBMED:21451799 A new and simple algorithm for the noninvasive assessment of esophageal varices in cirrhotic patients using serum fibrosis markers and transient elastography. Background And Aim: Noninvasive serum liver fibrosis markers and liver stiffness could be used as predictors of esophageal varices in cirrhotic patients because portal hypertension is related to liver fibrosis. The aim of this study was to compare the performance of common serum fibrosis scores and transient elastography in diagnosing esophageal varices and to propose a new algorithm for predicting large varices. Methods: 231 consecutive cirrhotic patients (58.4% males, mean age 55.9 years) were enrolled. Routine biological tests were performed, so that APRI, FIB-4, Forns Index and Lok Score could be calculated. All patients underwent transient elastography and eso-gastroscopy. The diagnostic performance of the methods was assessed using sensitivity, specificity, positive predictive value, negative predictive value, accuracy, likelihood ratios and receiver operating characteristic curves. Results: The Lok Score was the best among all the serum scores for diagnosing the varices. For a value higher than 0.8, it had a 45.5% positive predictive value, 86.4% negative predictive value and 67.72% diagnostic accuracy for prediction of large varices. For liver stiffness higher than 30.8 kPa, the positive predictive value was 47.3%, negative predictive value 81% and diagnostic accuracy 68.32%. Using both tests simultaneously, the presence of large varices was predicted with a diagnostic accuracy of 78.12%, obtaining an increment in NPV and -LR up to 93.67% and 0.21, respectively. Conclusion: The Lok Score is a good predictor for excluding the presence of large varices in cirrhotic patients, similarly to liver stiffness.
The two methods can be successfully combined into a noninvasive algorithm for the assessment of esophageal varices in cirrhotic patients. abstract_id: PUBMED:30483585 Changes in transient elastography in early cirrhotic patients after receiving nonselective β-blocker for primary variceal bleeding prophylaxis: Three-month follow up. Background And Aim: A nonselective β-blocker (NSBB) is recommended for primary prophylaxis of variceal bleeding. The impact of treatment with NSBB on modulating transient elastography (TE) has not been reported. The aim of the study is to investigate the effect of NSBB treatment on TE in early cirrhotic patients. Methods: In this prospective study, we enrolled all early cirrhotic patients who underwent esophagogastroduodenoscopy (EGD) and showed small esophageal varices (EV) at our institute for a period of 1 year. The TE and heart rate (HR) of all participants were measured before and 3 months after receiving NSBB. Results: Thirty-nine patients receiving propranolol for 3 months were analyzed. There were 16 patients in the HR responder group (41%) and 23 patients in the HR nonresponder group (59%). The reduction in TE was more pronounced in the HR responder group than in the HR nonresponder group, in which mean changes in TE were -5.6 and -0.7 kPa, respectively (P = 0.23). In addition, we categorized the patients using their TE responses. Twenty-five patients (64.1%) showed reduced TE during the follow-up period, in which the mean TE value change was -2.94 kPa. Using correlation analysis, TE and HR responses were insignificantly correlated (r = 0.23, P = 0.15). Conclusion: The NSBB administered for 3 months mainly improved TE value in early cirrhotic patients even though the changes of HR and TE did not correlate. Further study is needed to confirm whether the monitoring of TE change may be a better predictor for pharmacological response than the HR response.
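The serum indices these abstracts rely on (FIB-4, Lok Score, Forns Index) are closed-form functions of routine laboratory values. The abstracts do not restate the formulas, so the sketch below uses their commonly published definitions; treat the coefficients and the combined decision rule as illustrative assumptions rather than the studies' exact implementations (PUBMED:21451799 reports the 0.8 and 30.8 kPa cutoffs but does not spell out how the two tests are combined; requiring both is one plausible reading):

```python
import math

def fib4(age_years: float, ast: float, alt: float, platelets: float) -> float:
    """FIB-4 = (age x AST) / (platelets[10^9/L] x sqrt(ALT))."""
    return (age_years * ast) / (platelets * math.sqrt(alt))

def lok_score(platelets: float, ast: float, alt: float, inr: float) -> float:
    """Lok index: logistic transform of a linear predictor over platelet
    count (10^9/L), the AST/ALT ratio, and INR (published coefficients)."""
    log_odds = -5.56 - 0.0089 * platelets + 1.26 * (ast / alt) + 5.27 * inr
    return math.exp(log_odds) / (1.0 + math.exp(log_odds))

def forns_index(age_years: float, ggt: float, cholesterol_mg_dl: float,
                platelets: float) -> float:
    """Forns = 7.811 - 3.131*ln(platelets) + 0.781*ln(GGT)
             + 3.467*ln(age) - 0.014*cholesterol."""
    return (7.811 - 3.131 * math.log(platelets) + 0.781 * math.log(ggt)
            + 3.467 * math.log(age_years) - 0.014 * cholesterol_mg_dl)

def predicts_large_varices(lok: float, lsm_kpa: float) -> bool:
    """Hypothetical combined rule using the cutoffs from PUBMED:21451799:
    flag large varices only when Lok Score > 0.8 AND liver stiffness
    > 30.8 kPa (the abstract does not specify the combination logic)."""
    return lok > 0.8 and lsm_kpa > 30.8
```

Requiring both tests to be positive makes a negative call more trustworthy, which matches the direction the abstract reports (NPV rising to 93.67% when the tests are used together) and is the clinically useful direction when the goal is to spare patients endoscopy.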
abstract_id: PUBMED:38060753 Assessing the Predictive Efficacy of Lok Score in Identifying Esophageal Varices in Liver Cirrhosis Patients: A Cross-Sectional Study. Introduction Liver cirrhosis is a global health concern with various etiologies, leading to portal hypertension and gastroesophageal varices. Variceal bleeding, a severe complication of cirrhosis, necessitates early detection and intervention to reduce mortality. Endoscopic screening is the gold standard for varices detection but is invasive and expensive. This study evaluates the Lok Score, a non-invasive predictive tool, for identifying esophageal varices in patients with liver cirrhosis. Materials and methods A cross-sectional study involving 150 liver cirrhosis patients was conducted. The Lok score was calculated using specific parameters. Patient data, including age, gender, etiology of liver cirrhosis, Child-Pugh class, varices presence, and grades, were recorded. Statistical analysis was performed using IBM SPSS Statistics for Windows, Version 22.0 (IBM Corp., Armonk, NY; released 2013), and diagnostic parameters for the Lok Score were computed. Results The study demonstrates that the Lok score exhibits significant potential as a predictive tool for esophageal varices. The mean Lok score significantly differed between individuals with and without varices, suggesting a correlation between Lok score and varices presence. Higher Lok scores may indicate more advanced varices. Utilizing the Lok score in clinical practice could lead to timely interventions, improving patient outcomes. Conclusion The Lok score shows promise as a valuable predictive tool for esophageal varices in liver cirrhosis patients. Early identification using this non-invasive parameter can aid in risk stratification and guide appropriate management strategies. However, further validation and larger studies are needed to fully integrate the Lok score into clinical practice for the benefit of cirrhosis patients.
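Each of these studies reports sensitivity, specificity, PPV, NPV, accuracy, and likelihood ratios for a chosen cutoff; all of these derive from one 2x2 confusion table against endoscopy as the reference standard. A minimal sketch of that arithmetic, with illustrative counts that are not taken from any of the cited studies:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening-test metrics from a 2x2 table in which the
    reference standard (here, endoscopy) defines true varices status."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "lr_minus": (1 - sensitivity) / specificity,  # negative likelihood ratio
    }

# Illustrative counts only: 40 true/10 false positives, 10/40 negatives.
m = diagnostic_metrics(tp=40, fp=10, fn=10, tn=40)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the prevalence of varices in the studied cohort, which is one reason the same score can show different predictive values across these studies.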
abstract_id: PUBMED:37981928 Transient elastography and platelet count as noninvasive predictors of gastroesophageal varices in patients with compensated hepatitis C virus-related liver cirrhosis. Background: Early detection of esophageal varices (EV) before the first attack of bleeding is crucial for primary prophylaxis. The current work aims to investigate the use of a combination of FibroScan and platelet count as noninvasive means to identify EV in patients with compensated cirrhosis. Methods: Sixty-two patients with compensated hepatitis C virus (HCV)-related cirrhosis were divided into two groups with and without EV. All patients were exposed to complete history, physical examination, laboratory, and endoscopic evaluation. FibroScan was performed for all patients, and the two groups were compared. Results: A statistically significant higher mean liver stiffness measurement (LSM) (kPa), lower mean platelet count to splenic diameter ratio (PSR), and higher mean fibrosis-4 (FIB4) score were noticed in those with EV with P < 0.0005. A cutoff value of ≥23.1 for LSM, ≥3.71 for FIB4, and ≥130 mm for splenic diameter have a sensitivity of 94%, 97%, and 97% and a specificity of 81%, 81%, and 68%, respectively, in the detection of varices. Platelet count of ≥112,500 (×10³/dl) and of ≥771.33 for PSR have a sensitivity of 84% and 77% and a specificity of 87% and 90%, respectively, to rule out the presence of varices. LSM, FIB4 score, and splenic diameter are predictors of the presence of varices where platelet count and PSR are negative predictors. Conclusion: The combination of LSM by transient elastography (TE), PSR, or platelet count can be used to detect a relevant category of patients with compensated cirrhosis who have a very low possibility of EV where endoscopy can be avoided. abstract_id: PUBMED:34428123 Transient splenic elastography predicts high-risk esophageal varices in patients with non-cirrhotic portal hypertension.
Background And Aims: Non-cirrhotic portal hypertension (NCPH) comprises a group of diseases that cause portal hypertension without cirrhosis, leading to a high risk of hemorrhage from esophageal varices. There are no non-invasive predictors of high-risk varices (HRV) described in the literature for NCPH. This study aimed to evaluate whether transient splenic elastography (TSE) can predict HRV in patients with NCPH. Methods: Prospective study of patients with NCPH who underwent a single timepoint evaluation with transient liver and spleen elastography, ultrasonography, upper endoscopy, and laboratory tests. The study was performed from January to September 2020. Patients were divided into two groups based on the presence of HRV. The relation between TSE, transient liver elastography (TLE), spleen size, and platelet count to the presence of HRV was evaluated. Results: Of 42 patients with NCPH, 50% (21/42) presented HRV. In univariate analysis, TSE (median, 58.4 vs. 28.3, p = 0.009) and spleen size (median, 17.5 vs. 14.5 cm, p = 0.013) were associated with HRV. No statistically significant relationship was found between the presence of HRV and platelet count or TLE. In multivariate analysis, TSE was the only variable related to HRV (OR 1.21, 95% CI 1.02-1.38). TSE had a good performance in predicting HRV in our population (AUROC 0.878; 95% CI 0.751-1.000). TSE > 35.4 kPa presents 93.3% sensitivity, 60.0% specificity, and 90.9% negative predictive value. Conclusion: In our population of patients with NCPH, TSE is useful in predicting HRV. TLE, spleen size, and platelet count were not related to HRV. abstract_id: PUBMED:34200885 Noninvasive Assessment of Hepatitis C Virus Infected Patients Using Vibration-Controlled Transient Elastography. Chronic infection with hepatitis C virus (HCV) is one of the leading causes of cirrhosis and hepatocellular carcinoma (HCC). 
Surveillance of these patients is an essential strategy in the prevention chain, including in the pre/post-antiviral treatment states. Ultrasound elastography techniques are emerging as key methods in the assessment of liver diseases, with a number of advantages such as their rapid, noninvasive, and cost-effective characters. The present paper critically reviews the performance of vibration-controlled transient elastography (VCTE) in the assessment of HCV patients. VCTE measures liver stiffness (LS) and the ultrasonic attenuation through the embedded controlled attenuation parameter (CAP), providing the clinician with a tool for assessing fibrosis, cirrhosis, and steatosis in a noninvasive manner. Moreover, standardized LS values enable proper staging of the underlying fibrosis, leading to an accurate identification of a subset of HCV patients that present a high risk for complications. In addition, VCTE is a valuable technique in evaluating liver fibrosis prior to HCV therapy. However, its applicability in monitoring fibrosis regression after HCV eradication is currently limited and further studies should focus on extending the boundaries of VCTE in this context. From a different perspective, VCTE may be effective in identifying clinically significant portal hypertension (CSPH). An emerging prospect of clinical significance that warrants further study is the identification of esophageal varices. Our opinion is that the advantages of VCTE currently outweigh those of other surveillance methods. abstract_id: PUBMED:30070412 Liver stiffness measured by transient elastography as predictor of prognoses following portosystemic shunt occlusion. Background And Aim: This study aimed to identify predictors of model for end-stage liver disease sodium score reductions and improvements in vital prognoses following portosystemic shunt occlusion in portal hypertension patients. 
Methods: Seventy cirrhotic patients with major portosystemic shunts and a mean model for end-stage liver disease sodium score of 10.5 underwent balloon-occluded retrograde transvenous obliteration between February 2008 and March 2017. We calculated the scores before and 1 month after shunt occlusion. The long-term outcomes were monitored, and vital prognoses were analyzed. Results: The model for end-stage liver disease sodium score did not change significantly 1 month post-balloon-occluded retrograde transvenous obliteration, and the score decreased postoperatively in 31 (44.3%) patients. Univariate analyses showed that decline in the score after portosystemic shunt occlusion was strongly associated with hepatic encephalopathy as a procedural indication, lower liver volumes, and lower liver stiffness levels measured by transient elastography before treatment (P < 0.05). Multivariate logistic regression analysis identified preoperative liver stiffness level as an independent predictor of model for end-stage liver disease sodium score amelioration following balloon-occluded retrograde transvenous obliteration (P < 0.05), and receiver operating characteristic curve analysis determined a liver stiffness cutoff value of 21.6 kPa, with a sensitivity of 76.0% and specificity of 69.6%. The Kaplan-Meier method determined that overall survival rates after treatment in patients with liver stiffness < 21.6 kPa were significantly higher than in patients with liver stiffness ≥ 21.6 kPa (P < 0.05). Conclusions: Liver stiffness measured by transient elastography may predict improvements in model for end-stage liver disease sodium scores and in survival rates after portosystemic shunt occlusion in portal hypertension patients. abstract_id: PUBMED:24509356 Transient elastography as a predictor of oesophageal varices in patients with liver cirrhosis. Introduction: One of the most serious complications of liver cirrhosis is variceal bleeding. 
Early recognition of the oesophageal varices is of primary importance in the prevention of variceal bleeding. Endoscopy is the only means to directly visualize varices and measure their size, as one of the most important predictors of the risk of bleeding. During the course of cirrhosis, repeated oesophago-gastro-bulboscopic examinations are recommended. As these interventions are expensive and often poorly accepted by patients who may refuse further follow-up, there is a need for non-invasive methods to predict the progression of portal hypertension as well as the presence and the size of oesophageal varices. After several combinations of biological and ultrasonographical parameters proposed for the detection of advanced fibrosis, it was suggested that liver stiffness measured by transient elastography, a novel non-invasive technology, may reflect not only fibrosis and portal pressure but may even predict the presence or absence of large oesophageal varices in patients with cirrhosis. Aim: The aim of the authors was to study the diagnostic accuracy of transient elastography using FibroScan for selecting patients who are at risk of bearing large (Paquet-grade ≥ II) oesophageal varices and high risk of bleeding. Method: The authors performed upper tract endoscopy and transient elastography in 74 patients with chronic liver disease (27 patients with chronic hepatitis and 47 patients with liver cirrhosis). The relationships between the presence of oesophageal varices (Paquet-grade 0-IV) and liver stiffness (kPa), as well as the hematological and biochemical laboratory parameters (prothrombin international normalized ratio, platelet count, aspartate aminotransferase, alanine aminotransferase, albumin, and aspartate aminotransferase/platelet ratio index) were investigated. The predictive role of liver stiffness for screening patients with varices and those who are at high risk of variceal bleeding was also analysed. 
Results: Liver stiffness values significantly correlated with the grade of oesophageal varices (Paquet-grade) (r = 0.67, p < 0.0001). The liver stiffness value of 19.2 kPa was highly predictive for the presence of oesophageal varices (AUROC: 0.885, 95% CI: 0.81-0.96) and for the presence of high grade varices (P≥II) (AUROC: 0.850, 95% CI: 0.754-0.94). Using the cut-off value of 19.2 kPa, the sensitivity of transient elastography was 85%, specificity was 87%, positive predictive value was 85%, negative predictive value was 87% and validity was 86% for the detection of varices. Liver stiffness values less than 19.2 kPa were highly predictive for the absence of large (P≥II) varices (sensitivity, 95%; specificity, 70%; positive predictive value, 54%; negative predictive value, 97%). Conclusions: Transient elastography may help to screen patients who are at high risk of bearing large (P≥II) oesophageal varices which predict variceal bleeding and, therefore, need endoscopic screening. Liver stiffness values higher than 19.2 kPa indicate the need for oesophageal-gastro-bulboscopy, while liver stiffness values lower than 19.2 kPa make the presence of large oesophageal varices unlikely. abstract_id: PUBMED:33097282 Diagnostic accuracy of liver and spleen stiffness measured by fibroscan® in the prediction of esophageal varices in HCV-related cirrhosis patients treated with oral antivirals. Introduction: The aim of this study was to investigate the accuracy of liver and spleen stiffness measurement by transient elastography for the prediction of gastroesophageal varices in patients with HCV-associated cirrhosis treated with new direct-acting antiviral agents. Patients And Methods: This cross-sectional observational study included patients with compensated HCV-related cirrhosis and sustained virological response after direct-acting antiviral therapy. Patients underwent liver and spleen stiffness measurement, abdominal ultrasound and oesophago-gastroduodenoscopy. 
Clinical and laboratory data and non-invasive markers such as the liver stiffness-spleen diameter to platelet ratio score, variceal risk index and platelet count to spleen diameter ratio were analyzed. Results: Ninety-seven consecutive patients were included. Liver stiffness measurement (12.2 vs 16; p=0.02), spleen stiffness measurement (39.4 vs 46.05; p=0.04), liver stiffness-spleen diameter to platelet ratio score (1.21 vs 2.02; p=0.008), platelet count to spleen diameter ratio (1102.19 vs 829.7; p=0.04) and variceal risk index (-3.4 vs -1.02; p=0.01) showed significant differences between patients without/with gastroesophageal varices. The best cut-off value to discard the presence of gastroesophageal varices was 12.3 kPa for liver stiffness measurement and 27 kPa for spleen stiffness measurement. However, diagnostic accuracy was moderate (AUROC: 0.671 and 0.624 respectively). Combining different non-invasive parameters did not significantly improve the overall performance. Discussion: Liver and spleen stiffness measurement showed suboptimal results for non-invasive assessment of gastroesophageal varices in HCV cirrhotic patients treated with direct-acting antiviral agents. Our results suggest that non-invasive methods cannot substitute standard procedures for predicting gastroesophageal varices in this population. Answer: Yes, transient elastography, Fib-4, Forns Index, and Lok Score can predict esophageal varices in HCV-related cirrhotic patients. Studies have shown that these noninvasive methods correlate significantly with the presence and grade of esophageal varices (EV) in such patients. Transient elastography (TE) has been demonstrated to diagnose EV with a cutoff value of 18.2 kPa and can predict large varices (grade 2, 3) with a cutoff of 22.4 kPa, showing high accuracy (80%) and an area under the receiver operating characteristic (AUROC) of 0.801. 
The sensitivity was 84%, specificity 72%, positive predictive value (PPV) 84%, and negative predictive value (NPV) 72% (PUBMED:24365388). The Fib-4, Forns Index, and Lok Score have also been used to diagnose EV with their respective cutoff values. For instance, the Lok Score was found to be the best among serum scores for diagnosing varices, with a positive predictive value of 45.5%, negative predictive value of 86.4%, and diagnostic accuracy of 67.72% for predicting large varices when the value was higher than 0.8 (PUBMED:21451799). Moreover, combining TE with other diagnostic indices like serum fibrosis scores increased their sensitivities with little decrease in their specificities, suggesting that a combination of these methods could enhance the prediction of large esophageal varices (PUBMED:24365388). Additionally, the Lok Score has been shown to have significant potential as a predictive tool for esophageal varices, with a mean Lok Score significantly differing between individuals with and without varices (PUBMED:38060753). Furthermore, a combination of transient elastography and platelet count has been proposed as a noninvasive means to identify EV in patients with compensated cirrhosis, with certain cutoff values for LSM, FIB4 score, and splenic diameter being predictive of the presence of varices (PUBMED:37981928). In conclusion, transient elastography, Fib-4, Forns Index, and Lok Score can be used as noninvasive predictors of esophageal varices in HCV-related cirrhotic patients, potentially restricting the need for endoscopic screening and aiding in the noninvasive management of these patients.
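The varices studies above all report threshold-based diagnostics: a score (FIB-4, platelet count to spleen diameter ratio, liver stiffness) is compared against a cutoff, and performance is summarized as sensitivity, specificity, PPV and NPV. As an illustrative sketch only, using the standard published definitions of FIB-4 and the platelet-spleen ratio rather than code from any cited study, these quantities can be computed as:

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_per_l):
    """FIB-4 index: (age x AST) / (platelets [10^9/L] x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_10e9_per_l * math.sqrt(alt_u_l))

def platelet_spleen_ratio(platelets_per_mm3, spleen_diameter_mm):
    """Platelet count to spleen diameter ratio (PSR)."""
    return platelets_per_mm3 / spleen_diameter_mm

def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical patient, not drawn from any of the studies above:
score = fib4(60, 80, 64, 100)  # (60*80)/(100*sqrt(64)) = 6.0
psr = platelet_spleen_ratio(120000, 130)
```

With the cutoffs quoted above (e.g. FIB-4 ≥ 3.71 flags possible varices, PSR ≥ 771.33 helps rule them out), classification reduces to a simple threshold test, and 2x2 counts from endoscopy-confirmed cases yield the reported sensitivity/specificity pairs.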
Instruction: Shoulder acute pain in primary healthcare: is retraining effective for GP principals? Abstracts: abstract_id: PUBMED:18820310 Shoulder acute pain in primary healthcare: is retraining effective for GP principals? SAPPHIRE--a randomized controlled trial. Objective: To address the question whether general practitioners (GPs) should receive practical training in shoulder problems and to test whether cortisone injections are better than anaesthetic injections for rotator cuff problems. Methods: A pragmatic split-plot, randomized trial with a cluster factorial design, conducted in general practices across five centres across the United Kingdom. Ninety-one practices were randomized to receive additional training in diagnosing and injecting rotator cuff problems or no additional training. Two hundred patients consulting their general practices with shoulder pain were then randomized to receive either a corticosteroid or lignocaine injection. The main outcome was score on the British Shoulder Disability Questionnaire (BSDQ). The Short-Form 36-item Health Survey and EuroQol at 12 months from entry to the trial were also scored. Results: Over the course of the trial there was a mean difference of 0.94 (s.e. = 1.01) on the BSDQ score between the groups, with patients treated by the untrained group having a mean of 9.46 (s.e. = 0.82) and those by the trained group having a mean of 8.51 (s.e. = 0.60). There were no statistically significant differences between the groups. Analysing by substance injected, there was a mean difference of 0.15 (s.e. = 0.48) throughout the trial between the groups, with patients given the cortisone having a mean BSDQ of 9.67 (s.e. = 0.39) and those given lignocaine, 9.82 (s.e. = 0.39). This was not statistically significantly different. Conclusions: Training GPs in the diagnosis and treatment of shoulder disorders does not make any difference to the outcome, in terms of pain and disability, 1 yr later. 
Further, there is no advantage to injecting steroid in a group with predominant rotator cuff disorder. Trial registration. International Standard Randomized Controlled Trial Number 58537244. Trial steering committee comprised Prof. Paul Dieppe, Prof. Elaine Hay, Dr Brian Hazleman and Dr Kerenza Hood. abstract_id: PUBMED:9225712 The injured shoulder. Primary care assessment. Shoulder problems are the second most common orthopedic complaint in primary care medicine. The range of motion, ligamentous and muscular support, and central location of the shoulder are key factors for the successful performance of persons at work or on the playing field. These special attributes also contribute to injury and to difficulty in assessing the painful shoulder. An understanding of the pertinent anatomic structures, the differential diagnosis of shoulder pain (intrinsic and referred pains), and a systematic approach to the evaluation including a complete history and physical examination are necessary in this assessment. Adequate examination consists of inspection, muscle strength and range-of-motion testing, palpation, and neurologic testing of the shoulder, neck, and elbow followed by special tests to detect impingement, instability, or tendinosis. This basic assessment is augmented by the proper use of radiographs, arthrography, computed tomography, ultrasonography, and magnetic resonance imaging. An adequate database and proper assessment of the injured shoulder allow the primary care physician to make a pathoanatomic diagnosis and formulate an appropriate treatment plan and make appropriate use of orthopedic consultants. abstract_id: PUBMED:35461039 Physiotherapy and health education protocol in chronic musculoskeletal shoulder pain. Experience in Primary Care. Objective: To evaluate the experience with a health education program in Primary Care in patients with chronic shoulder pain of musculoskeletal origin, assessing its effect on pain and disability, and to establish the protocol in primary care. 
Design: Quasi-experimental longitudinal descriptive observational study. Location: Arroyo de la Vega Health Center, Alcobendas, Madrid. Participants: Patients referred by their Primary Care Physician to the Primary Care Physiotherapy Unit for shoulder pain of musculoskeletal origin. Intervention: 7 group sessions of health education and therapeutic exercise. Main Measurements: Pain intensity was assessed through the Visual Analogue Scale (VAS), the disability of the upper limb with the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire and the level of disability and shoulder pain with the Shoulder Pain and Disability Index (SPADI) questionnaire. Results: Statistically significant differences were found in the reduction of pain and disability (P < .01); in addition, drug use and recurrences were reduced. Conclusions: The shoulder physiotherapy protocol with health education was effective in reducing pain and disability in patients with chronic shoulder pain of musculoskeletal origin in Primary Care. abstract_id: PUBMED:37997602 Revision reverse shoulder arthroplasty has similar outcomes to primary reverse shoulder arthroplasty at 5 Year average follow-up. Background: Shoulder arthroplasty is a successful procedure that provides pain relief and improvements in function and range of motion. Anatomic and reverse shoulder arthroplasty are both effective procedures, and their indications continue to expand. We look at the outcomes of revision reverse total shoulder arthroplasty and compare it to the outcomes of primary reverse and anatomic total shoulder arthroplasty. Methods: We identified patients undergoing total shoulder arthroplasty at our institution between the years of 2010 and 2020. Data were prospectively collected and retrospectively reviewed for post-operative range of motion and strength in patients with revision surgery and compared to controls. 
Measurements were collected preoperatively and postoperatively including range of motion and strength in the affected and unaffected shoulder. We collected patient reported outcome measures in person and via phone to identify subjective outcomes of total shoulder arthroplasty. Average final follow-up was 5.27 years. Results: Our total patient sample was split among three groups: those who underwent primary anatomic arthroplasty, those who underwent primary reverse arthroplasty, and those who were revised to a reverse shoulder arthroplasty. All three groups had significant improvements in abduction and forward elevation from their pre-operative baseline to two years follow-up. Primary reverse had a significant improvement over revision reverse in abduction at one year follow-up. For all other range of motion measurements, there was no statistically significant difference at 2 years between primary and revision reverse shoulder arthroplasty. Patient reported outcomes had a significant increase from pre-op to most recent follow-up in all three groups. Conclusion: Overall, our data suggest there is an improvement in outcomes with both primary and revision surgeries, and that results after revision reverse total shoulder arthroplasty may be comparable to primary reverse total shoulder arthroplasty. abstract_id: PUBMED:28932287 The effectiveness of ultrasound guided hydrodistension and physiotherapy in the treatment of frozen shoulder/adhesive capsulitis in primary care: a single centre service evaluation. Background: Evidence for optimal non-operative treatment of frozen shoulder is lacking. The present study aimed to evaluate a treatment strategy for stage II to III frozen shoulder provided by the current primary care musculoskeletal service. 
Methods: General practitioner referrals of shoulder pain to the musculoskeletal service diagnosed with stage II to III frozen shoulder and who opted for a treatment strategy of hydrodistension and guided physiotherapy exercise programme over a 12-month period were evaluated for 6 months. Thirty-three patients were diagnosed with stage II to III frozen shoulder by specialist physiotherapists and opted for the treatment strategy. Outcome measures included Shoulder Pain Disability Index (SPADI) and Shortened Disabilities of the Arm, Shoulder and Hand (QuickDASH), pain score and range of movement. Data were collected at baseline, as well as at 6 weeks, 12 weeks and 6 months. Results: All patients significantly improved in shoulder symptoms on the SPADI and QuickDASH scores (p < 0.001). Pain scores and range of shoulder movement flexion, abduction, external rotation showed significant improvement at all time points (p < 0.001). Conclusions: This service evaluation demonstrates that management of frozen shoulder stage II to III, as conducted by physiotherapists in a primary care setting utilizing hydrodistension and a guided exercise programme, represents an effective non-operative treatment strategy. abstract_id: PUBMED:8903700 Implementation of neck/shoulder rehabilitation in primary health care clinics. Objective: To provide primary health care practitioners with the practical information necessary to implement rehabilitation of patients with neck/shoulder pain in clinical facilities. Equipment and program design are discussed in detail. Data Source: A Medline literature search was performed. English and Scandinavian language references were included in the review. Key words included neck/shoulder pain, neck pain, rehabilitation, equipment, exercise. Study Selection: The studies selected included clinical trials in which rehabilitation was the main intervention in the treatment of patients with neck/shoulder pain. 
Data Synthesis: Reestablishing functional capacities of the musculature of the cervical spine can be accomplished with modest equipment and can be easily incorporated into the clinical setting. The extensors of the cervical spine are the most important muscle group that require rehabilitation and exhibit the highest values in all strength parameters. High-intensity rehabilitative protocols accompanied by the encouragement of fellow patients and support from personnel are essential for treatment success. Conclusions: Complete primary health care treatment of patients with chronic, recurrent and post-traumatic neck/shoulder pain will necessitate longer rehabilitation programs. Future studies will provide additional information regarding specific program design. abstract_id: PUBMED:34387955 Shoulder patients in primary and specialist health care. A cross-sectional study. Objectives: Shoulder pain is a prevalent problem and has a considerable impact on the use of primary and specialist health care in Norway. It is important to improve short-term recovery and reduce long-term consequences regarding pain and disability, the high costs of treatment and the amount of sick-leave. Treatment for non-specific shoulder pain is mainly non-operative. The aims of this study were to investigate if there are differences in main characteristics, pain and disability (SPADI-score) and psycho-social factors between patients in primary and specialist health care. Methods: This cross-sectional study included patients consulting physiotherapy in primary health care and patients at an outpatient clinic in specialist health care. Well-known and tested questionnaires for these populations were used and variables were divided into clinical, sociodemographic, psycho-social, and shoulder pain and disability. Descriptive statistics were applied. 
Two-sample t-test and linear regression were used for continuous data whereas chi-square tests and logistic regression were applied to test differences in categorical data between the two study populations. Results: Two hundred and thirty-six patients were recruited from primary health care (FYSIOPRIM, Physiotherapy In Primary Care) and 167 from specialist health care. Patients in primary health care reported less regular use of pain medication (30.7 vs 61.3%) and fewer patients had symptom duration >12 months (41.9 vs. 51.0%). Furthermore, they reported lower pain intensity, less shoulder pain and disability (SPADI-score), lower scores on psycho-social factors, but higher on expectations of recovery. Conclusions: Patients with shoulder pain treated in primary health care and in specialist health care are different according to factors such as duration of symptoms, pain and disability, and some of the psycho-social variables. However, the differences are small and the variation within the two study samples is large. Patients treated in primary health care seemed to be less affected and to have higher expectations concerning their recovery. However, based on our results we may question why many patients are referred to specialist health care rather than continuing treatment in primary health care. abstract_id: PUBMED:8903699 Rehabilitation of neck/shoulder patients in primary health care clinics. Objective: To review the literature pertaining to neck/shoulder pain and its treatment primarily through rehabilitation. A discussion regarding the theoretical basis for clinical rehabilitation is presented. Data Source: A Medline literature search was performed. English and Scandinavian language articles were reviewed. Key words included neck/shoulder pain, neck pain, rehabilitation and exercise. 
Study Selection: The studies included used rehabilitation of the cervical musculature as the main treatment therapy on patients suffering from neck/shoulder pain as defined by the Standardized Nordic Questionnaires. Data Synthesis: The relatively new concept of neck/shoulder rehabilitation seems to result in pain reduction, restoration of functional values and psychological benefit for patients suffering from chronic, recurrent, post-traumatic or postoperative neck/shoulder pain. Successful protocols seem to be dependent upon sufficient dosage, duration and a relative disregard of pain. Side effects seem to be rare. Conclusions: Rehabilitation of patients with neck/shoulder pain offers great promise; sufficient documentation exists to warrant serious consideration from all practitioners in the primary health sector. Future studies will be necessary to further clarify which rehabilitative procedures are most suitable for different patient types. abstract_id: PUBMED:29798518 Effectiveness of arthroscopic capsular release to treat primary severe frozen shoulder through trans cuff portal Objective: To assess the effectiveness of arthroscopic capsular release to treat primary severe frozen shoulder through trans cuff portal. Methods: Between June 2012 and January 2015, 28 patients with primary severe frozen shoulder were enrolled in the study. There were 8 males and 20 females with an average age of 57 years (range, 42-81 years). The left shoulder was involved in 16 cases and the right one in 12 cases. The mean disease duration was 11 months (range, 7-21 months). Six patients had diabetes. All patients underwent arthroscopic capsular release by trans cuff portal. The range of motion (ROM) of the shoulder were measured at preoperation and at 6 weeks and 24 months after operation; the scores of American Shoulder and Elbow Surgeons (ASES) and visual analogue scale (VAS) were used to evaluate the shoulder function and pain. 
Results: Primary healing of the incision was obtained, and no complications of infection, shoulder instability, and nerve injury were found. All patients were followed up for 24 months. Pain of the shoulder was obviously relieved; VAS score was significantly lower at 6 weeks and 24 months after operation than preoperation (P < 0.05), and at 24 months than 6 weeks (P < 0.05). ROM of the shoulder and ASES score at 6 weeks and 24 months after operation were significantly increased when compared with preoperative ones (P < 0.05); significant difference was found in ROM of forward flexion and external rotation and the ASES score between 6 weeks and 24 months (P < 0.05). Internal rotation increased from the trochanter (9 cases), hip (6 cases), sacrum (7 cases), and L4 vertebral level (6 cases) before operation to the T12-T6 vertebral level at 6 weeks and 24 months after operation, which was close to the normal side. Conclusion: Arthroscopic capsular release through trans cuff portal is an effective and safe management for primary severe frozen shoulder. abstract_id: PUBMED:15361390 A comparison of four shoulder-specific questionnaires in primary care. Objectives: To compare the validity, responsiveness to change, and user friendliness of four self completed, shoulder-specific questionnaires in primary care. Methods: A cross sectional assessment of validity and a longitudinal assessment of responsiveness to change of four shoulder questionnaires was carried out: the Dutch Shoulder Disability Questionnaire (SDQ-NL); the United Kingdom Shoulder Disability Questionnaire (SDQ-UK); and two American instruments, the Shoulder Pain and Disability Index (SPADI) and the Shoulder Rating Questionnaire (SRQ). 180 primary care consulters with new shoulder region pain each completed two of the questionnaires, as well as EuroQoL and 10 cm visual analogue scales (VAS) for overall pain and difficulty due to the shoulder problem. 
Each participant was assessed by a standardised clinical schedule. Postal follow up at 6 weeks included baseline measures and self rated assessment of global change of the shoulder problem (seven point Likert scale). Results: Strongest correlations were found for SDQ-UK with EuroQoL 5 score, and for SPADI and SRQ with shoulder pain and difficulty VAS. All shoulder questionnaires correlated poorly with active movement at the painful shoulder. SPADI and SRQ performed better on ROC analysis than SDQ-NL and SDQ-UK (areas under the curve of 0.87, 0.85, 0.77, and 0.77, respectively). However, SRQ scores changed significantly over time in stable subjects. Conclusions: Cross sectional comparison of the four shoulder questionnaires showed they had similar overall validity and patient acceptability. SPADI and SRQ were most responsive to change. Additionally, SPADI was the quickest to complete and scores did not change significantly in stable subjects. Answer: The SAPPHIRE trial addressed the effectiveness of retraining general practitioners (GPs) in diagnosing and injecting rotator cuff problems and whether cortisone injections are superior to anaesthetic injections for these issues. The study found that training GPs in the diagnosis and treatment of shoulder disorders did not make any difference to the outcome in terms of pain and disability one year later. Additionally, there was no advantage to injecting steroid over lignocaine in patients with predominant rotator cuff disorder (PUBMED:18820310). Therefore, based on this trial, retraining GP principals in the specific area of shoulder acute pain management, at least in terms of injection techniques for rotator cuff problems, does not appear to be effective.
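The SAPPHIRE results above are reported as mean differences with standard errors (e.g. 0.94, s.e. 1.01 on the BSDQ). A quick way to see why such a difference is not statistically significant is the usual z statistic; the sketch below is an illustration only, not the trial's actual analysis (which accounted for its cluster design):

```python
import math

def z_test_from_diff(mean_diff, se):
    """Two-sided z test for a mean difference given its standard error,
    assuming approximate normality of the estimate."""
    z = mean_diff / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

# BSDQ difference between patients of untrained vs. trained GPs:
z, p = z_test_from_diff(0.94, 1.01)  # |z| < 1.96, so p is well above 0.05
```

A difference smaller than about twice its standard error will not reach the conventional 0.05 threshold, which matches the trial's "no statistically significant differences" conclusion.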
Instruction: Outcomes after acute ischemic stroke in the United States: does residential ZIP code matter? Abstracts: abstract_id: PUBMED:25773298 Outcomes after acute ischemic stroke in the United States: does residential ZIP code matter? Background: We sought to analyze the impact of socioeconomic status (SES) on in-hospital outcomes, cost of hospitalization, and resource use after acute ischemic stroke. Methods And Results: We used the 2003-2011 Nationwide Inpatient Sample database for this analysis. All admissions with a principal diagnosis of acute ischemic stroke were identified by using International Classification of Diseases, Ninth Revision codes. SES was assessed by using median household income of the residential ZIP code for each patient. Quartile 1 and quartile 4 reflect the lowest-income and highest-income SES quartile, respectively. During a 9-year period, 775,905 discharges with acute ischemic stroke were analyzed. There was a progressive increase in the incidence of reperfusion on the first admission day across the SES quartiles (P-trend<0.001). In addition, we observed a significant reduction in discharge to nursing facility across the SES quartiles (P-trend<0.001). Although we did not observe a significant difference in in-hospital mortality across the SES quartiles in the overall cohort (P-trend=0.22), there was a significant trend toward reduced in-hospital mortality across the SES quartiles in younger patients (<75 years) (P-trend<0.001). The mean length of stay in the lowest-income quartile was 5.75 days, which was significantly higher compared with other SES quartiles. Furthermore, the mean adjusted cost of hospitalization among quartiles 2, 3, and 4, compared with quartile 1, was significantly higher by $621, $1238, and $2577, respectively.
Compared with the lowest-income quartile, there was a significantly higher use of echocardiography, invasive angiography, and operative procedures, including carotid endarterectomy, in the highest-income quartile. Conclusions: Patients from lower-income quartiles had decreased reperfusion on the first admission day, compared with patients from higher-income quartiles. The cost of hospitalization of patients from higher-income quartiles was significantly higher than that of patients from the lowest-income quartiles, despite longer hospital stays in the latter group. This might be partially attributable to a lower use of key procedures among patients from the lowest-income quartile. abstract_id: PUBMED:30693424 White Matter Integrity and Early Outcomes After Acute Ischemic Stroke. Chronic white matter structural injury is a risk factor for poor long-term outcomes after acute ischemic stroke (AIS). However, it is unclear how white matter structural injury predisposes to poor outcomes after AIS. To explore this question, in 42 AIS patients with moderate to severe white matter hyperintensity (WMH) burden, we characterized WMH and normal-appearing white matter (NAWM) diffusivity anisotropy metrics in the hemisphere contralateral to acute ischemia in relation to ischemic tissue and early functional outcomes. All patients underwent brain MRI with dynamic susceptibility contrast perfusion and diffusion tensor imaging within 12 h and at day 3-5 post stroke. Early neurological outcomes were measured as the change in NIH Stroke Scale score from admission to day 3-5 post stroke. Target mismatch profile, percent mismatch lost, infarct growth, and rates of good perfusion were measured to assess ischemic tissue outcomes. NAWM mean diffusivity was significantly lower in the group with early neurological improvement (ENI; 0.79 vs. 0.82 × 10⁻³ mm²/s; P = 0.02).
In multivariable logistic regression, NAWM mean diffusivity was an independent radiographic predictor of ENI (β = -17.6, P = 0.037). Median infarct growth was 118% (IQR 26.8-221.9%) despite good reperfusion being observed in 65.6% of the cohort. NAWM and WMH diffusivity metrics were not associated with target mismatch profile, percent mismatch lost, or infarct growth. Our results suggest that, in AIS patients, white matter structural integrity is associated with poor early neurological outcomes independent of ischemic tissue outcomes. abstract_id: PUBMED:28481164 Diffuse microvascular dysfunction and loss of white matter integrity predict poor outcomes in patients with acute ischemic stroke. We sought to investigate the relationship between blood-brain barrier (BBB) permeability and microstructural white matter integrity, and their potential impact on long-term functional outcomes in patients with acute ischemic stroke (AIS). We studied 184 AIS subjects with perfusion-weighted MRI (PWI) performed <9 h from last known well time. White matter hyperintensity (WMH), acute infarct, and PWI-derived mean transit time lesion volumes were calculated. Mean BBB leakage rates (K2 coefficient) and mean diffusivity values were measured in contralesional normal-appearing white matter (NAWM). Plasma matrix metalloproteinase-2 (MMP-2) levels were studied at baseline and 48 h. Admission stroke severity was evaluated using the NIH Stroke Scale (NIHSS). Modified Rankin Scale (mRS) was obtained at 90 days post-stroke. We found that higher mean K2 and diffusivity values correlated with age, elevated baseline MMP-2 levels, greater NIHSS, and worse 90-day mRS (all p < 0.05). In multivariable analysis, WMH volume was associated with mean K2 (p = 0.0007) and diffusivity (p = 0.006) values in contralesional NAWM. In summary, WMH severity measured on brain MRI of AIS patients is associated with metrics of increased BBB permeability and abnormal white matter microstructural integrity.
In future studies, these MRI markers of diffuse cerebral microvascular dysfunction may improve prediction of cerebral tissue infarction and functional post-stroke outcomes. abstract_id: PUBMED:35750164 The effect of long-term exposure to air pollution and seasonal temperature on hospital admissions with cardiovascular and respiratory disease in the United States: A difference-in-differences analysis. Background: Few studies have simultaneously examined the effect of long-term exposure to air pollution and ambient temperature on the rate of hospital admissions with cardiovascular and respiratory disease using causal inference methods. Methods: We used a variation of a difference-in-difference (DID) approach to assess the effects of long-term exposure to warm-season temperature, cold-season temperature, NO2, O3, and PM2.5 on the rate of hospital admissions for cardiovascular disease (CVD), myocardial infarction (MI), ischemic stroke, and respiratory diseases from 2001 to 2016 among Medicare beneficiaries who use fee-for-service programs. We computed the rate of admissions by zip code and year. Covariates included demographic and socioeconomic variables which were obtained from the decennial Census, the American Community Survey, the Behavioral Risk Factor Surveillance System, and the Dartmouth Health Atlas. As a secondary analysis, we restricted the analysis to zip code-years that had exposure to low concentrations of our pollutants. Results: PM2.5 was associated with a significant increase in the absolute rate of annual admissions with cardiovascular disease by 47.71 admissions (95% CI: 41.25-56.05) per 100,000 person-years, myocardial infarction by 7.44 admissions (95% CI: 5.53-9.63) per 100,000 person-years, and 18.58 respiratory admissions (95% CI: 12.42-23.72) for each one μg/m3 increase in two-year average levels. O3 significantly increased the rates of all the studied outcomes.
NO2 was associated with a decreased rate of admissions with MI by 0.83 admissions (95% CI: 0.10-1.55) per 100,000 person-years but an increased rate of admissions for respiratory disease by 3.16 admissions (95% CI: 1.34-5.24) per 100,000 person-years. Warmer cold-season temperature was associated with a decreased admissions rate for all outcomes. Conclusion: Air pollutants, particularly PM2.5 and O3, increased the rate of hospital admissions with cardiovascular and respiratory disease among the elderly, while higher cold-season temperatures decreased the rate of admissions with these conditions. abstract_id: PUBMED:36842351 Household income is associated with functional outcomes in a multi-institutional cohort of patients with ischemic stroke and COVID-19. Objectives: The COVID-19 pandemic has heightened awareness of health disparities associated with socioeconomic status (SES) across the United States. We examined whether household income is associated with functional outcomes after stroke and COVID-19. Materials And Methods: This was a multi-institutional, retrospective cohort study of consecutively hospitalized patients with SARS-CoV-2 and radiographically confirmed stroke presenting from March through November 2020 to any of five comprehensive stroke centers in metropolitan Chicago, Illinois, USA. Zip-code-derived household income was dichotomized at the Chicago median. Logistic regression was used to examine the relationship between household income and good functional outcome (modified Rankin Scale 0-3 at discharge, after ischemic stroke). Results: Across five hospitals, 159 patients were included. Black patients comprised 48.1%, White patients 38.6%, and Hispanic patients 27.7%. Median household income was $46,938 [IQR: $32,460-63,219]. Ischemic stroke occurred in 115 (72.3%) patients (median NIHSS 7, IQR: 0.5-18.5) and hemorrhagic stroke in 37 (23.7%).
When controlling for age, sex, severe COVID-19, and NIHSS, patients with ischemic stroke and household income above the Chicago median were more likely to have a good functional outcome at discharge (OR 7.53, 95% CI 1.61-45.73; P=0.016). Race/ethnicity were not included in final adjusted models given collinearity with income. Conclusions: In this multi-institutional study of hospitalized patients with stroke, those residing in higher SES zip codes were more likely to have better functional outcomes, despite controlling for stroke severity and COVID-19 severity. This suggests that area-based SES factors may play a role in outcomes from stroke and COVID-19. abstract_id: PUBMED:34566844 Excessive White Matter Hyperintensity Increases Susceptibility to Poor Functional Outcomes After Acute Ischemic Stroke. Objective: To personalize the prognostication of post-stroke outcome using MRI-detected cerebrovascular pathology, we sought to investigate the association between the excessive white matter hyperintensity (WMH) burden unaccounted for by the traditional stroke risk profile of individual patients and their long-term functional outcomes after a stroke. Methods: We included 890 patients who survived after an acute ischemic stroke from the MRI-Genetics Interface Exploration (MRI-GENIE) study, for whom data on vascular risk factors (VRFs), including age, sex, atrial fibrillation, diabetes mellitus, hypertension, coronary artery disease, smoking, prior stroke history, as well as acute stroke severity, 3- to 6-month modified Rankin Scale score (mRS), WMH, and brain volumes, were available. We defined the unaccounted WMH (uWMH) burden via modeling of expected WMH burden based on the VRF profile of each individual patient. The association of uWMH and mRS score was analyzed by linear regression analysis. The odds ratios of patients who achieved full functional independence (mRS < 2) between trichotomized uWMH burden groups were calculated by pair-wise comparisons.
Results: The expected WMH volume was estimated with respect to known VRFs. The uWMH burden was associated with a long-term functional outcome (β = 0.104, p < 0.01). Excessive uWMH burden significantly reduced the odds of achieving full functional independence after a stroke compared to the low and average uWMH burden [OR = 0.4, 95% CI: (0.25, 0.63), p < 0.01 and OR = 0.61, 95% CI: (0.42, 0.87), p < 0.01, respectively]. Conclusion: The excessive amount of uWMH burden unaccounted for by the traditional VRF profile was associated with worse post-stroke functional outcomes. Further studies are needed to evaluate a lifetime brain injury reflected in WMH unrelated to the VRF profile of a patient as an important factor for stroke recovery and a plausible indicator of brain health. abstract_id: PUBMED:27212273 Racial Differences in Outcomes after Acute Ischemic Stroke Hospitalization in the United States. Background And Objectives: Racial differences in stroke outcomes have major health policy implications. There is a paucity of contemporary data on racial differences in clinical outcomes and resource utilization in acute ischemic stroke hospitalizations in the United States. Methods: We used the 2011-2012 National Inpatient Sample to identify hospitalizations with a primary diagnosis of acute ischemic stroke. Primary outcomes were in-hospital mortality, utilization of thrombolysis, and endovascular mechanical thrombectomy (EMT). Secondary outcomes were length of stay (LOS) and average inflation-adjusted charges. Results: A total of 173,910 hospitalizations representing 835,811 hospitalizations nationwide were included in the study. Mean age was 70.9 years and 52.3% were women. Blacks (adjusted OR .71, 95% CI .64-.78, P < .001) and Asian or Pacific Islanders (adjusted OR .80, 95% CI .66-.97, P = .02) had a lower in-hospital mortality compared to Whites.
Blacks were less likely to be treated with thrombolysis (adjusted OR .84, 95% CI .76-.92, P < .001) and EMT (OR .73, 95% CI .58-.91, P = .01). Average LOS and inflation-adjusted charges were significantly higher for racial minorities compared to Whites. Conclusions: Blacks and Asians hospitalized for ischemic stroke are less likely to die in the hospital compared to Whites. Hospitalization for stroke in Blacks is associated with lower rates of reperfusion therapy, longer lengths of stay, and higher costs compared to Whites. abstract_id: PUBMED:34649872 Acute Ischemic Stroke Interventions in the United States and Racial, Socioeconomic, and Geographic Disparities. Background And Objectives: In patients with ischemic stroke (IS), IV alteplase (tissue plasminogen activator [tPA]) and endovascular thrombectomy (EVT) reduce long-term disability, but their utilization has not been fully optimized. Prior research has also demonstrated disparities in the use of tPA and EVT specific to sex, race/ethnicity, socioeconomic status, and geographic location. We sought to determine the utilization of tPA and EVT in the United States from 2016-2018 and if disparities in utilization persist. Methods: This is a retrospective, longitudinal analysis of the 2016-2018 National Inpatient Sample. We included adult patients who had a primary discharge diagnosis of IS. The primary study outcomes were the proportions who received tPA or EVT. We fit a multivariate logistic regression model to our outcomes in the full cohort and also in the subset of patients who had an available baseline National Institutes of Health Stroke Scale (NIHSS) score. Results: The full cohort after weighting included 1,439,295 patients with IS. The proportion who received tPA increased from 8.8% in 2016 to 10.2% in 2018 (p < 0.001) and who had EVT from 2.8% in 2016 to 4.9% in 2018 (p < 0.001).
Comparing Black to White patients, the odds ratio (OR) of receiving tPA was 0.82 (95% confidence interval [CI] 0.79-0.86) and for having EVT was 0.75 (95% CI 0.70-0.81). Comparing patients with a median income in their zip code of ≤$37,999 to >$64,000, the OR of receiving tPA was 0.81 (95% CI 0.78-0.85) and for having EVT was 0.84 (95% CI 0.77-0.91). Comparing patients living in a rural area to a large metro area, the OR of receiving tPA was 0.48 (95% CI 0.44-0.52) and for having EVT was 0.92 (95% CI 0.81-1.05). These associations were largely maintained after adjustment for NIHSS, although the effect size changed for many of them. Contrary to prior reports with older datasets, sex was not consistently associated with tPA or EVT. Discussion: Utilization of tPA and EVT for IS in the United States increased from 2016 to 2018. There are racial, socioeconomic, and geographic disparities in the accessibility of tPA and EVT for patients with IS, with important public health implications that require further study. abstract_id: PUBMED:36007263 Association between white matter hyperintensity volume and social functioning limitations among stroke survivors. Objective: Existing literature on white matter hyperintensity volume (WMHV) in stroke patients has rarely focused on post-stroke outcomes related to social functioning limitations, such as transportation, social interaction, food preparation, grocery shopping, and housekeeping. Using prospective data from the VITamin D and OmegA-3 TriaL (VITAL) study, we evaluated the association between WMHV and social functioning limitations among 151 ischemic stroke patients. Materials And Methods: WMHV was ascertained from magnetic resonance imaging (MRI) collected at the time of the stroke event using a validated semiautomated method, and social functioning limitations were assessed using a stroke outcomes questionnaire administered a median of 1.25 years after the date of the MRI scan.
Logistic regression was used to explore the association between WMHV and social functioning limitations. Results: After adjusting for age and sex, a statistically significant association was found between WMHV and limitations in social interaction (OR=2.82; 95% CI: 1.21-7.55). Increased risks were seen for limitations related to food preparation (OR=2.06; 95% CI: 0.99-4.54), transportation (OR=1.39; 95% CI: 0.85-2.27), and housekeeping (OR=1.37; 95% CI: 0.91-2.11); however, the associations did not reach statistical significance. We observed no association between WMHV and limitations in grocery shopping (OR=1.08; 95% CI: 0.61-1.89). Conclusions: Future studies are needed to further explore the biological mechanisms underlying the relationship with limitations in social interaction and to replicate our findings using a larger and more diverse study sample. abstract_id: PUBMED:29127272 Structural Integrity of Normal Appearing White Matter and Sex-Specific Outcomes After Acute Ischemic Stroke. Background And Purpose: Women have worse poststroke outcomes than men. We evaluated sex-specific clinical and neuroimaging characteristics of white matter in association with functional recovery after acute ischemic stroke. Methods: We performed a retrospective analysis of acute ischemic stroke patients with admission brain MRI and 3- to 6-month modified Rankin Scale score. White matter hyperintensity and acute infarct volume were quantified on fluid-attenuated inversion recovery and diffusion tensor imaging MRI, respectively. Diffusivity anisotropy metrics were calculated in normal appearing white matter contralateral to the acute ischemia. Results: Among 319 patients with acute ischemic stroke, women were older (68.0 versus 62.7 years; P=0.004), had increased incidence of atrial fibrillation (21.4% versus 12.2%; P=0.04), and lower rate of tobacco use (21.1% versus 35.9%; P=0.03). 
There was no sex-specific difference in white matter hyperintensity volume, acute infarct volume, National Institutes of Health Stroke Scale, prestroke modified Rankin Scale score, or normal appearing white matter diffusivity anisotropy metrics. However, women were less likely to have an excellent outcome (modified Rankin Scale score <2: 49.6% versus 67.0%; P=0.005). In logistic regression analysis, female sex and the interaction of sex with fractional anisotropy, radial diffusivity, and axial diffusivity were independent predictors of functional outcome. Conclusions: Female sex is associated with decreased likelihood of excellent outcome after acute ischemic stroke. The correlation between markers of white matter integrity and functional outcomes in women, but not men, suggests a potential sex-specific mechanism. Answer: Yes, residential ZIP code does matter for outcomes after acute ischemic stroke in the United States. Studies have shown that socioeconomic status (SES), often inferred from residential ZIP code, impacts various aspects of stroke care and recovery. Patients from lower-income quartiles, as indicated by their ZIP codes, had decreased reperfusion on the first admission day compared with patients from higher-income quartiles (PUBMED:25773298). Furthermore, the cost of hospitalization for patients from higher-income quartiles was significantly higher than that for patients from the lowest-income quartiles, despite longer hospital stays in the latter group. This discrepancy might be partially due to a lower use of key procedures among patients from the lowest-income quartile (PUBMED:25773298). Additionally, household income, which can be associated with ZIP code, has been linked to functional outcomes after stroke.
In a multi-institutional study, patients with ischemic stroke and household income above the median for Chicago were more likely to have a good functional outcome at discharge when controlling for age, sex, severe COVID-19, and stroke severity (PUBMED:36842351). Racial disparities also intersect with geographic and socioeconomic factors. For instance, Blacks and Asians hospitalized for ischemic stroke are less likely to die in the hospital compared to Whites, but hospitalization for stroke in Blacks is associated with lower rates of reperfusion therapy, longer lengths of stay, and higher costs compared to Whites (PUBMED:27212273). Moreover, there are racial, socioeconomic, and geographic disparities in the accessibility of acute ischemic stroke interventions such as IV alteplase and endovascular thrombectomy, with Black patients and those from lower-income ZIP codes being less likely to receive these treatments (PUBMED:34649872). In summary, residential ZIP code, as a proxy for SES, is associated with differences in stroke treatment, cost of care, and functional outcomes in the United States, highlighting the influence of socioeconomic factors on health disparities in acute ischemic stroke care and recovery.
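The disparities summarized in the answer above are quantified as odds ratios from logistic regression models. As a hedged, minimal sketch (the 2x2 counts below are hypothetical and not drawn from any cited study; real analyses additionally adjust for covariates), an unadjusted odds ratio and its Wald 95% confidence interval can be computed as follows:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Unadjusted odds ratio with Wald 95% CI from a 2x2 table.

    a, b: patients who did / did not receive treatment in group 1
    c, d: patients who did / did not receive treatment in group 2
    """
    or_ = (a / b) / (c / d)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, lo, hi

# Hypothetical example: 80 of 1,000 patients treated in one ZIP-code
# income group versus 100 of 1,000 in another.
or_, lo, hi = odds_ratio_ci(80, 920, 100, 900)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A confidence interval that crosses 1.0 would indicate no statistically significant difference between groups; the adjusted odds ratios reported in the abstracts further control for covariates such as age, sex, and stroke severity.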
Instruction: Postoperative septic arthritis after anterior cruciate ligament reconstruction: does it affect the outcome? Abstracts: abstract_id: PUBMED:10496570 Outcomes of postoperative septic arthritis after anterior cruciate ligament reconstruction. Arthroscopically guided reconstruction of the anterior cruciate ligament is a common orthopaedic procedure. While many associated complications have been described in the literature, postoperative septic arthritis has received little attention. Although rare after anterior cruciate ligament reconstruction, septic arthritis can have devastating consequences. From a group of 831 consecutive patients, we report 4 (0.48%) who sustained septic arthritis. All patients had similar symptoms and were treated by the same surgeon in the same manner. All underwent immediate arthroscopic lavage, open incision, drainage of associated wounds, debridement with graft retention, and treatment with intravenous and then oral antibiotics. The patients underwent an average of 2.75 procedures after the diagnosis to eradicate the infection and restore knee motion. All patients were evaluated at an average of 3 years after surgery. We found that previous knee surgery and meniscal repair were risk factors for the development of postoperative septic arthritis. The infection was successfully eradicated, the ligament graft was preserved, and knee stability and mobility were adequately restored in all patients. However, the clinical outcome of these patients appeared to be inferior to that of patients who had undergone uncomplicated anterior cruciate ligament reconstruction. This inferior outcome appeared to be secondary to damage to the articular cartilage from the infection. abstract_id: PUBMED:35254803 Postoperative Infection After Anterior Cruciate Ligament Reconstruction: Prevention and Management. 
Because septic arthritis after anterior cruciate ligament reconstruction is an uncommon complication, information on prevention strategies has not been extensively studied. Recommendations that can be made from the available evidence are as follows: (1) prophylactic intravenous antibiotics should be given preoperatively; (2) patellar tendon autograft use decreases the possibility of a postoperative infection; and (3) vancomycin presoaking of grafts is strongly recommended, especially when performing hamstring autograft anterior cruciate ligament reconstruction. When septic arthritis presents after anterior cruciate ligament reconstruction, early management is crucial to allow a satisfactory outcome. Accordingly, early clinical suspicion is the most important factor to achieve a timely diagnosis. The treatment of choice is arthroscopic lavage with intravenous antibiotic treatment. Graft retention, when possible, is important for obtaining better functional results. abstract_id: PUBMED:11964649 Septic arthritis in postoperative anterior cruciate ligament reconstruction. A review of postoperative infected anterior cruciate ligament reconstructions was done on 3500 consecutive arthroscopic procedures. The purpose was to assess incidence, diagnosis, treatment, and outcome factors. Six postoperative intraarticular infections were detected. Average followup was 3 years (range, 2-8 years). The rate of infection was 0.14%. Five men and one woman with a median age of 32.5 years (range, 20-51 years) comprised the study group. The average interval from the onset of symptoms to the initial arthroscopic intervention was 7.5 days (range, 2-20 days). Staphylococcus aureus was present in three knees, Staphylococcus epidermidis in two, and Streptococcus nonhemolytic in one. All patients had initial arthroscopic debridement and lavage followed by 6 weeks of intravenous antibiotics. 
Two grafts were removed: one patient had delayed ligament reconstruction and the other had total knee arthroplasty. The remaining patients had full range of motion. In the group with the best result, two patients had Staphylococcus epidermidis and one had Staphylococcus aureus, which was treated 2 days after clinical symptoms began. The other two patients infected with Staphylococcus aureus had unsatisfactory results. Anterior cruciate ligament infection is rare, but diagnosable. When treated early with appropriate antibiotic therapy and arthroscopic debridement, four of six grafts were retained. If the infection does not respond rapidly to early therapy, then graft removal is an option. abstract_id: PUBMED:12750136 Treatment of postoperative anterior cruciate ligament infections with graft removal and early reimplantation. Background: Septic arthritis after arthroscopic anterior cruciate ligament reconstruction is rare, and the most appropriate treatment is unclear. Current recommendations are that, if the graft is removed, reimplantation should be delayed for 6 to 9 months. Hypothesis: Early removal of the graft with appropriate infection management followed by early reimplantation can lead to good results. Study Design: Uncontrolled retrospective review. Methods: Records of all patients who developed postoperative infection after anterior cruciate ligament reconstruction were reviewed. Four patients had early graft removal and appropriate infection management including 6 weeks of intravenous antibiotics followed by anterior cruciate ligament graft reimplantation within 6 weeks of completion of antibiotic therapy. Results: Follow-up at an average of 21 months (range, 14 to 31) showed that the patients treated with early reimplantation had full symmetric knee range of motion and no effusion. The average modified Lysholm score was 92.5. Radiographs demonstrated no joint-space narrowing or osteophyte formation. 
The 30-pound KT-1000 arthrometer side-to-side difference averaged 3 mm. Conclusion: Graft removal after confirmed anterior cruciate ligament graft infection and intravenous antibiotic administration followed by early graft reimplantation can give excellent results. abstract_id: PUBMED:34650284 Management algorithm for septic arthritis after anterior cruciate ligament reconstruction Objective: To summarize the experience in the diagnosis and management of septic arthritis after anterior cruciate ligament reconstruction. Methods: A retrospective review was conducted of all the arthroscopic anterior cruciate ligament reconstructions performed at the Department of Sports Medicine, Peking University Third Hospital between January 2001 and December 2020. In the study, 65 of 27 867 patients experienced postoperative septic arthritis. The incidence, presentation, laboratory results, treatment, and outcome of all the infected patients were analyzed. The experiences of diagnosis and management of septic arthritis after anterior cruciate ligament reconstruction were summarized. Results: A total of 27 867 anterior cruciate ligament reconstructions were performed at our department between January 2001 and December 2020. In the study, 65 (0.23%) patients were identified with postoperative septic arthritis. The most common symptoms of the infected patients were fever (38.7±0.5°C), knee swelling, pain, and restricted motion. The mean peripheral white blood cell count (WBC) was (9.2±2.6)×10⁹/L (range 4.2×10⁹/L-19.4×10⁹/L), with (72.5±6.3)% (range 54.9%-85.1%) polymorphonuclear neutrophils (N). The mean erythrocyte sedimentation rate (ESR) was (59.9±24.1) mm/h (range 9-108 mm/h), C-reactive protein (CRP) was (10.9±5.7) mg/dL (range 1.2-30.8 mg/dL), and fibrinogen (FIB) level was (7.0±1.6) g/L (range 3.7-10.8 g/L). All of the laboratory results were statistically higher in the infection group compared with the normal postoperative group (P<0.001).
The synovial white blood cell count (SWBC) of aspirated knee joint fluid was (45.0±29.8)×10⁹/L (range 7.1×10⁹/L-76.5×10⁹/L). Polymorphonuclear cell percentage (PMNC) was (90.27±7.86)% (range 60%-97%). In the study, 45 patients (69.2%) had positive aspirate cultures. Microbiology showed coagulase-negative Staphylococcus (CNS) and Staphylococcus aureus (SA) were the most common bacteria (34 and 7 cases, respectively). There were 26 methicillin-resistant Staphylococcus isolates. Both conservative (16 patients) and operative (49 patients) treatments were effective, but the conservative group had a longer recovery time (5.6 d vs. 1.6 d, P=0.042). Conclusion: Septic arthritis after arthroscopic anterior cruciate ligament reconstruction is a rare but potentially devastating complication. The correct diagnosis relies on synovial fluid analysis and bacterial culture. Our proposed treatment protocol is arthroscopic debridement and antibiotic therapy initiated as quickly as possible. abstract_id: PUBMED:17369557 Septic arthritis after arthroscopic anterior cruciate ligament reconstruction: a retrospective analysis of incidence, management and outcome. Background: Septic arthritis after arthroscopic anterior cruciate ligament reconstruction is a rare complication. In the literature, several different management strategies are proposed. Hypothesis: The graft can be retained during treatment of a septic arthritis after anterior cruciate ligament reconstruction. Study Design: Case series; Level of evidence, 4. Methods: A retrospective analysis of knee joint infections occurring after arthroscopically assisted anterior cruciate ligament reconstructions was conducted. Fifteen patients were treated for postoperative septic arthritis of the knee after anterior cruciate ligament reconstruction between 1996 and 2005. All patients underwent urgent extensive arthroscopic debridement (wash-out and synovectomy) with parenteral antibiotics, followed by oral antibiotics.
Repetitive wash-outs were performed if necessary. The average time at follow-up for our series was 58 months (range, 9-99 months). Results: Only 1 graft was removed during debridement because it was nonfunctional. All other patients retained their anterior cruciate ligament reconstruction. There was no reinfection. There were 2 traumatic reruptures. We evaluated 11 patients: in 6 patients the Lachman test showed a translation of more than 3 mm, but all patients had a firm endpoint and there was no subjective instability. Early signs of radiological degeneration were seen in 3 patients. The value for the Lysholm knee scoring scale was 83, on average, ranging from 57 to 100. Regarding the International Knee Documentation Committee score, 2 patients had a final evaluation of normal, 7 patients nearly normal, and 2 patients abnormal. Conclusion: The graft can be retained during treatment of septic arthritis after anterior cruciate ligament reconstruction. abstract_id: PUBMED:37020216 Vancomycin presoak reduces infection in anterior cruciate ligament reconstruction: a systematic review and meta-analysis. Purpose: To compare the effect of vancomycin presoak treatment of grafts during anterior cruciate ligament reconstruction on the incidence of postoperative infection or septic arthritis. Methods: Studies published before May 3, 2022 investigating vancomycin presoak of grafts during anterior cruciate ligament reconstruction were searched in the PubMed and Cochrane Central Register of Controlled Trials. Studies were screened, and data on the incidence of postoperative infection or septic arthritis were extracted and included in the analysis. Results: Thirteen studies were included for analysis after search screening, yielding a total of 31,150 participants for analysis, of whom 11,437 received graft vancomycin presoak treatment, and 19,713 did not receive treatment. 
Participants who received vancomycin treatment had significantly lower infection rates (0.09% versus 0.74%; OR 0.17; 95% CI 0.10, 0.30; P < 0.00001). Conclusion: Pre-soaking of the graft with vancomycin during ACL reconstruction reduced the incidence of postoperative infection and septic arthritis. abstract_id: PUBMED:29422445 Septic arthritis caused by Granulicatella adiacens after anterior cruciate ligament reconstruction. Septic arthritis after knee arthroscopy is an extremely rare condition. A rate of 0.15-0.84% cases of postoperative infection is estimated in several series. The arthroscopy procedure most frequently related with an infectious complication is anterior cruciate ligament (ACL) reconstruction, with a prevalence of 0.3-1.7% cases. Staphylococcus sp. is the pathogen most commonly cultured. We describe a case of septic arthritis after ACL reconstruction. We found no published case of septic arthritis caused by this microorganism after knee arthroscopy. It is probably the first case published in the literature. abstract_id: PUBMED:25727493 Functional Outcome and Graft Retention in Patients With Septic Arthritis After Anterior Cruciate Ligament Reconstruction: A Systematic Review. Purpose: To provide a comprehensive review of clinical and functional outcomes after treatment for septic arthritis after anterior cruciate ligament reconstruction. Methods: A systematic review of the literature was performed. Two reviewers assessed and confirmed the methodologic quality of each study. Studies that met the inclusion criteria were assessed for pertinent data, and when available, similar outcomes were combined to generate frequency-weighted means. Results: Nineteen studies met the inclusion and exclusion criteria for this review, reporting on a total of 203 infected knees. The mean age was 27.5 years and the mean length of follow-up was 44.2 months, with male patients comprising 88% of the population.
Hamstring and bone-patellar tendon-bone autografts were used in 63% and 33% of patients, respectively, with 78% of patients retaining their grafts. The studies reported mean flexion and extension deficits of 5.8° and 1.0°, respectively, and laxity testing showed a mean difference of 1.9 mm. The studies reported mean Lysholm, International Knee Documentation Committee, and Tegner scores of 82.1, 68.2, and 5.6, respectively. Of the patients, 83% reported an ability to return to activities of daily living whereas 67% reported a return to their preinjury level of athletics. Evidence of new degenerative changes was seen in 22% of patients. Conclusions: Septic arthritis after anterior cruciate ligament reconstruction remains a very infrequent event, affecting 0.5% of patients. On average, outcomes in these patients are comparable with those in patients in whom infection does not develop, including postoperative range of motion, residual instability, Lysholm scores, and return to preinjury level of activity. These patients do exhibit decreased International Knee Documentation Committee scores compared with patients without septic arthritis, however. The impact of this differential is not clear, but this scoring difference suggests that septic arthritis may be associated with more severe symptoms and reduced functional outcomes. In addition, there is limited evidence suggesting that septic arthritis leads to early degenerative changes found on imaging. Level Of Evidence: Level IV, systematic review of Level III and IV studies. abstract_id: PUBMED:24992054 Infection after arthroscopic anterior cruciate ligament reconstruction. Septic arthritis is a rare but potentially devastating complication of anterior cruciate ligament (ACL) reconstruction surgery. The purpose of this study was to provide an evidence-based summarization of the treatment and outcome of infection after ACL reconstruction with a pooled analysis of the reported cases. 
The authors conducted a systematic review of published studies that evaluated the outcome of septic arthritis after arthroscopic ACL reconstruction. A structured literature review of multiple databases referenced articles from 1950 to 2012. A total of 22,836 knees from 14 published studies were assessed. Postoperative septic arthritis occurred in 121 knees, with a pooled percentage of 0.5%. Mean duration of follow-up after ACL reconstruction was 53.6 months (range, 4-218 months). An average of 1.92 procedures (range, 1-5 procedures) were performed to eradicate the infection. The grafts were retained in 77% of cases. Postoperative intravenous antibiotics were used for at least 5 days (range, 5-90 days) after debridement. At final follow-up, mean postoperative Lysholm score was 80.2 (range, 23-100). No reinfection was observed in 121 patients. This study has helped to further elucidate the outcomes of infection after ACL reconstruction. Once an infection is encountered, culture-specific antibiotics and surgical joint irrigation with graft retention are recommended as initial treatment. Graft removal can be considered only for those infections resistant to initial treatment. Answer: Postoperative septic arthritis after anterior cruciate ligament (ACL) reconstruction is a rare but serious complication that can indeed affect the outcome of the surgery. The incidence of this complication is reported to be between 0.14% and 0.48% in different studies (PUBMED:10496570, PUBMED:11964649). When it occurs, it can lead to inferior clinical outcomes compared to patients who undergo uncomplicated ACL reconstruction, primarily due to damage to the articular cartilage from the infection (PUBMED:10496570). Early management of septic arthritis is crucial for a satisfactory outcome, and the treatment of choice is arthroscopic lavage with intravenous antibiotic treatment. Graft retention, when possible, is important for obtaining better functional results (PUBMED:35254803). 
In cases where the infection does not respond rapidly to early therapy, graft removal is an option, and early removal followed by appropriate infection management and early reimplantation can lead to good results (PUBMED:12750136). A systematic review found that on average, outcomes in patients with septic arthritis after ACL reconstruction are comparable with those in patients without infection, including postoperative range of motion, residual instability, and return to preinjury level of activity. However, these patients may exhibit decreased International Knee Documentation Committee scores, suggesting more severe symptoms and reduced functional outcomes. There is also limited evidence suggesting that septic arthritis may lead to early degenerative changes on imaging (PUBMED:25727493). In terms of graft retention, studies have shown that the graft can often be retained during treatment of septic arthritis, with no reinfection observed in a majority of cases and a reasonable functional outcome achieved (PUBMED:17369557). However, the functional outcome and graft retention rates can vary, with some studies reporting that 78% of patients retained their grafts and others reporting graft removal in resistant cases (PUBMED:24992054). Preventative measures, such as prophylactic intravenous antibiotics and vancomycin presoaking of grafts, have been recommended to decrease the possibility of postoperative infection (PUBMED:35254803, PUBMED:37020216). In conclusion, while septic arthritis after ACL reconstruction is rare, it can negatively impact the outcome of the surgery.
Instruction: Diagnosing HIV infection in patients presenting with glandular fever-like illness in primary care: are we missing primary HIV infection? Abstracts: abstract_id: PUBMED:22575070 Diagnosing HIV infection in patients presenting with glandular fever-like illness in primary care: are we missing primary HIV infection? Objectives: The aim of the study was to examine the prevalence of HIV infection in patients presenting in primary care with glandular fever (GF)-like illness. Methods: Samples from primary care submitted for a GF screen between April 2009 and June 2010 were identified. Samples without an HIV request were anonymized and retrospectively tested using a 4th-generation HIV antigen/antibody screening test. Reactive samples were further confirmed by an HIV antibody only test, with or without a p24 antigen assay. Antibody avidity testing based on the Recent HIV Infection Testing Algorithm (RITA) was used to identify individuals with evidence of recent acquisition (within 4-5 months). Results: Of 1046 GF screening requests, concomitant HIV requests were made in 119 patients. Excluding one known positive patient, 2.5% (three of 118) tested HIV positive. Forty-five (4.3%) had a subsequent HIV test through another consultation within 1 year; of these, 4.4% (two of 45) tested positive. Of the remaining 882 patients, 694 (78.7%) had samples available for unlinked anonymous HIV testing, of which six (0.9%) tested positive. The overall HIV prevalence was 1.3% (11 of 857), with 72.7% (eight of 11) of cases missed at initial primary care presentation. Four of the nine (44.4%) available positive samples had evidence of recent acquisition, with three (75.0%) missed at initial primary care presentation. Conclusion: Low levels of HIV testing in patients presenting in primary care with GF-like illness are resulting in a significant number of missed HIV and seroconversion diagnoses. 
Local policy should consider adopting an opt-out strategy to include HIV testing routinely within the GF-screening investigation panel. abstract_id: PUBMED:3146367 Clinical picture of primary HIV infection presenting as a glandular-fever-like illness. The clinical symptoms and signs were assessed in 20 consecutive patients developing infection with the human immunodeficiency virus (HIV). All were male homosexuals and all presented with a glandular-fever-like illness. Changes in laboratory values were compared with findings in 40 HIV negative male homosexual controls. In the 10 patients for whom date of exposure to the virus could be established the incubation period was 11-28 days (median 14). One or two days after the sudden onset of fever patients developed sore throat, lymphadenopathy, rash, lethargy, coated tongue, tonsillar hypertrophy, dry cough, headache, myalgia, conjunctivitis, vomiting, night sweats, nausea, diarrhoea, and palatal enanthema. Twelve patients had painful, shallow ulcers in the mouth or on the genitals or anus or as manifested by oesophageal symptoms; these ulcers may have been the site of entry of the virus. During the first week after the onset of symptoms mild leucopenia, thrombocytopenia, and increased numbers of banded neutrophils were detected (p less than 0.0005). The mean duration of acute illness was 12.7 days (range 5-44). All patients remained healthy during a mean follow up period of 2.5 years. Heightened awareness of the typical clinical picture in patients developing primary HIV infection will alert the physician at an early stage and so aid prompt diagnosis and help contain the epidemic spread of AIDS. abstract_id: PUBMED:21995931 Is the addition of a standard HIV educational comment to virology laboratory reports effective in changing requesting behaviour? Background: It is not known whether the addition of general educational comments to virology laboratory reports can influence the requesting behaviour of practitioners. 
Objectives: To establish if there is any change in requesting behaviour after the addition of a standard comment to virology laboratory reports highlighting the need to include HIV testing when investigating patients presenting with a glandular fever (GF)-like illness. Study Design: A standard comment to encourage inclusion of HIV testing was added to all GF screening reports from April 2010. The proportion of GF screening samples with concomitant HIV test requests before and after the introduction of the standard comment was compared over a 1-year period. Results: A significant increase in concomitant HIV requests from 9.5% to 19.6% on GF screening samples from primary care practitioners was observed after the addition of the standard comment (p<0.0000001). This effect peaked at 5 months and although it waned, requests at one year were still higher than at baseline. Conclusions: Addition of a general HIV educational comment to virology laboratory reports is effective in changing requesting behaviour. abstract_id: PUBMED:8746338 Mucocutaneous manifestations in 22 consecutive cases of primary HIV-1 infection. Twenty-two consecutive patients presenting with symptomatic human immunodeficiency virus 1 (HIV-1) seroconversion were studied. Most of the patients had a glandular fever-like illness. All patients had fever and pharyngitis, and eight of them also suffered from ulcers of the oral, genital or anal mucosa. Uniform skin eruptions were observed in 17 of the 22 patients. The exanthem consisted of varying numbers of macular or maculopapular lesions that were oval or rounded in shape, ranging from a few millimetres to 1 cm in diameter. The lesions were distributed on the upper thorax in all cases, and were particularly profuse in the collar region. The face, forehead and scalp were involved in most cases, but the eruption was sparse or absent at the periphery of the extremities. In the majority of patients, the exanthem appeared after 2 or 3 days of fever.
The exanthem developed during the first day, persisted for 5-8 days, and then cleared concurrently with the general recovery of the patients. Histopathological studies of skin punch biopsy specimens from four patients showed a sparse lymphocytic cell infiltrate distributed around vessels of the dermal superficial plexus. The infiltrates predominantly consisted of equally represented T-helper/inducer and T-suppressor/cytotoxic cells. A vacuolar aberration of basal layer cells was found in two of the four cases studied histologically. The microscopic findings correspond to the histopathological patterns seen in toxicodermia and in the interface dermatitis of morbilliform viral exanthems. The exanthem is a frequent and characteristic sign of primary HIV infection, which is further indicated if mucosal ulcers are present. abstract_id: PUBMED:2569901 Clinical course of primary HIV infection: consequences for subsequent course of infection. Objective: To investigate the impact of the clinical course of the primary HIV infection on the subsequent course of the infection. Design: Prospective documenting of seroconversion, follow up at six month intervals, and analysis of disease progression by life tables. Patients: 86 men in whom seroconversion occurred within 12 months. Primary Outcome Measure: Progression of HIV infection, defined as CD4 lymphocyte count less than 0.5 × 10^9/l, recurrence of HIV antigenaemia, or progression to Centers for Disease Control group IV. Main Results: Median follow up was 670 (range 45-1506) days. An acute illness like glandular fever occurred in 46 (53%) subjects. The three-year progression rate to Centers for Disease Control group IV was 78% for those who had longlasting illnesses (duration greater than or equal to 14 days) during seroconversion as compared with 10% for those who were free of symptoms or had mild illness. All six patients who developed AIDS had had longlasting primary illnesses.
Three-year progression rates to a CD4 lymphocyte count less than 0.5 × 10^9/l and to recurrence of HIV antigenaemia were significantly higher for those who had longlasting primary illnesses than those who had no symptoms or mild illness (75% v 42% and 55% v 14%, respectively). Conclusion: The course of primary infection may determine the subsequent course of the infection. abstract_id: PUBMED:12487194 Hepatic encephalopathy in primary human immunodeficiency virus type 1 (HIV-1) infection. Primary infection of human immunodeficiency virus type 1 (HIV-1) is occasionally associated with common cold-like symptoms, and rarely with a self-limited illness resembling infectious mononucleosis. We report a 32-year-old man who presented with an infectious mononucleosis-like blood picture on admission. Five days after admission he developed hepatic encephalopathy, which was ameliorated by administration of bolus corticosteroid. Based on the results of serologic studies, we diagnosed that he had primary HIV-1 infection. To our knowledge, this is the first published report of hepatic encephalopathy as a clinical manifestation of primary HIV-1 infection. abstract_id: PUBMED:7819891 Does symptomatic primary HIV-1 infection accelerate progression to CDC stage IV disease, CD4 count below 200 × 10^6/l, AIDS, and death from AIDS? Objective: To investigate the prognostic significance of symptomatic primary HIV-1 infection. Design: Prospective study of homosexual men seroconverting to HIV in 1985 and 1986. Patients were followed up at least three times yearly with clinical examinations and T cell subset determinations for an average of 7.2 years. Setting: Research project centred on attenders for treatment and screening for HIV at the Karolinska Institute, Stockholm. Subjects: 19 patients presenting with a glandular-fever-like illness associated with seroconversion to HIV and 29 asymptomatic seroconverters.
Main Outcome Measures: Progression to Centers for Disease Control and Prevention stage IV disease, CD4 cell count below 200 × 10^6/l, AIDS, and death from AIDS. Results: Symptomatic seroconverters were significantly more likely to develop Centers for Disease Control and Prevention stage IV disease (95% v 66%), CD4 cell counts below 200 × 10^6/l (84% v 55%), and AIDS (58% v 28%) and die of AIDS (53% v 7%). Conclusion: A glandular-fever-like illness associated with seroconversion to HIV-1 predicts accelerated progression to AIDS and other HIV-related diseases. abstract_id: PUBMED:2364536 Lymphocytic alveolitis in primary HIV infection. Primary infection with the human immunodeficiency virus (HIV-1) has been associated with a self-limited illness resembling acute infectious mononucleosis. Pulmonary manifestations have been notably absent in published reports. The authors describe a 28-year-old homosexual male who presented with primary HIV-1 infection associated with CD8+ lymphocytic alveolitis. Diagnosis was delayed because HIV antibody was not detected by the Abbott ELISA, although the same and subsequent specimens were later found to be positive by Genetic Systems' ELISA and Western blot analysis. Lymphocytic alveolitis must be added to the expanding clinical spectrum of acute HIV-1 infection. The time to detection of seroconversion may vary with different immunoassays. abstract_id: PUBMED:17205433 Multiple organ failure during primary HIV infection. The appearance of primary HIV infection ranges from an asymptomatic presentation to a symptomatic illness resembling infectious mononucleosis. Severe unusual presentations include acute myopericarditis, renal failure, and opportunistic infections such as esophageal candidiasis, cytomegalovirus infection, and Pneumocystis jirovecii pneumonia. We report a case of multiple organ failure during primary HIV infection. abstract_id: PUBMED:29488700 Monitoring HIV-indicator condition guided HIV testing in Estonia.
Objectives: The aim of the study was to assess indicator condition (IC) guided HIV testing in Estonia from 2012-2015. Methods: We used Estonian Health Insurance Fund (EHIF) data. EHIF is the core purchaser of health care services in Estonia, covering health care costs for insured people (94% of the total population). After health care services' provision, the provider sends an invoice to EHIF, which includes patient information (e.g. age, gender, diagnoses based on ICD-10) and services provided (e.g. what tests were performed). Results: Among the ICs analysed, the highest proportion of patients tested was among those presenting with infectious mononucleosis-like illness (27-33% of patients) and viral hepatitis (28-32%), the lowest proportion of patients tested was among those presenting with herpes zoster (4-5%) and pneumonia (4-8%). Women were tested somewhat less than men, especially in cases of sexually transmitted infections (9-13% and 18-21%, respectively). Conclusions: Our data shows that IC-guided HIV testing rates are low in Estonia. Therefore, it is critical to follow Estonian HIV testing guidelines, which recommend IC-guided testing. In general, health insurance data can be used to monitor IC-guided HIV testing. Answer: Yes, diagnosing HIV infection in patients presenting with glandular fever-like illness in primary care may result in missed primary HIV infections. A study examining the prevalence of HIV in patients presenting in primary care with glandular fever (GF)-like illness found that low levels of HIV testing are leading to a significant number of missed HIV and seroconversion diagnoses. The overall HIV prevalence in the study was 1.3%, with 72.7% of cases missed at initial primary care presentation. Among the positive samples, 44.4% had evidence of recent acquisition, and 75.0% of these were missed at initial presentation (PUBMED:22575070). 
This is supported by another study that reported a clinical picture of primary HIV infection presenting as a glandular-fever-like illness, where all patients had fever and pharyngitis, among other symptoms. The study emphasized the importance of heightened awareness of the typical clinical picture in patients developing primary HIV infection to aid prompt diagnosis (PUBMED:3146367). Furthermore, the addition of a standard HIV educational comment to virology laboratory reports was found to be effective in changing requesting behavior, with a significant increase in concomitant HIV requests observed after the addition of the comment (PUBMED:21995931). This suggests that educational interventions can improve the rate of HIV testing in primary care when GF-like symptoms are present. The missed diagnosis of primary HIV infection is a concern because the clinical course of primary infection may determine the subsequent course of the infection, with symptomatic primary HIV-1 infection predicting accelerated progression to AIDS and other HIV-related diseases (PUBMED:7819891). In conclusion, primary HIV infection can present with symptoms similar to glandular fever, and without routine HIV testing, primary HIV infection may be missed in primary care settings. Implementing strategies such as opt-out HIV testing and educational comments on laboratory reports could improve diagnosis rates and early detection of HIV during the primary infection stage.
Instruction: Wasted, overdosed, or beyond saving--to act or not to act? Abstracts: abstract_id: PUBMED:25767558 Flexible ACT & Resource-group ACT: Different Working Procedures Which Can Supplement and Strengthen Each Other. A Response. This article is a response to Nordén and Norlander's 'Absence of Positive Results for Flexible Assertive Community Treatment. What is the next approach?'[1], in which they assert that 'at present [there is] no evidence for Flexible ACT and… that RACT might be able to provide new impulses and new vitality to the treatment mode of ACT'. We question their analyses and conclusions. We clarify Flexible ACT, referring to the Flexible Assertive Community Treatment Manual (van Veldhuizen, 2013) [2] to rectify misconceptions. We discuss Nordén and Norlander's interpretation of research on Flexible ACT. The fact that too little research has been done and that there are insufficient positive results cannot serve as a reason to propagate RACT. However, the Resource Group method does provide inspiration for working with clients to involve their networks more effectively in Flexible ACT. abstract_id: PUBMED:37432509 Recent Amendments to the Australian Privacy Act. The recently passed Privacy Legislation Amendment (Enforcement and Other Measures) Act 2022 (Cth) introduced important changes to the Australian Privacy Act 1988 (Cth) which increase penalties for serious and repeated interferences with privacy and strengthen the investigative and enforcement powers of the Information Commissioner. The amendments were made subsequent to a number of high profile data breaches and represent the first set of changes to the Privacy Act following the review of the Act commenced by the Attorney-General in October 2020. The submissions made to the review emphasized the need for more effective enforcement mechanisms to increase individuals' control over their personal information and as a form of deterrence.
This article reviews the recent amendments to the Privacy Act and explains their effect. It comments upon the relevance of the amendments for health and medical data and other data collected in the context of healthcare, and refers to the Attorney-General's Department's review of the Privacy Act regarding other proposals relating to enforcement which have not as yet been put into effect in legislation. abstract_id: PUBMED:28870687 ACT in the treatment of sexual addiction: About a clinical case While referring to the theoretical and practical model of R. Harris (2012), this article proposes the use of acceptance and commitment therapy (ACT) in the treatment of sexual addiction. Because dependent behaviors begin and may be maintained by refusal of unpleasant emotional states (that is, addicted individuals tend to fight or flee negative emotions by replacing potentially painful or otherwise unpleasant states with behaviors providing pleasure), the ACT framework appears as an ally in the treatment of addictive behaviors, in that its goal is a fit between patients' behaviors and their values (instead of avoidance of unpleasant states). Within a clinical case of a patient suffering from sexual addiction, we show that using ACT helps (i) to decrease the intensity of sexual addiction (as assessed through the QMICA-Se in a pre-post design), (ii) to develop psychological flexibility (AAQ-II), and (iii) to decrease the patient's anxious and depressive symptoms (HADS). Moreover, the patient reports feeling present in the here and now, and performing activities related to his own values. Acceptance of this limited form of sexuality now allows him to calmly consider the possibility of meeting a new person. ACT thus appears to provide clinical benefits in the treatment of sexual addictions. abstract_id: PUBMED:35818772 In pursuit of equity in cancer care: Moving beyond the Affordable Care Act.
Although Medicaid Expansion under the Patient Protection and Affordable Care Act (ACA) has been associated with many improvements for patients with cancer, Snyder et al. provide evidence demonstrating the persistence of racial disparities in cancer. This Editorial describes why insurance coverage alone does not ensure access to health care, highlights various manifestations of structural racism that constitute barriers to access beyond the direct costs of care, and calls for not just equality, but equity, in cancer care. abstract_id: PUBMED:25130869 Wasted, overdosed, or beyond saving--to act or not to act? Heroin users' views, assessments, and responses to witnessed overdoses in Malmö, Sweden. Background: Overdose is a significant cause of death among heroin users. Frequently, other heroin users are present when an overdose occurs, which means the victim's life could be saved. There is a lack of studies that, based on heroin users' own stories, examine their views, assessments, and responses to witnessed overdoses. Methods: The study is based on qualitative interviews with thirty-five heroin users who witnessed someone else's overdose. Results: The heroin users generally had a positive attitude towards assisting peers who had overdosed. A number of factors and circumstances, however, contribute to witnesses often experiencing resistance to or ambivalence about responding. The witness's own high, the difficulty in assessing the seriousness of the situation, an unwillingness to disturb someone else's high, uncertainty about the motive behind the overdose and whether the victim does or does not want assistance, as well as fear of police involvement, were common factors that acted as barriers to adequate responses in overdose situations. Conclusion: The fact that being high makes it difficult to respond to overdoses using traditional methods argues for simpler and more effective response techniques. This can include intranasal naloxone programs for heroin users.
The findings regarding the uncertainty about the intention of the overdose victim and the sensitivity to the experience of a good high argue for more up-front communication and discussion amongst using peers so that they can make their intentions clear to each other. Issues like this can be addressed in overdose education interventions. Overdose prevention measures also need to address the fact that fear of the police acts as a barrier to calling emergency services. abstract_id: PUBMED:29251440 Building an ACT-R Reader for Eye-Tracking Corpus Data. Cognitive architectures have often been applied to data from individual experiments. In this paper, I develop an ACT-R reader that can model a much larger set of data, eye-tracking corpus data. It is shown that the resulting model has a good fit to the data for the considered low-level processes. Unlike previous related works (most prominently, Engelmann, Vasishth, Engbert & Kliegl), the model achieves the fit by estimating free parameters of ACT-R using Bayesian estimation and Markov-Chain Monte Carlo (MCMC) techniques, rather than by relying on the mix of manual selection + default values. The method used in the paper is generalizable beyond this particular model and data set and could be used on other ACT-R models. abstract_id: PUBMED:15099245 Oregon versus Ashcroft: pain relief, physician-assisted suicide, and the Controlled Substances Act. Objective: Late in 2001, the State of Oregon filed suit against Attorney General John Ashcroft, seeking to halt his recent directive that physicians who comply with the Oregon Death with Dignity Act by writing a lethal prescription for a controlled substance should be prosecuted for violating the federal Controlled Substances Act (CSA). This special article reviews the history of the series of challenges to the Oregon Act since its initial adoption in 1994, with particular consideration of the arguments on both sides of Oregon v.
Ashcroft and the disposition of the case by the district court. Design: The article utilizes an historical review of the Oregon Act, including legal and political challenges to it, as well as discussion of the 3 years of data on the experience with legalized physician-assisted suicide in Oregon, and analysis of the legal issues in the current litigation. Conclusions: The federal district court concluded that the Attorney General's interpretation of certain provisions of the CSA so as to preclude the writing of a lethal prescription where otherwise permitted by state law as a legitimate medical practice was inconsistent with the CSA and, therefore, beyond the scope of his authority. The Oregon Act will continue in force while the Attorney General's appeal of the district court ruling is considered by the Ninth Circuit Court of Appeals. abstract_id: PUBMED:36662139 The ACT Predicts Academic Performance-But Why? Scores on the ACT college entrance exam predict college grades to a statistically and practically significant degree, but what explains this predictive validity? The most obvious possibility is general intelligence-or psychometric "g". However, inconsistent with this hypothesis, even when independent measures of g are statistically controlled, ACT scores still positively predict college grades. Here, in a study of 182 students enrolled in two Introductory Psychology courses, we tested whether pre-course knowledge, motivation, interest, and/or personality characteristics such as grit and self-control could explain the relationship between ACT and course performance after controlling for g. Surprisingly, none could. We speculate about what other factors might explain the robust relationship between ACT scores and academic performance. abstract_id: PUBMED:21407970 A decade with the mental health act, 1987. The Mental Health Act, 1987 came into force in 1993. 
Mental Health Authorities that were created by this Act are useful, but the present situation of not having Government Mental Hospitals under the scrutiny of these authorities is a shortcoming. The high capital needed for upgradation of Government mental hospitals is likely to be found only with the intervention of Mental Health Authorities. Creation of a funding agency of the Government of India is also needed. Denying profoundly retarded persons access to a psychiatric hospital is a hard situation. Psychiatric patients in general hospitals having to face the hassles of mental hospital admission is against the spirit of the Act and needs to be remedied. Courts directly determining the presence of psychiatric illness in persons is not serving the end of justice; they should do this on the basis of evidence. Several avoidable hardships that may be caused by having the Act in its present form have to be corrected. This could be done by amendment of the Act in certain cases, by approaching the High Court in certain others, and by thoughtfully framing the State Mental Health Rules in quite a few other situations. The success of the Mental Health Act, 1987 is in its effectiveness to ensure basic human rights of mental patients. A set of Mental Health Rules that incorporates adequate provisions to protect the human rights of patients in all respects can go a long way to strengthen the Mental Health Act.
The enactment of both acts will provide a more comprehensive approach to advance directive planning. The article advocates for passage of the Uniform Health Care Decisions Act in all 50 states. Implications for social work practice are discussed from a research, community action, and clinical perspective. Answer: The question "Wasted, overdosed, or beyond saving—to act or not to act?" refers to the dilemma faced by heroin users when witnessing an overdose. According to the study based on qualitative interviews with thirty-five heroin users who witnessed someone else's overdose, these individuals generally have a positive attitude towards assisting peers who have overdosed. However, several factors and circumstances can lead to resistance or ambivalence about responding. These include the witness's own state of intoxication, difficulty in assessing the seriousness of the situation, reluctance to disturb someone else's high, uncertainty about the overdose victim's intentions, and fear of police involvement. The study suggests that simpler and more effective response techniques, such as intranasal naloxone programs, could be beneficial. It also highlights the need for more upfront communication among peers to clarify intentions and for overdose education interventions that address these issues. Additionally, overdose prevention measures should consider the barrier that fear of police creates in calling emergency services (PUBMED:25130869).
Instruction: Does the definition of ADHD affect heritability? Abstracts: abstract_id: PUBMED:35102766 Heritability and Clinical Characteristics of Neuropsychological Profiles in Youth With and Without Elevated ADHD Symptoms. Objective: In the last decade, there has been an increase in research that aims to parse heterogeneity in attention deficit hyperactivity disorder (ADHD). The current study tests heritability of latent class neuropsychological subtypes. Method: Latent class analysis was used to derive subtypes in a sample of school-age twins (N = 2,564) enriched for elevated ADHD symptoms. Results: Five neuropsychological profiles replicated across twin 1 and twin 2 datasets. Latent class membership was heritable overall, but heritability varied by profile and was lower than heritability of ADHD status. Variability in neuropsychological performance across domains was the strongest predictor of elevated ADHD symptoms. Neuropsychological profiles showed distinct associations with age, psychiatric symptoms and reading ability. Conclusion: Neuropsychological profiles are associated with unique neurocognitive presentations, but are not strong candidate endophenotypes for ADHD diagnosis. abstract_id: PUBMED:36384349 The Heritability of ADHD in Children of ADHD Parents: A Post-hoc Analysis of Longitudinal Data. Objective: A growing literature suggests attention-deficit/hyperactivity disorder (ADHD) is a heritable disorder. We evaluated children at risk for ADHD by virtue of having parents with ADHD and compared them with children of parents without ADHD to assess the degree of heritability of ADHD. Method: The sample for this study was derived from three longitudinal studies that tracked families with various disorders, including ADHD. Children were stratified based on presence of parental ADHD, and clinical assessments were taken to evaluate presence of ADHD and related psychiatric and functional outcomes in children. 
Results: Children with parental ADHD had significantly more full or subthreshold psychiatric disorders (including ADHD) as well as functional impairments compared to children without parental ADHD. Conclusion: Our findings suggest that offspring of parents with ADHD are at significant risk for ADHD and its associated psychiatric, cognitive, and educational impairments. These findings aid in identifying early manifestations of ADHD in young children at risk. abstract_id: PUBMED:23503812 Positive and Negative Affect in Clinic-Referred Youth With ADHD. Objective: To examine self-reported positive affect (PA) and negative affect (NA) among youth with ADHD (only and comorbid) and other non-ADHD-referred youth in an ethnically diverse clinical sample. Method: Semi-structured interviews identified 80 pure ADHD, 284 ADHD plus one or more comorbidities, and 730 non-ADHD youth (e.g., other diagnoses or no diagnosis). The Positive and Negative Affect Scale-Children (PANAS-C) was used to assess affective states. Results: Even after controlling for the influence of potential confounds, youth with only ADHD reported higher PA and lower NA than other clinic-referred youth. The ADHD-comorbid group reported higher PA than the "non-ADHD" group, but these groups did not differ on level of NA. ADHD subtype did not influence results. Conclusion: Among clinic-referred youth, ADHD is associated with higher levels of PA and when there are no comorbid disorders, lower levels of NA. abstract_id: PUBMED:27923146 Behaviors associated with negative affect in the friendships of children with ADHD: An exploratory study. Our objective was to identify behaviors and contextual situations associated with negative affect observed in the interactions of children with and without ADHD and their real-life friends. We expected negative affect to be linked to rule violations and disagreements about the choice of games. Loss of game was associated with episodes of negative affect in a structured game. 
Negative appraisal of friend's ability was most frequently associated with negative affect during unstructured free play. Comparison children expressed greater frustration regarding their own abilities, whereas children with ADHD commented more frequently about the inabilities of their friends. abstract_id: PUBMED:29589797 ADHD Endophenotypes in Caribbean Families. Objective: The aim of this study is to contrast the genetics of neuropsychological tasks in individuals from nuclear families clustering ADHD in a Caribbean community. Method: We recruited and clinically characterized 408 individuals using an extensive battery of neuropsychological tasks. The genetic variance underpinning these tasks was estimated by heritability. A predictive framework for ADHD diagnosis was derived using these tasks. Results: We found that individuals with ADHD differed from controls in tasks of mental control, visuospatial ability, visuoverbal memory, phonological and verbal fluency, verbal and semantic fluency, cognitive flexibility, and cognitive ability. Among them, tasks of mental control, visuoverbal memory, phonological fluency, semantic verbal fluency, and intelligence had a significant heritability. A predictive model of ADHD diagnosis using these endophenotypes yields remarkable classification rate, sensitivity, specificity, and precision values (above 80%). Conclusion: We have dissected new cognitive endophenotypes in ADHD that can be suitable to assess the neurobiological and genetic basis of ADHD. abstract_id: PUBMED:36238202 Early development of negative and positive affect: Implications for ADHD symptomatology across three birth cohorts. High levels of early emotionality (of either negative or positive valence) are hypothesized to be important precursors to early psychopathology, with attention-deficit/hyperactivity disorder (ADHD) a prime early target. 
The positive and negative affect domains are prime examples of Research Domain Criteria (RDoC) concepts that may enrich a multilevel mechanistic map of psychopathology risk. Utilizing both variable-centered and person-centered approaches, the current study examined whether levels and trajectories of infant negative and positive emotionality, considered either in isolation or together, predicted children's ADHD symptoms at 4 to 8 years of age. In variable-centered analyses, higher levels of infant negative affect (as early as 3 months of age) were associated with childhood ADHD symptoms. Findings for positive affect failed to reach statistical threshold. Results from person-centered trajectory analyses suggest that additional information is gained by simultaneously considering the trajectories of positive and negative emotionality. Specifically, only when exhibiting moderate, stable or low levels of positive affect did negative affect and its trajectory relate to child ADHD symptoms. These findings add to a growing literature that suggests that infant negative emotionality is a promising early life marker of future ADHD risk and suggest secondarily that moderation by positive affectivity warrants more consideration. abstract_id: PUBMED:35674983 The role of moment-to-moment dynamics of perceived stress and negative affect in co-occurring ADHD and internalising symptoms. A maladaptive response to stress in individuals with high ADHD traits may be a key factor explaining co-occurring ADHD symptoms and internalising problems. The current study investigates whether between-person differences in ADHD traits are associated with differences in the within-person moment-to-moment coupling of stress and negative affect; and whether these can explain between-person differences in internalising problems (N = 262, median age 20).
Results of a dynamic structural equation model indicated that between-person differences in ADHD traits significantly moderated the daily life coupling between stress and negative affect. Further, higher ADHD traits were associated with stronger stress carry-over and higher mean levels of negative affect. Stress carry-over and mean levels of negative affect mediated the association between ADHD traits and internalising problems. abstract_id: PUBMED:24628222 Attention deficit hyperactivity disorder (ADHD): an affect-processing and thought disorder? In the literature on child and adolescent psychoanalysis, attention deficit hyperactivity disorder (ADHD) is described as a complex syndrome with wide-ranging psychodynamic features. Broadly speaking, the disorder is divided into three categories: 1. a disorder in early object relations leading to the development of a maniform defence organization in which object-loss anxieties and depressed affects are not worked through via symbolization but are organized in a body-near manner; 2. a triangulation disorder in which the cathexis of the paternal position is not stable; structures providing little support alternate with excessive arousal, affect regulation is restricted; 3. current emotional stress or a traumatic experience. I suggest taking a fresh look at ADHD from a psychoanalytic vantage point. With respect to the phenomenology of the disorder, the conflict-dynamic approach should be supplemented by a perspective regarding deficits in α-function as constitutive for ADHD. These deficits cause affect-processing and thought disorders compensated for (though not fully) by the symptomatology. At a secondary level, a vicious circle develops through the mutual reinforcement of defective processing of sense data and affects into potential thought content, on the one hand, and secondary, largely narcissistic defence processes on the other.
These considerations have major relevance for the improved understanding of ADHD and for psychoanalytic technique. abstract_id: PUBMED:11128330 Does the definition of ADHD affect heritability? Objective: A twin study design was used to examine the genetic validity of attention-deficit hyperactivity disorder (ADHD)-related phenotypes. Method: Questionnaires covering ADHD symptoms were sent to the families of 2,846 school-age twins. Parent-rated symptoms were obtained for 2,082 twin pairs and teacher-rated symptoms were available for 1,470 twin pairs. Results: Broadly defined parent-rated, teacher-rated, and "pervasive" (both parent- and teacher-rated) ADHD categories were found to be highly heritable. Significant shared environmental effects were also detected for teacher-rated ADHD. A common genetic factor was found to have a modest influence on both parent- and teacher-rated symptom scores and categories, but additional genetic and environmental influences were also found for teacher-rated ADHD. Consistent with previous findings, ADHD symptom scores were again found to be highly heritable. Maternal contrast effects were found for the Rutter A scale items but could not be detected for the DuPaul ADHD rating scale. Conclusions: Broadly defined pervasive ADHD appears to be as heritable as ADHD behaviors defined by maternal reports alone. A common genetic factor influences maternally rated and teacher-rated ADHD but does not account for all of the genetic variance for teacher-rated ADHD. ADHD symptom scores are highly heritable, and maternal contrast effects appear to vary for different measures. abstract_id: PUBMED:35151951 The dynamical association between physical activity and affect in the daily life of individuals with ADHD. Exercise interventions in mental disorders have evidenced a mood-enhancing effect.
However, the association between physical activity and affect in everyday life has not been investigated in adult individuals with ADHD, despite being important features of this disorder. As physical activity and affect are dynamic processes in nature, assessing those in everyday life with e-diaries and wearables has become the gold standard. Thus, we used an mHealth approach to prospectively assess physical activity and affect processes in individuals with ADHD and controls aged 14-45 years. Participants wore accelerometers across a four-day period and reported their affect via e-diaries twelve times daily. We used multilevel models to identify the within-subject effects of physical activity on positive and negative affect. We split our sample into three groups: 1. individuals with ADHD who were predominantly inattentive (n = 48), 2. individuals with ADHD having a combined presentation (i.e., being inattentive and hyperactive; n = 95), and 3. controls (n = 42). Our analyses revealed a significant cross-level interaction (F(2, 135.072)=5.733, p = 0.004) of physical activity and group on positive affect. In detail, all groups showed a positive association between physical activity and positive affect. Individuals with a combined presentation significantly showed the steepest slope of physical activity on positive affect (slope_inattentive=0.005, p < 0.001; slope_combined=0.009, p < 0.001; slope_controls=0.004, p = 0.008). Our analyses on negative affect revealed a negative association only in the individuals with a combined presentation (slope=-0.003; p = 0.001). Whether this specifically pronounced association in individuals being more hyperactive might be a mechanism reinforcing hyperactivity needs to be empirically clarified in future studies. Answer: Yes, the definition of ADHD does affect heritability.
A twin study design examining the genetic validity of ADHD-related phenotypes found that broadly defined parent-rated, teacher-rated, and "pervasive" ADHD categories were highly heritable. However, the study also detected significant shared environmental effects for teacher-rated ADHD and found that a common genetic factor had a modest influence on both parent- and teacher-rated symptom scores and categories. Additional genetic and environmental influences were also found for teacher-rated ADHD. The study concluded that broadly defined pervasive ADHD appears to be as heritable as ADHD behaviors defined by maternal reports alone, and that a common genetic factor influences maternally rated and teacher-rated ADHD but does not account for all of the genetic variance for teacher-rated ADHD (PUBMED:11128330).
Instruction: Is hippocampal volume a good marker to differentiate Alzheimer's disease from frontotemporal dementia? Abstracts: abstract_id: PUBMED:23542864 Is hippocampal volume a good marker to differentiate Alzheimer's disease from frontotemporal dementia? Background: Previous studies analyzed the ability of hippocampal volumes (HV) to differentiate Alzheimer's disease (AD) from frontotemporal dementia (FTD). However, these studies did not include patients selected according to clinico-biological criteria, using pathophysiological biomarkers. Objective: To analyze the effectiveness of hippocampal volumetric measures to distinguish AD from behavioral variant FTD (bvFTD), using strict inclusion criteria based on clinical and pathophysiological markers. Methods: Seventy-two participants were included: 31 AD patients with predominant and progressive episodic memory deficits associated with typical AD cerebrospinal fluid (CSF) profile and/or positive amyloid imaging (PET with 11C-labeled Pittsburgh Compound B [PiB]), 26 bvFTD patients diagnosed according to consensual clinical criteria and with no AD CSF profile, and 15 healthy controls without amyloid retention on PiB-PET exam. HV were segmented with an automated method and were normalized to total intracranial volume (nHV). Results: Significant reductions in HV were found in both AD and bvFTD patients compared with controls, but there were no significant difference between AD and bvFTD patients. Mean nHV distinguished normal controls from either AD or bvFTD with high sensitivity (80.6% and 76.9%, respectively) and specificity (93.3% for both), but it was inefficient in differentiating AD from bvFTD (9.7% specificity). There was no difference in the clinical and neuropsychological profiles according to HV in bvFTD and AD patients. Conclusions: When considered alone, measures of HV are not good markers to differentiate AD from bvFTD. 
Hippocampal sclerosis associated with FTD may explain the high degree of overlap in nHV between both groups. abstract_id: PUBMED:22414571 Hippocampal shape analysis in Alzheimer's disease and frontotemporal lobar degeneration subtypes. Hippocampal pathology is central to Alzheimer's disease (AD) and other forms of dementia such as frontotemporal lobar degeneration (FTLD). Autopsy studies have shown that certain hippocampal subfields are more vulnerable than others to AD and FTLD pathology, in particular the subiculum and cornu ammonis 1 (CA1). We conducted shape analysis of hippocampi segmented from structural T1 MRI images on clinically diagnosed dementia patients and controls. The subjects included 19 AD and 35 FTLD patients [13 frontotemporal dementia (FTD), 13 semantic dementia (SD), and 9 progressive nonfluent aphasia (PNFA)] and 21 controls. Compared to controls, SD displayed severe atrophy of the whole left hippocampus. PNFA and FTD also displayed atrophy on the left side, restricted to the hippocampal head in FTD. Finally, AD displayed most atrophy in left hippocampal body with relative sparing of the hippocampal head. Consistent with neuropathological studies, most atrophic deformation was found in CA1 and subiculum areas in FTLD and AD. abstract_id: PUBMED:24363834 Hippocampal sclerosis dementia: An amnesic variant of frontotemporal degeneration. Objective: To describe characteristics of hippocampal sclerosis dementia. Methods: Convenience sample of Hippocampal sclerosis dementia (HSD) recruited from the Johns Hopkins University Brain Resource Center. Twenty-four cases with post-mortem pathological diagnosis of hippocampal sclerosis dementia were reviewed for clinical characterization. Results: The cases showed atrophy and neuronal loss localized to the hippocampus, amygdala and entorhinal cortex. The majority (79.2%) had amnesia at illness onset, and many (54.2%) showed abnormal conduct and psychiatric disorder.
Nearly 42% presented with an amnesic state, and 37.5% presented with amnesia plus abnormal conduct and psychiatric disorder. All eventually developed a behavioral or psychiatric disorder. Disorientation, executive dysfunction, aphasia, agnosia and apraxia were uncommon at onset. Alzheimer disease (AD) was the initial clinical diagnosis in 89% and the final clinical diagnosis in 75%. Diagnosis of frontotemporal dementia (FTD) was uncommon (seen in 8%). Conclusion: HSD shows pathological characteristics of FTD and clinical features that mimic AD and overlap with FTD. The findings, placed in the context of earlier work, support the proposition that HSD belongs to the FTD family, where it may be identified as an amnesic variant. abstract_id: PUBMED:24028428 A comparative study of the dentate gyrus in hippocampal sclerosis in epilepsy and dementia. Aims: Hippocampal sclerosis (HS) is long-recognized in association with epilepsy (HSE) and more recently in the context of cognitive decline or dementia in the elderly (HSD), in some cases as a component of neurodegenerative diseases, including Alzheimer's disease (AD) and fronto-temporal lobe dementia (FTLD). There is an increased risk of seizures in AD and spontaneous epileptiform discharges in the dentate gyrus of transgenic AD models; epilepsy can be associated with an age-accelerated increase in AD-type pathology and cognitive decline. The convergence between these disease processes could be related to hippocampal pathology. HSE typically shows re-organization of both excitatory and inhibitory neuronal networks in the dentate gyrus, and is considered to be relevant to hippocampal excitability. We sought to compare the pathology of HSE and HSD, focusing on re-organization in the dentate gyrus.
Methods: In nine post-mortem cases with HSE and bilateral damage, 18 HSD and 11 controls, we carried out immunostaining for mossy fibres (dynorphin), and interneuronal networks (NPY, calbindin and calretinin) on sections from the mid-hippocampal body. Fibre sprouting (FS) or loss of expression in the dentate gyrus was semi-quantitatively graded from grade 0 (normal) to grade 3 (marked alteration). Results: Significantly more re-organization was seen with all four markers in the HSE than in the HSD group (P < 0.01). Mild alterations were noted in the HSD group with dynorphin (FS in 3 cases), calretinin (FS in 6 cases), NPY (FS in 11 cases) and calbindin (loss in 10 cases). In eight HSD cases, alteration was seen with more than one antibody but in no cases were the highest grades seen. We also noted NPY and, to a lesser extent, calretinin labelling of Hirano bodies in CA1 of AD cases and some older controls, but not in HSE. Conclusion: Reorganization of excitatory and inhibitory networks in the dentate gyrus is more typical of HSE. Subtle alterations in HSD may be a result of increased hippocampal excitability, including unrecognized seizure activity. An unexpected finding was the identification of NPY-positive Hirano bodies in HSD but not HSE, which may be a consequence of the relative vulnerabilities of interneurons in these conditions. abstract_id: PUBMED:35253765 Quantified Brain Magnetic Resonance Imaging Volumes Differentiate Behavioral Variant Frontotemporal Dementia from Early-Onset Alzheimer's Disease. Background: The differentiation of behavioral variant frontotemporal dementia (bvFTD) from early-onset Alzheimer's disease (EOAD) by clinical criteria can be inaccurate. The volumetric quantification of clinically available magnetic resonance (MR) brain scans may facilitate early diagnosis of these neurodegenerative dementias. Objective: To determine if volumetric quantification of brain MR imaging can distinguish persons with bvFTD from those with EOAD.
Methods: 3D T1 MR brain scans of 20 persons with bvFTD and 45 with EOAD were compared using Neuroreader to measure subcortical and lobar volumes, and Volbrain for hippocampal subfields. Analyses included: 1) discriminant analysis with leave-one-out cross-validation; 2) input of predicted probabilities from this process into a receiver operator characteristic (ROC) analysis; and 3) automated linear regression to identify predictive regions. Results: Both groups were comparable in age and sex with no statistically significant differences in symptom duration. bvFTD had lower volume percentiles in frontal lobes, thalamus, and putamen. EOAD had lower parietal lobe volumes. ROC analyses showed 99.3% accuracy with Neuroreader percentiles and 80.2% with subfields. The parietal lobe was the most predictive percentile. Although there were differences in hippocampal (particularly left CA2-CA3) subfields, it did not add to the discriminant analysis. Conclusion: Percentiles from an MR-based volumetric quantification can help differentiate bvFTD from EOAD in routine clinical care. Use of hippocampal subfield volumes does not enhance the diagnostic separation of these two early-onset dementias.
Results: The age range of the sample was 43 to 106 years, with a mean of 81.49±8.45. Of the 910 cases, 505 were male and 405 were female. For the entire sample, the average educational level was 14.59±2.65 years. Of the 910 individuals, 47 (5.16%) cases had HS pathology present at autopsy. Among the 561 AD cases, 26 (4.43%) had HS pathology present. The frontotemporal dementia (FTD)/Pick's group had the highest percentage of cases with HS pathology (23.08%) followed by primary progressive aphasia (PPA) (16.67%) and Parkinson's disease with dementia (PDD) (5.34%). The HS prevalence rate of this study was not significantly different from all but 2 studies. Conclusion: The prevalence of HS pathology in this sample of autopsy-confirmed dementia cases was similar to other reported HS prevalence rates. This study is the first to report the presence of HS pathology in PDD cases. abstract_id: PUBMED:32278495 Consequences of hyperphosphorylated tau on the morphology and excitability of hippocampal neurons in aged tau transgenic mice. The intracellular accumulation of hyperphosphorylated tau characterizes many neurodegenerative diseases such as Alzheimer's disease and frontotemporal dementia. A critical role for tau is supported by studies in transgenic mouse models expressing the P301L mutation with accumulation of hyperphosphorylated human tau in hippocampal pyramidal neurons of aged mice. In particular, the somatodendritic mislocalization of hyperphosphorylated tau seems to affect the neuronal network of the hippocampus. To show the consequences of aggregation of hyperphosphorylated tau within hippocampal neurons of aged mice, the CA1 pyramidal cells were analyzed morphologically and electrophysiologically. Here we demonstrate in the P301L pR5 mouse model that hyperphosphorylated tau leads to an increase in stubby spines and filopodia, as well as a decrease in total dendritic length of hippocampal pyramidal neurons due to a decrease in apical dendritic length and nodes.
This atrophy is in line with the significant reduction in CA1 long-term potentiation. Furthermore, mutant tau induced a depolarized threshold for action potential initiation and an increased current of inward rectifying potassium channels, which should lead, together with the long-term potentiation decrease, to a decreased excitability of CA1 neurons. abstract_id: PUBMED:16306153 Hippocampal atrophy on MRI in frontotemporal lobar degeneration and Alzheimer's disease. Background: Hippocampal atrophy on magnetic resonance imaging (MRI) is an early characteristic of Alzheimer's disease. However, hippocampal atrophy may also occur in other dementias, such as frontotemporal lobar degeneration (FTLD). Objective: To investigate hippocampal atrophy on MRI in FTLD and its three clinical subtypes, in comparison with Alzheimer's disease, using volumetry and a visual rating scale. Methods: 42 patients with FTLD (17 frontotemporal dementia, 13 semantic dementia, and 12 progressive non-fluent aphasia), 103 patients with Alzheimer's disease, and 73 controls were included. Hippocampal volumetry and the easily applicable medial temporal lobe atrophy (MTA) rating scale were applied to assess hippocampal atrophy. Results: Multivariate analysis of variance for repeated measures showed an effect of diagnostic group on hippocampal volume. There was a significant diagnosis by side (left v right) interaction. Both FTLD and Alzheimer's disease showed hippocampal atrophy compared with controls. Results of the visual MTA rating scale confirmed these findings. Within the FTLD subtypes there were marked differences in hippocampal atrophy. Frontotemporal dementia and semantic dementia showed bilateral hippocampal atrophy, and in semantic dementia the left hippocampus was smaller than in Alzheimer's disease. No significant hippocampal atrophy was detected in non-fluent progressive aphasia. 
Conclusions: Hippocampal atrophy is not only a characteristic of Alzheimer's disease but also occurs in FTLD. The three clinical subtypes of FTLD show different patterns of hippocampal atrophy. abstract_id: PUBMED:33638350 Hippocampal Sclerosis in Frontotemporal Dementia: When Vascular Pathology Meets Neurodegeneration. Hippocampal sclerosis (HS) is a common neuropathological finding and has been associated with advanced age, TDP-43 proteinopathy, and cerebrovascular pathology. We analyzed neuropathological data of an autopsy cohort of early-onset frontotemporal dementia patients. The study aimed to determine whether in this cohort HS was related to TDP-43 proteinopathy and whether additional factors could be identified. We examined the relationship between HS, proteinopathies in frontotemporal cortices and hippocampus, Alzheimer disease, cerebrovascular changes, and age. We confirmed a strong association between HS and hippocampal TDP-43, whereas there was a weaker association between HS and frontotemporal lobar degeneration with TDP-43 pathology (FTLD-TDP). Nearly all of the FTLD-TDP cases had TDP-43 pathology in the hippocampus. HS was present in all FTLD-TDP type D cases, in 50% of the FTLD-TDP A cohort and in 6% of the FTLD-TDP B cohort. Our data also showed a significant association between HS and vascular changes. We reviewed the literature on HS and discuss possible pathophysiological mechanisms between TDP-43 pathology, cerebrovascular disease, and HS. Additionally, we introduced a quantitative neuronal cell count in CA1 to objectify the semiquantitative visual appreciation of HS. abstract_id: PUBMED:24085254 Hippocampal and mesial temporal sclerosis in early-onset frontotemporal lobar degeneration versus Alzheimer's disease. Hippocampal sclerosis (HS) and mesial temporal sclerosis (MTS) may occur with frontotemporal lobar degeneration (FTLD) and Alzheimer's disease (AD) as well as with normal aging. 
Prior studies suggest that HS/MTS may be more closely associated with FTLD but have not directly compared the prevalence and clinical characteristics of HS/MTS between neuropathologically confirmed early-onset (age ≤ 65) cohorts of FTLD and AD. We identified patients with early-onset FTLD (n = 136) and AD (n = 267) from National Alzheimer's Center Consortium databases and compared neuropathological and clinical data between these 2 groups. The FTLD group had a significantly higher prevalence of HS/MTS than that of the AD group. However, HS/MTS was associated with increasing age and memory impairment in the AD group but not in the FTLD group. These findings are consistent with the hypothesis that HS/MTS in FTLD occurs as part of the primary pathological process, rather than as a secondary, nonspecific effect of aging on memory and hippocampal function. Answer: Hippocampal volume (HV) is not a reliable marker to differentiate Alzheimer's disease (AD) from frontotemporal dementia (FTD), particularly the behavioral variant FTD (bvFTD). A study that included patients with AD and bvFTD, who were selected based on clinical and pathophysiological markers, found that while both AD and bvFTD patients showed significant reductions in HV compared to controls, there was no significant difference in HV between the AD and bvFTD groups. The mean normalized hippocampal volume (nHV) could distinguish controls from either AD or bvFTD with high sensitivity and specificity, but it was inefficient in differentiating AD from bvFTD, showing only 9.7% specificity. The overlap in nHV between AD and bvFTD may be due to hippocampal sclerosis associated with FTD (PUBMED:23542864). Additionally, hippocampal shape analysis revealed that while hippocampal atrophy is present in both AD and various subtypes of FTLD, the patterns of atrophy differ. 
For instance, AD displayed most atrophy in the left hippocampal body with relative sparing of the hippocampal head, whereas FTLD subtypes showed different patterns of atrophy in the hippocampus (PUBMED:22414571). Moreover, hippocampal sclerosis dementia (HSD), which shows pathological characteristics of FTD and clinical features that mimic AD, further complicates the use of HV as a distinguishing marker. HSD is often initially misdiagnosed as AD due to its clinical presentation (PUBMED:24363834). A comparative study of the dentate gyrus in hippocampal sclerosis in epilepsy and dementia suggested that reorganization of excitatory and inhibitory networks in the dentate gyrus is more typical of hippocampal sclerosis in epilepsy than in dementia, indicating that hippocampal changes can be disease-specific (PUBMED:24028428). In summary, while hippocampal atrophy is a feature of both AD and FTD, the use of HV alone is not a good marker for differentiating between these two conditions due to the overlap in hippocampal volume reduction and the presence of hippocampal sclerosis in both diseases. Other factors, such as clinical presentation and additional imaging or pathological markers, may be necessary for accurate differentiation (PUBMED:23542864; PUBMED:22414571; PUBMED:24363834; PUBMED:24028428).
Instruction: Pituitary dysfunction after traumatic brain injury in children: is there a need for ongoing endocrine assessment? Abstracts: abstract_id: PUBMED:32621001 Endocrine Dysfunction Following Stroke. Endocrine dysfunction is known to occur after traumatic brain injury. The purpose of this study was to examine the incidence of various endocrine dysfunctions after a stroke. The Taiwan National Health Insurance Research Database (NHIRD) was searched from 2001 to 2011 for patients with a diagnosis of stroke. Stroke patients were matched by diagnosis date, age, and sex to patients without a stroke. Cox proportional hazards regression analyses were performed to compare the incidence of goiter, acquired hypothyroidism, thyroiditis, pituitary dysfunction, and disorders of the adrenal glands between stroke and non-stroke patients. There were 131,951 patients in the stroke group, and 131,951 in the matched non-stroke group (mean age 66.1 ± 14.9 years). Stroke patients had significantly higher risk of acquired hypothyroidism (crude hazard ratio [cHR] = 1.65, 95% confidence interval [CI]: 1.44, 1.90; adjusted hazard ratio [aHR] = 1.65, 95% CI: 1.42, 1.91), pituitary dysfunction (cHR = 2.32, 95% CI: 1.79, 2.99; aHR = 1.92, 95% CI: 1.46, 2.52), and disorders of the adrenal glands (cHR = 1.79, 95% CI: 1.52, 2.12; aHR = 1.62, 95% CI: 1.36, 1.92) than non-stroke patients. Pituitary dysfunction and disorders of the adrenal glands were found in both hemorrhagic stroke and ischemic stroke patients, while hypothyroidism was seen in ischemic stroke patients only. No significant association was found for goiter and thyroiditis. In conclusion, stroke survivors have an approximately 2-fold increased risk of developing acquired hypothyroidism, pituitary dysfunction, or disorders of the adrenal glands. These risks should be taken into account in the management of patients who have ischemic or hemorrhagic strokes.
abstract_id: PUBMED:27608606 Endocrine dysfunction following traumatic brain injury: a 5-year follow-up nationwide-based study. Post-traumatic endocrine dysfunction is a complication of traumatic brain injury (TBI). However, there is a lack of long-term follow-up and large sample size studies. This study included patients suffering from TBI registered in the Health Insurance Database. Endocrine disorders were identified using the ICD codes: 244 (acquired hypothyroidism), 253 (pituitary dysfunction), 255 (disorders of the adrenal glands), 258 (polyglandular dysfunction), and 259 (other endocrine disorders) with at least three outpatient visits within 1 year or one admission diagnosis. Overall, 156,945 insured subjects were included in the final analysis. The 1- and 5-year incidence rates of post-traumatic endocrinopathies were 0.4% and 2%, respectively. The risks of developing a common endocrinopathy (p < 0.001) or pituitary dysfunction (p < 0.001) were significantly higher in patients with a TBI history. Patients with a skull bone fracture had a higher risk of developing pituitary dysfunction at the 1-year follow-up (p < 0.001). At the 5-year follow-up, the association between intracranial hemorrhage and pituitary dysfunction (p = 0.002) was significant. The risk of developing endocrine dysfunction after TBI increased during the entire 5-year follow-up period. Skull bone fracture and intracranial hemorrhage may be associated with short- and long-term post-traumatic pituitary dysfunction, respectively. abstract_id: PUBMED:36788181 Endocrine Dysfunction After Traumatic Brain Injury: An Ignored Clinical Syndrome? Traumatic brain injury (TBI) incurs substantial health and economic burden, as it is the leading reason for death and disability globally. Endocrine abnormalities are no longer considered a rare complication of TBI.
The reported prevalence is variable across studies, depending on the time frame of injury, time and type of testing, and variability in hormonal values considered normal across different studies. The present review reports evidence on the endocrine dysfunction that can occur after TBI. Several aspects, including the pathophysiological mechanisms, clinical consequences/challenges (in the acute and chronic phases), screening and diagnostic workup, principles of therapeutic management, and insights on future directions/research agenda, are presented. The management of hypopituitarism following TBI involves hormonal replacement therapy. It is essential for health care providers to be aware of this complication because at times, symptoms may be subtle and may be mistaken for effects of the brain injury itself. There is a need for stronger evidence for establishing recommendations for optimum management so that they can be incorporated as standard of care in TBI management. abstract_id: PUBMED:23650946 Pituitary dysfunction after traumatic brain injury in children: is there a need for ongoing endocrine assessment? Background: Hypopituitarism has been widely described in adults after traumatic brain injury (TBI); however, the available data in paediatric populations are scarce. Here, we report the results of a prospective, long-term study in children, adolescents and young adults. Study Group: Thirty-seven children (age, 2 months to 19.9 years) of 51 eligible patients were followed for 1 year. Clinical and baseline endocrine variables were assessed in all patients at 3 and 12 months after TBI; children ≥6 years underwent two stimulation tests (glucagon stimulation and megatest). Results: In the group ≥6 years, 11 of 23 patients (47.8%) had a subnormal GH peak 3 months after TBI that persisted in 8 of 23 patients (34%) after 1 year. The GH response showed no correlation with injury severity (GCS, Marshall classification). Growth velocity was normal in all patients, except for one.
Body mass index (BMI) SDS increased significantly in the group with a low GH response. A suboptimal cortisol response was observed in 10 of 23 subjects, which normalized in all but three, 1 year thereafter. All patients but one showed a pubertal response to GnRH testing. No clinical or hormonal abnormalities were detectable in children <6 years. Conclusion: Our results support prospectively following children after TBI: firstly, because the impairment of pituitary function cannot be predicted, and secondly, to avoid the potential consequences of pituitary dysfunction. Prospective clinical trials are needed before recommending systematic screening after TBI and/or GH therapy either in postpubertal children or in prepubertal children who grow normally. abstract_id: PUBMED:15986740 Endocrine dysfunction following traumatic brain injury: mechanisms, pathophysiology and clinical correlations. Despite growing recognition among those who provide care for traumatic brain injury patients, endocrine dysfunction following brain injury is an often under-recognized phenomenon. From historical reports one would conclude that endocrine dysfunction hardly ever occurs following trauma to the head. However, recent studies suggest that a significant proportion of patients suffer some degree of hypopituitarism. To date, there are no clear predicting factors identifying patients at risk for developing hormonal disturbances and thus no parameters exist for screening. Several retrospective analyses and literature reviews, and more recently, a few longitudinal studies of brain-injured patients have been performed.
Methods: Data on endocrine function, on-admission clinical-, laboratory-, and ICU-monitored parameters were available in 63 patients of the surviving 86 severe head injury patients (post-resuscitation GCS under 8) treated at one neurosurgical center during a 10-year period. Results: Hypopituitarism was diagnosed in 68.3 % of the patients. The most frequently affected pituitary axis was the growth hormone (GH): GH deficiency or insufficiency was present in 50.8 %. Central hypogonadism affected 23.8 % of male patients; hypothyroidism and secondary adrenal failure were found in 22.2 and 9.5 % of the investigated population, respectively. Early onset (within 1 year of brain injury) hypopituitarism was found in 24 patients. No connection was found between the development of hypopituitarism and any of the clinical parameters assessed on-admission or at ICU. Significant correlations were found between early endocrine dysfunctions and surgical intervention (OR: 4.64) and the diagnosis of subdural hematoma (OR: 12). In our population, after road traffic accidents, the development of late-onset hypopituitarism was less prevalent (OR: 0.22). Conclusions: Since our results do not indicate any reliable predictive parameter for the development of endocrine dysfunction in a cohort of patients with severe traumatic brain injury, regular endocrine screening of this specific patient population seems obligatory. abstract_id: PUBMED:19597761 Assessment of endocrine abnormalities in severe traumatic brain injury: a prospective study. Objective: The frequency and pattern of endocrine abnormalities among patients with traumatic brain injury have been the subject matter of very few studies. This study was intended to assess the pattern of endocrine dysfunction following severe head injury. Methods: Severe head injury patients admitted to the Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi, from January to December in 1 year formed the study group. 
Apart from clinical assessment, NCCT of the head was performed on all patients on admission. A complete anterior pituitary hormone analysis was performed within 24 h of injury and was repeated at 2 weeks, 3 months and 6 months amongst patients who survived. Results: A total of 99 patients were included in the study. Forty of our patients succumbed in the hospital. The rest of the patients were followed up for 6 months. Elevations of cortisol followed by prolactin were the most common hormonal derangements at admission. Midline shift on CT scans was inversely related to cortisol elevation and directly related to GH elevation. Infarct on CT scans was inversely related to cortisol and LH elevation. A significant alteration was found in the decreasing trend of the mean T4 values and normalisation or a decreasing trend from initially elevated mean cortisol and GH levels during follow-up (p < 0.05). Conclusions: This study reveals that abnormalities in hormonal profiles appear to be relatively common in severe traumatic brain injury and fluctuate significantly over at least 6 months; there is a correlation with age and radiological findings. Hormonal evaluation should be considered in patients with severe brain injury so that appropriate hormonal replacement can be done to optimise the clinical outcome. abstract_id: PUBMED:26287247 Update of Endocrine Dysfunction following Pediatric Traumatic Brain Injury. Traumatic brain injuries (TBI) are common occurrences in childhood, often resulting in long-term, life-altering consequences. Research into endocrine sequelae following injury has gained attention; however, there are few studies in children. This paper reviews the pathophysiology and current literature documenting risk for endocrine dysfunction in children suffering from TBI.
Primary injury following TBI often results in disruption of the hypothalamic-pituitary-adrenal axis and antidiuretic hormone production and release, with implications for both acute management and survival. Secondary injuries, occurring hours to weeks after TBI, result in both temporary and permanent alterations in pituitary function. At five years after moderate to severe TBI, nearly 30% of children suffer from hypopituitarism. Growth hormone deficiency and disturbances in puberty are the most common; however, any part of the hypothalamic-pituitary axis can be affected. In addition, endocrine abnormalities can improve or worsen with time, having a significant impact on children's quality of life both acutely and chronically. Since primary and secondary injuries from TBI commonly result in transient or permanent hypopituitarism, we conclude that survivors should undergo serial screening for possible endocrine disturbances. High indices of suspicion for life threatening endocrine deficiencies should be maintained during acute care. Additionally, survivors of TBI should undergo endocrine surveillance by 6-12 months after injury, and then yearly, to ensure early detection of deficiencies in hormonal production that can substantially influence growth, puberty and quality of life. abstract_id: PUBMED:24497213 Is routine endocrine evaluation necessary after paediatric traumatic brain injury? Background: Traumatic brain injury (TBI) is a common event in childhood. It is a recognised cause of hypopituitarism both in adult and paediatric patients. Routine endocrine evaluation has been proposed for adult TBI-survivors; nevertheless, incongruous data have been reported in children. Aim: The goal of this study was to describe the prevalence of pituitary dysfunction after TBI in a cohort of children. Material/subjects And Methods: This is a cross-sectional study comprising retrospective medical record review and prospective testing. 
Children with brain injury discharged from the Paediatric Intensive Care Unit from 2004 to 2009 were recruited. Height and weight were recorded, systemic examination was performed and baseline pituitary function tests were undertaken. Provocative tests were performed only if abnormal basal levels were detected. Results: Thirty-six patients were included; the mean age at assessment was 7.2 years and the mean interval since injury 3.3 years. All patients had skull fracture or intracranial haemorrhage; 36.6% of them had moderate to severe TBI. No abnormalities were found on examination. Low serum IGF-1 levels were detected in four patients and two patients had low serum cortisol levels with inappropriately normal plasma ACTH concentrations. No evidence of pituitary dysfunction was observed in these patients after clinical follow-up, repeated baseline hormone levels or dynamic function tests. Conclusions: No endocrine sequelae were detected in this population. According to our findings, routine endocrine evaluation in children with mild to moderate TBI might not be justified. abstract_id: PUBMED:9638328 Assessment of neuroendocrine dysfunction following traumatic brain injury. Posttraumatic neuroendocrine pathology may be a clinically significant complication following traumatic brain injury (TBI). Metabolic abnormalities are described after TBI in two cases. A 21-year-old male injured in a motor vehicle accident and admitted in a minimally responsive condition presented with fluctuating high sodium levels, undetectable serum testosterone, and depressed cortisol and thyroid function. Imaging revealed near-complete avulsion of the pituitary stalk leading to panhypopituitarism. A 38-year-old male admitted for occipital skull fractures and brain contusions presented with hyponatremia and low serum testosterone. Both patients required hormonal replacement and correction of electrolyte abnormalities.
A screening protocol adapted for selected patients at risk for endocrine problems is described. While neuroendocrine screening is not advocated in all TBI patients, physicians should be aware of the importance of neuroendocrine dysfunction following TBI. Answer: Yes, there is a need for ongoing endocrine assessment after traumatic brain injury (TBI) in children. Studies have shown that hypopituitarism is a common consequence of TBI and can occur in both the acute and chronic phases following the injury (PUBMED:23650946, PUBMED:26287247). The prevalence of pituitary dysfunction after TBI in children is significant, with nearly 30% of children suffering from hypopituitarism five years after moderate to severe TBI (PUBMED:26287247). Growth hormone deficiency and disturbances in puberty are the most common endocrine abnormalities, but any part of the hypothalamic-pituitary axis can be affected (PUBMED:26287247). The impairment of pituitary function cannot be predicted based on the severity of the injury or initial clinical parameters (PUBMED:23650946, PUBMED:27778105). Therefore, regular endocrine screening is recommended for the specific patient population of children who have suffered a TBI (PUBMED:27778105). This is important because symptoms of endocrine dysfunction may be subtle and could be mistaken for consequences of the brain injury itself (PUBMED:36788181). Furthermore, endocrine abnormalities can improve or worsen over time, which can significantly impact a child's growth, puberty, and quality of life (PUBMED:26287247). Ongoing endocrine assessment is crucial for early detection of hormonal deficiencies, allowing for appropriate hormonal replacement therapy and optimization of clinical outcomes (PUBMED:19597761). In summary, given the potential for transient or permanent hypopituitarism following TBI in children, survivors should undergo serial screening for endocrine disturbances, starting 6-12 months after the injury and then annually (PUBMED:26287247). 
This approach ensures that endocrine deficiencies are detected and managed promptly to minimize their impact on the child's development and well-being.
Instruction: Sterilization of the handengine. Is it a necessity? Abstracts: abstract_id: PUBMED:8014263 Sterilization of the handengine. Is it a necessity? Background: The risk of interpatient infection through a dermabrasion handpiece is not known, nor is there a published standard of handpiece maintenance. Objective: To investigate the possibility of transferring material through the handpiece and evaluate the methods used by dermatologists to disinfect and/or sterilize dermabrasion equipment. Methods: Handengines run while immersed in fluorescein-dyed alcohol were observed for fluorescence. After dermabrasion of a fluorescein-basted pig's foot the instrument was inspected. A survey of handpiece disinfection/sterilization procedures was sent to 114 members of the American Society for Dermatologic Surgery. Results: Although material can be transmitted to the motor/handpiece through the chuck and vice versa, contamination of the instrument did not occur in a dermabrasion simulation. 58.7% of respondents sterilize the chuck and only 38.1% sterilize the motor/handpiece. Conclusions: No absolute requirement to sterilize the handengine is warranted. It is prudent however to sterilize the chuck in a steam autoclave and gas sterilize the motor/handpiece to eliminate the rare possibility of cross-contamination. abstract_id: PUBMED:17614990 The therapeutic exception: abortion, sterilization and medical necessity in Costa Rica. Based on the case of Rosa, a nine-year-old girl who was denied a therapeutic abortion, this article analyzes the role played by the social in medical practice. For that purpose, it compares the different application of two similar pieces of legislation in Costa Rica, where both the practice of abortion and sterilization are restricted to the protection of health and life by the Penal Code. 
As a concept subject to interpretation, a broad conception of medical necessity could enable an ample use of the therapeutic exception and a liberal use of both surgeries. The practice of therapeutic sterilization has been generalized in Costa Rica and has become the legitimate way to distribute contraceptive sterilization. In contrast, therapeutic abortion is very rarely practiced. The analysis carried out proposes that it is the difference in social acceptance of abortion and sterilization that explains the different use that doctors, as gatekeepers of social morality, make of medical necessity. abstract_id: PUBMED:26616831 Regret of female sterilization. Regret of sterilization is inversely correlated with age at the time of sterilization. The minimum age for legal sterilization in Denmark has recently been lowered to 18 years. In Denmark surgical refertilization has almost completely been replaced by in vitro fertilization (IVF). In recent literature pregnancy results after surgical refertilization are easily comparable to IVF. Refertilization may in some cases be advantageous to IVF treatment. Women requesting reversal of sterilization should be offered individualized evaluation and differentiated treatment. It is recommended that surgical refertilization is performed at very few centres. abstract_id: PUBMED:11010313 Optimizing cost and service in sterilization. Controlling sterilization costs is a necessity if medical device manufacturers are to remain competitive. This article looks at ways the industry can reduce these costs and also considers the benefits to be gained by improving relationships with sterilization contractors. abstract_id: PUBMED:141511 The snare method of laparoscopic sterilization: an analysis of 1,000 cases with 4 failures. The snare method has proven to be an effective method of tubal resection for sterilization. In the most experienced hands, significant complications are indeed rare.
However, the failure rate appears to be higher than that of the coagulation-alone method, at least in the postabortal patient. The necessity for pathologic confirmation, a tradition of surgery, is critically questioned. Additional coagulation of the tubal ends after resection is now recommended, especially in the patient seeking abortion as well as sterilization. The authors await the comparative long-term results of bipolar forceps and the newer mechanical methods of laparoscopic tubal sterilization. It is conceivable that the snare and other unipolar electrosurgical instruments designed for female sterilization will become obsolete. abstract_id: PUBMED:32201635 The Structure of Essentialist Explanations of Necessity. Fine, Lowe and Hale accept the view that necessity is to be explained by essences: Necessarily p iff, and because, there is some x whose essence ensures that p. Hale, however, believes that this strategy is not universally applicable; he argues that the necessity of essentialist truths cannot itself be explained by once again appealing to essentialist truths. As a consequence, Hale holds that there are basic necessities that cannot be explained. Thus, Hale style essentialism falls short of what Wilsch calls the explanation-challenge (EC) for the metaphysics of necessity. Without endorsing the EC, I argue that Hale's argument for basic, unexplained necessities fails due to a misunderstanding of the structure of essentialist explanations. Getting clear about the structure of essentialist explanations of necessity leads to a re-evaluation of crucial circularity- and regress-arguments that have been discussed in the debate about essentialism. abstract_id: PUBMED:1340102 Sterilization of biocompatible materials: which method to choose? Sterilization of biomaterials, in hospitals, must be considered as the re-sterilization (and not reuse) of devices: prostheses, implants, catheters... 
This practice is not allowed, according to the circular of May 14th, 1986; however, it is a necessity in various cases. It must be carried out with extreme care, after evaluation of the different methods of sterilization: steam, dry heat, ethylene oxide, formaldehyde or ionizing radiation, and of their effects on the behaviour of the biomaterial. abstract_id: PUBMED:1007712 Problems of control of centralized sterilization in the therapeutic-prophylactic institutions The authors analyze their many years of experience in the work of the sanitary-epidemiological station on control of centralized sterilization departments (CSD) in medical institutions. This control system included preventive and current sanitary supervision, assessment of the efficacy and peculiarities of the use of special equipment, and the checking of the presterilization treatment and sterilization of medical objects. The need to improve methods of packing the materials and instruments subject to sterilization, and to thoroughly study the sanitary-hygienic conditions of the CSD personnel, was demonstrated. abstract_id: PUBMED:29312058 Necessity as a Function of Skewness. With necessary condition analysis (NCA), a necessity effect is estimated by calculating the amount of empty space in the upper left corner of a plot with a predictor X and an outcome Y. In the present simulation study, calculated necessity effects were found to have a negative association with the skewness of the predictor and a positive association with the skewness of the outcome. The standard error of the necessity effect was also found to be influenced by the skewness of the predictor and the skewness of the outcome, as well as by sample size, and a way to calculate a confidence interval for the necessity effect is presented. At least some of the findings obtained with NCA are well within the range of what can be expected from the skewness of the predictor and the outcome alone.
abstract_id: PUBMED:2489166 Sterilization and disinfection in clinical orthodontics Recently a great deal of attention has been devoted to the spread of the hepatitis B and AIDS viruses, and the high risk of transmission of these viruses during dental operations has made orthodontists more aware of the necessity of sterilization and disinfection. This article discusses the methods of sterilization and disinfection. Avoidance of corrosion of instruments and the use of an ultrasonic cleaner to avoid contamination are explained. In the conclusion, the measures that the orthodontist has to take in order to protect himself are discussed. Answer: The necessity of sterilizing the handengine, a tool used in dermabrasion procedures, has been debated. According to a study investigating the possibility of transferring material through the handpiece, it was found that while material can be transmitted to the motor/handpiece through the chuck and vice versa, contamination of the instrument did not occur in a dermabrasion simulation. The survey conducted among dermatologists revealed that 58.7% of respondents sterilize the chuck and only 38.1% sterilize the motor/handpiece. Based on these findings, the study concluded that there is no absolute requirement to sterilize the handengine. However, it is considered prudent to sterilize the chuck in a steam autoclave and gas sterilize the motor/handpiece to eliminate the rare possibility of cross-contamination (PUBMED:8014263).
Instruction: Training forward surgical teams: do military-civilian collaborations work? Abstracts: abstract_id: PUBMED:21181670 Training forward surgical teams: do military-civilian collaborations work? Unlabelled: The US Army and the Ryder Trauma Center (Jackson Memorial Hospital, Miami, Florida) teamed up to provide a training environment (ie, the Army Trauma Training Center) in which forward surgical teams can attend to gain critical teamwork and trauma technical skills to prepare for deployment to Iraq or Afghanistan. The purpose of this study was to gather trainee reactions to the military-civilian collaboration provided at ATTC after deployment to Iraq or Afghanistan. Methods: Survey respondents were 135 US Army personnel (an approximately 50% response rate) who participated in the ATTC 2-week team training program between January 2005 and June 2007. The survey asked questions pertaining to their experience in the resuscitation unit and patient contact at the trauma center. Results: Over 90% of participants agreed or strongly agreed that training in the resuscitation area was beneficial. A majority of participants also agreed or strongly agreed that the patient contact experience was sufficient (78.5%), was a good learning opportunity (90%), and that the experience was a good opportunity to apply what they had learned in their classroom training (over 80%). Areas of suggested improvement included the importance of clarifying roles between the ATTC trainees and the Ryder Trauma Center residents and interns. Trainees would have preferred an extension of the training as a whole, as they felt it was rushed in order to fit all training opportunities into the 2 weeks that they were in Miami. Finally, trainees noted the lack of injuries admitted to the trauma center which replicate injuries caused by blasts (ie, improvised explosive devices). 
Conclusions: The results of our efforts indicate that military-civilian collaborations do in fact work and are beneficial to both military and civilian medical providers. The opportunity to perform as a team in their respective roles, to respond to a variety of actual trauma patients, and to access civilian medical providers was beneficial. As mentioned, such collaborations are not without limitations. The good news is that most of the identified limitations will be corrected to ensure trainees get the best experience possible. abstract_id: PUBMED:35487022 Establishing collaborations in global neurosurgery: The role of InterSurgeon. The global deficiency in surgical care has been highlighted in the past several years, through the publication of the Lancet Commission on Global Surgery in 2015, the passage of WHA Resolution 68.15, and concerted efforts by advocacy organizations such as the G4 Alliance. Approximately 23,300 additional neurosurgeons are estimated to be needed to address the greater than 5 million essential neurosurgical cases that are not performed annually, most in low- and middle-income countries (LMICs). However, increasing recognition of the ease and feasibility of virtual technology prompted a shift towards virtual modes of communication. InterSurgeon (https://www.intersurgeon.org/), an independent, internet-based social network platform, has allowed for formal connection between global surgery advocates who may have complementary needs and resources. This manuscript aims to: 1) characterize the current progress of InterSurgeon, 2) describe lessons learned from the creation and use of InterSurgeon, and 3) discuss future directions for InterSurgeon. Equitable, well-designed collaborations are central to progress in global neurosurgery. InterSurgeon has catalyzed collaborations within global neurosurgery across world regions and country income status.
In addition to its role in facilitating traditional in-person collaborations, InterSurgeon will become an increasingly important tool for connecting surgeons worldwide as virtual collaboration and augmented reality training paradigms become important components of global surgery capacity building. abstract_id: PUBMED:21483517 Surgical training in Libya: the way forward. General surgical training programmes worldwide are designed to ensure that the advanced surgical trainee achieves competency in knowledge, skill, and attitude, both operative and non-operative, in a wide range of common surgical conditions, enabling the trainee to practice competently as a surgeon. Therefore, the goal of surgical training is to train broad-based, highly qualified surgical specialists who can provide excellence in the care of patients with a wide range of surgical diseases. To this end we have to strive toward a training programme that emphasizes education over service, training in all major surgical subspecialties, and strong academic affiliations. abstract_id: PUBMED:26170262 Cognitive Training Can Reduce Civilian Casualties in a Simulated Shooting Environment. Shooting a firearm involves a complex series of cognitive abilities. For example, locating an item or a person of interest requires visual search, and firing the weapon (or withholding a trigger squeeze) involves response execution (or inhibition). The present study used a simulated shooting environment to establish a relationship between a particular cognitive ability and a critical shooting error: response inhibition and firing on civilians, respectively. Individual-difference measures demonstrated, perhaps counterintuitively, that simulated civilian casualties were not related to motor impulsivity (i.e., an itchy trigger finger) but rather to an individual's cognitive ability to withhold an already initiated response (i.e., an itchy brain).
Furthermore, active-response-inhibition training reduced simulated civilian casualties, which revealed a causal relationship. This study therefore illustrates the potential of using cognitive training to possibly improve shooting performance, which might ultimately provide insight for military and law-enforcement personnel. abstract_id: PUBMED:37603939 Trauma and Critical Care Military-Civilian Publications Increased After the COVID-19 Pandemic: A Literature Review. Introduction: There continues to be a growing demand for military-civilian partnerships (MCPs) in research collaborations developing medical trauma care in domestic and international affairs. The objective of this comprehensive review is to investigate the difference in the quantity of MCP trauma and critical care publications before and after the COVID-19 pandemic. Methods: A systematic literature review was performed for the calendar years 2018 and 2021 utilizing MEDLINE, Cochrane, and EMBASE databases. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we performed a three-tiered review of 603 English language articles to identify trauma-related military and/or civilian partners and describe the changes in geographical relationships. Results: A total of 96 (2018) and 119 (2021) articles met screening criteria for trauma and critical care studies and were used for final data extraction. Ultimately, 59 (2018) and 71 (2021) papers met the inclusion criteria of identifying trauma/critical care MCPs and identified both military and civilian partners. There was also an increase from 10 (2018) to 17 (2021) publications that mentioned advocacy for MCP. 
Using the author affiliations, four regional MCP types were recorded: of 2018 articles, locoregional (3.4%), US-national (47.5%), single international country (42.4%), and between multiple countries (6.8%); of 2021 articles, locoregional (15.5%), US-national (38%), single international country (29.6%), and between multiple countries (16.9%). There has been an increase in the number of locoregional and multinational MCPs and an overall increase in the number of collaborative trauma publications and MCP advocacy papers. A national geographical heat map was developed to illustrate the changes from 2018 to 2021. Conclusions: There has been an increase in the number of recorded trauma and critical care MCP publications post-pandemic. The growth in the number of manuscripts in more regions post-pandemic suggests an increase in the recognition of collaborations that contribute not only to conflict readiness but also advancements in trauma and surgical care. abstract_id: PUBMED:34629757 The impact of the COVID-19 pandemic on international reconstructive collaborations in Africa. Background: The SARS-CoV-2 (COVID-19) pandemic has catalysed a widespread humanitarian crisis in many low- and middle-income countries around the world, with many African nations significantly impacted. The aim of this study was to quantify the impact of the COVID-19 pandemic on the planning and provision of international reconstructive collaborations in Africa. Methods: An anonymous, 14-question, multiple choice questionnaire was sent to 27 non-governmental organisations who regularly perform reconstructive surgery in Africa. The survey was open to responses for four weeks, closing on the 7th of March 2021. A single reminder was sent out at 2 weeks. The survey covered four key domains: (1) NGO demographics; (2) the impact of COVID-19 on patient follow-up; (3) barriers to the safe provision of international surgical collaborations during COVID-19; (4) the impact of COVID-19 on NGO funding. 
Results: A total of ten reconstructive NGOs completed the survey (response rate, 37%). Ethiopia (n = 5) and Tanzania (n = 4) were the countries where most collaborations took place. Plastic, reconstructive and burns surgery was the most common sub-speciality (n = 7). For NGOs that did not have a year-round presence in country (n = 8), only one NGO was able to perform reconstructive surgery in Africa during the pandemic. The most common barrier identified was travel restrictions (within country, n = 8 or country entry-exit, n = 7). Pre-pandemic, 1547 to ≥ 1800 patients received reconstructive surgery on international surgical collaborations. After the outbreak, 70% of NGOs surveyed had treated no patients, with approximately 1405 to ≥ 1640 patients left untreated over the last year. Conclusions: The COVID-19 pandemic has placed huge pressures on health services and their delivery across the globe. This theme has extended into international surgical collaborations, leading to increased unmet surgical needs in low- and middle-income countries. Level of evidence: Not gradable. Supplementary Information: The online version contains supplementary material available at 10.1007/s00238-021-01892-4.
A total of 4808 abstracts were screened by two independent reviewers, and 257 abstracts were selected for full-text review. Articles describing prehospital care and military experience were excluded from the full-text review. Results: Of the eight relevant full texts, 10 established courses for civilian surgeons were identified. Cadaver-based teaching combined with didactics were the most common course themes. Courses provided technical education focused on the management of trauma and burns as well as emergencies in orthopedics, neurosurgery, obstetrics, and gynecology. Other courses were in specialty surgery, mainly orthopedics. Two fellowship programs were identified, and these provide a different model for training humanitarian surgeons. Conclusions: Humanitarian surgery is often practiced in austere environments, and civilian surgeons must be adequately trained to first do no harm. Current programs include cadaver-based courses focused on enhancing trauma surgery and surgical subspecialty skills, with adjunctive didactics covering resource allocation in austere environments. Fellowship programs may serve as an avenue to provide a more standardized education and a reliable pipeline of global surgeons.
We contend that improved operationalization of co-created implementation collaborations is critical to sparking synergy and addressing differentials based on power, privilege, knowledge, and access to resources among stakeholders. These differentials can undermine future implementation and sustainment efforts if not addressed early in the research effort. An insufficient understanding of the guiding principles of co-created implementation collaborations may limit the scientific value of evaluation processes, and researchers' ability to replicate outcomes. We propose a perspective foregrounded in the concept of co-creation to guide the structuring of implementation collaboratives through five principles. We offer three case examples informed by the Exploration, Preparation, Implementation, Sustainment (EPIS) Framework to illustrate the application of these co-creation principles. Lastly, we offer recommendations for promoting co-creation in IS research moving forward. abstract_id: PUBMED:30265016 Supervisor support training effects on veteran health and work outcomes in the civilian workplace. This randomized controlled trial involved the development and evaluation of a supervisor support training intervention in the civilian workforce called VSST: Veteran-Supportive Supervisor Training. A theoretically based intervention in the workplace is critical to ensuring a smooth transition for service members and their families to civilian life, leading to improved psychological and physical health and improved work outcomes among service members. Thirty-five organizations were recruited and randomized to the VSST training program or a waitlist control group. Within those organizations, 497 current or former (post 9/11) service member employees were asked to complete baseline and 3- and 9-month follow-up surveys covering work, family, and health domains. 
The computerized 1-hr training and the behavior tracking that followed were completed by 928 supervisors from the participating organizations. Intervention training effects were evaluated using an intent-to-treat approach, comparing outcomes for service members who were in organizations assigned to the training group versus those who were in organizations assigned to the control group. Moderation effects revealed the intervention was effective for employees who reported higher levels of supervisor and coworker support at baseline, demonstrating the importance of the organizational context and trainee readiness. The results did not show evidence of direct effects of the intervention on health and work outcomes. Qualitative data from supervisors who took the training also demonstrated the benefits of the training. This study affirms and adds to the literature on the positive effects of organizational programs that train supervisors to provide social support, thereby improving health and work outcomes of employees who receive more support. (PsycINFO Database Record (c) 2018 APA, all rights reserved). abstract_id: PUBMED:35490120 Outcomes of microsurgical free tissue transfer performed on international surgical collaborations in low-income and middle-income countries: A systematic review and meta-analysis. Background: Microsurgical free tissue transfer is the gold standard for reconstruction of significant soft tissue and bony defects following cancer resection and trauma. Many reconstructive units in low-income and middle-income countries (LMICs) do not yet have access to the resources or training required to perform microsurgical procedures. Long-term international collaborations have been formed with annual reconstructive programmes conducting microsurgery. Aims: To critically analyze outcomes of microsurgical free tissue transfer performed on international reconstructive collaborations in LMICs.
Methods: PRISMA-compliant systematic review and meta-analysis of outcomes for free tissue transfer performed during international collaborations in LMICs using an inverse variance model. The study protocol was published prospectively and registered with PROSPERO (ID: CRD42021225613). Results: Seven studies included 290 flaps in 284 patients. The most common sites requiring reconstruction were head and neck (53%; n = 153) and lower limb (7.9%; n = 23). The most common free flaps were radial forearm (22%; n = 64) and anterolateral thigh (18%; n = 51). Total flap failure rate was 3.8% (n = 13; 95% confidence interval (CI) = 1.9-6.3%). Overall complication rate was 38% (95% CI = 27-48%), with 19% of flaps requiring emergency return to theatre (95% CI = 14-26%). Flap salvage was successful in 52% of take-backs (95% CI = 15-88%). Conclusions: Free flaps performed during international surgical collaborations in LMICs have comparable failure rates to those performed in higher-income settings. However, there are higher complication and take-back rates. This should be taken into account when planning international collaborations. These results should help preoperative counselling and the consent process. Answer: Yes, military-civilian collaborations for training forward surgical teams do work. A study involving the US Army and the Ryder Trauma Center (Jackson Memorial Hospital, Miami, Florida) created the Army Trauma Training Center (ATTC), where forward surgical teams could gain critical teamwork and trauma technical skills in preparation for deployment to Iraq or Afghanistan. The survey results from 135 US Army personnel who participated in the ATTC 2-week team training program between January 2005 and June 2007 showed that over 90% of participants found the training in the resuscitation area beneficial.
Additionally, a majority agreed that the patient contact experience was sufficient (78.5%), a good learning opportunity (90%), and a good opportunity to apply classroom training (over 80%). Despite some areas for improvement, such as clarifying roles and extending the training duration, the collaboration was deemed beneficial to both military and civilian medical providers (PUBMED:21181670). Furthermore, the increase in trauma and critical care military-civilian publications after the COVID-19 pandemic suggests a growing recognition of the value of such collaborations, not only for conflict readiness but also for advancements in trauma and surgical care (PUBMED:37603939). These collaborations are essential for addressing the global deficiency in surgical care, as highlighted by the Lancet Commission on Global Surgery and the establishment of platforms like InterSurgeon, which facilitate connections between global surgery advocates (PUBMED:35487022). In conclusion, military-civilian collaborations for training forward surgical teams are effective and beneficial, providing essential skills for deployment and contributing to the broader field of trauma and surgical care.
Instruction: Is low b-factors-based apparent diffusion coefficient helpful in assessing renal dysfunction? Abstracts: abstract_id: PUBMED:26280633 Is low b-factors-based apparent diffusion coefficient helpful in assessing renal dysfunction? Purpose: To investigate whether low b-factors-based apparent diffusion coefficient (ADC) is helpful in assessing renal dysfunction. Materials And Methods: Forty-four subjects were recruited and classified as having severe renal injury (sRI) or not (non-sRI) according to a threshold of estimated glomerular filtration rate (eGFR) = 30 ml/min/1.73 m(2). A free-breathing DWI with 7 b-factors (0, 20, 40, 80, 200, 400, and 800 s/mm(2)) was performed. After choosing an anatomical section that covered the largest part of the right kidney, the regions of interest covered the renal parenchyma. For each subject, two apparent diffusion coefficients based on all b-factors less than 400 s/mm(2) (ADC0-400) and 800 s/mm(2) (ADC0-800) were calculated by fitting the signal intensity at different b-factors to a mono-exponential decay, respectively. It was found that the examination time of the free-breathing DWI sequence in this study can be reduced from 3.27 to 2.55 min if the b-factor (800 s/mm(2)) is removed. Results: ADC0-400 was larger than ADC0-800 in each group (P < 0.05). Both ADC0-400 and ADC0-800 were smaller in the sRI group than those in the non-sRI group (P < 0.01), and they were linearly related with eGFR (P < 0.05). While a significant difference was not found between the two coefficients to differentiate sRI from non-sRI participants (P > 0.05), a strong correlation was present between ADC0-400 and ADC0-800 (r = 0.856, P < 0.001). Conclusion: The low b-factors-based ADC was an economical option for reducing the examination time without sacrificing its diagnostic performance. abstract_id: PUBMED:23599561 Diffusion-weighted MRI in assessment of renal dysfunction.
Background: Diffusion-weighted magnetic resonance imaging (DW-MRI) in renal diseases is an evolving field and its potential is yet to be fully realized. Purpose: To study the relationship between apparent diffusion coefficient (ADC) values of renal parenchyma and serum markers of renal function and stage of chronic kidney disease (CKD). Materials And Methods: A retrospective review was performed of all adult patients who underwent DW-MRI (at b-values of 0 and 500 s/mm(2)) for renal lesions from January 2009 to September 2010 and revealed 88 patients, of which 22 patients had renal dysfunction and 66 had normal renal function. Of these 22, 15 patients were known cases of CKD and were staged depending on disease severity. ADC values were determined for renal parenchyma and compared. Receiver operating characteristic (ROC) curves were drawn to establish cut-off ADC values. Pearson's correlation coefficient (R) was calculated between ADC and renal function parameters. Results: ADC values in patients with renal dysfunction were significantly lower than in patients with normal renal function (2.1133 ± 0.2851 vs. 2.3198 ± 0.1246 (×10(-3) mm(2)/s)). ADC values lower than 2.0354 (×10(-3) mm(2)/s) were seen only with renal dysfunction and higher than 2.4516 (×10(-3) mm(2)/s) were seen only with normal function. There was significant inverse correlation between ADC and serum creatinine (R = -0.530), blood urea (R = -0.502), and significant linear correlation (R = 0.784) with estimated glomerular filtration rate (eGFR). ADC values showed a statistically significant decreasing trend with increasing stage of CKD. Conclusion: ADC values may serve as an additional marker for the presence and degree of renal dysfunction. abstract_id: PUBMED:34925650 Does diffusion-weighted magnetic resonance imaging help in the detection of renal parenchymal disease and staging/prognostication in chronic kidney disease?
Purpose: Diffusion-weighted imaging (DWI) in renal diseases is an upcoming modality, and its utility as an additional marker is yet to be proven. This study was intended to find the relationship between apparent diffusion coefficient (ADC) values with renal function tests and stages of chronic kidney disease (CKD) to assess renal dysfunction, and to label a cut-off for normal renal function and dysfunction. Material And Methods: A prospective diagnostic study was conducted on 120 patients: 60 with deranged renal function tests (RFT) and 60 with normal RFT. DWI using a 1.5-Tesla MRI (at b-values of 0 and 500 s/mm2) was done. A region of interest of size 1-2 cm2 was placed on renal parenchyma in the region of medulla, one each, over the superior, mid, and lower regions of each kidney separately. ADC values were recorded for renal parenchyma and compared. Results: In patients with renal dysfunction ADC values were significantly lower than in patients with normal function (1.75 ± 0.25 vs. 2.28 ± 0.21 of right kidney and 1.79 ± 0.17 vs. 2.29 ± 0.21 of left kidney [×10-3 mm2/s]; p = 0.001). ADC values of different stages of CKD showed a decreasing trend with increasing stage. Conclusions: ADC values taken at all poles to get focal involvement of the kidney can be used to measure each kidney separately, and values can be individually correlated with the elevated renal parameters. The cut-off value of the mean ADC for individual kidneys was > 2.28 (×10-3 mm2/s) in normal renal function and < 2.00 (×10-3 mm2/s) in renal dysfunction.
Method: Study population included 40 patients with single kidney transplant. The patients were divided into 3 groups, on the basis of creatinine clearance (CrCl) values calculated using the Cockcroft-Gault formula: group A, including patients with normal renal function (CrCl ≥ 60 mL/min); group B, which refers to patients with moderate renal impairment (CrCl > 30 but < 60 mL/min); and, finally, group C, which means severe renal deterioration (CrCl ≤ 30 mL/min). All patients were investigated with a 1.5 Tesla MRI scanner, acquiring DWI and DTI sequences. A Mann-Whitney U test was adopted to compare apparent diffusion coefficients (ADCs) and fractional anisotropy (FA) measurements between groups. Receiver operating characteristic (ROC) curves were created for prediction of normal renal function (group A) and renal failure (group C). Pearson correlation was performed between renal clearance and functional imaging parameters (ADC and FA), obtained for cortical and medullar regions. Results: The Mann-Whitney U test revealed a highly significant difference (p < 0.01) between patients with low CrCl (group C) and normal CrCl (group A) considering both medullar ADC and FA and cortical ADC. Regarding contiguous groups, the difference between groups B and C was highly significant (p < 0.01) for medullar ADC and significant (p < 0.05) for cortical ADC and medullar FA. No difference between these groups was found considering cortical FA. Analyzing groups A and B, we found a significant difference (p < 0.05) for both medullar ADC and FA, while no difference was found for cortical ADC and FA. The strongest Pearson correlation was found between CrCl and medullar ADC (r = 0.65). For predicting normal renal function or severe renal impairment, the highest values of AUC were observed using medullar ADC cut-off values (respectively 0.885 and 0.871); medullar FA also showed high accuracy (respectively 0.831 and 0.853).
Conclusions: DWI and DTI are promising tools for non-invasive monitoring of renal function; medullar ADC proved to be the best parameter for renal function assessment. abstract_id: PUBMED:25382444 Relationship between renal apparent diffusion coefficient values and glomerular filtration rate in infants with congenital hydronephrosis. The aim was to investigate the relationship between apparent diffusion coefficient (ADC) values measured by diffusion-weighted magnetic resonance imaging (DW MRI) and the split glomerular filtration rate (GFR) in infants with congenital hydronephrosis. Diffusion-weighted imaging (DWI) (b = 0 and 700 seconds/mm(2)) was performed with a General Electric Company (GE) Signa 1.5T MR unit in 46 infants suffering single congenital hydronephrosis and in 30 healthy infants as a normal control group. The ADCs were calculated with regions of interest (ROIs) positioned in the renal parenchyma. The 46 obstructed kidneys were classified into four groups according to the GFR level: renal dysfunction compensated group, renal dysfunction decompensated group, renal failure group, and uremia group. The renal ADCs in six groups (normal kidneys in control group, contralateral kidneys, and four groups of hydronephrotic kidneys) were compared statistically using analysis of variance (ANOVA), and the correlative relationship between ADCs and GFR was examined by Pearson's correlation test. There were statistically significant differences in renal ADCs among the six groups. The ADCs of hydronephrotic kidneys were lower than those of the normal kidneys. There was a moderate positive correlation between the ADCs of hydronephrotic kidneys and split GFR (r = 0.744).
This study indicated that the ADCs of congenital hydronephrotic kidneys were lower than those of normal renal parenchyma, and there was a positive correlation between the ADCs and split renal GFR, which demonstrates that the ADCs can reflect the filtration function of hydronephrotic kidneys and may provide some reference to help clinical physicians explore a novel noninvasive approach to evaluate single renal function. abstract_id: PUBMED:31956203 Apparent Diffusion Coefficient in the Resolution of Renal Ischemia after Angioplasty on Diffusion-weighted Imaging: Renal Artery Stenosis Caused by Progressive Thrombosis in Residual Chronic Aortic Dissection. We report a case in which diffusion-weighted magnetic resonance imaging (DWI) demonstrated renal artery stenosis-related renal ischemia and the therapeutic efficacy of revascularization. The patient was a 73-year-old man, who underwent descending thoracic aortic replacement due to DeBakey IIIb chronic aortic dissection, and who showed progressive renal dysfunction due to right renal artery stenosis caused by false lumen thrombosis. DWI demonstrated a decreased apparent diffusion coefficient (ADC) in the right kidney, indicating renal ischemia. Angioplasty with stenting restored renal perfusion and improved the renal function, resulting in the normalization of the decreased ADC in the treated kidney. Thus, DWI can be used to monitor renal ischemia in cases involving advanced renal artery stenosis. abstract_id: PUBMED:26189976 Assessment of renal dysfunction with diffusion-weighted imaging: comparing intra-voxel incoherent motion (IVIM) with a mono-exponential model. Background: Because renal dysfunction is a worldwide problem, accurate assessment is required for planning treatment and follow-up. Intra-voxel incoherent motion (IVIM) can isolate fast from slow water motion in well-perfused organs and may be helpful in assessing renal dysfunction.
Purpose: To determine the clinical value of IVIM in the assessment of renal dysfunction compared with a mono-exponential model. Material And Methods: Fifty-four consecutive participants (mean age, 53.13 ± 13.96 years) were recruited into this study. The estimated glomerular filtration rate (eGFR) was calculated to classify the participants as having severe renal injury (sRI, eGFR ≤ 30 mL/min/1.73 m(2)) or not (non-sRI). DWI with seven b-factors was performed. Image analysis was performed by a radiologist to generate an apparent diffusion coefficient map (ADCmon) by the mono-exponential model, and diffusion coefficient (Dslow and Dfast) and fraction of fast diffusion (Ffast) maps by IVIM. The circular regions of interest were placed at the interface between the cortex and medulla for parameter measurements. Results: The ADCmon, Dslow, Dfast, and Ffast were lower in sRI than in non-sRI (P < 0.05). ADCmon and Dslow were positively related with eGFR (P < 0.05). For differentiating sRI from non-sRI, the receiver operating characteristic curve indicated no significant difference between the two methods (P > 0.05). Furthermore, the correlation was 0.93 between ADCmon and Dslow, followed by 0.57 between Dfast and Ffast, 0.48 between ADCmon and Dfast, and 0.34 between ADCmon and Ffast (P < 0.05). Conclusion: The IVIM model contributed little to improving the assessment of renal dysfunction compared with a mono-exponential model. abstract_id: PUBMED:10373031 Measurement of the apparent diffusion coefficient in diffuse renal disease by diffusion-weighted echo-planar MR imaging. The purpose of this study was to determine the relationship between the apparent diffusion coefficient (ADC) and diffuse renal disease by diffusion-weighted echo-planar magnetic resonance (MR) imaging (EPI). Thirty-four patients were examined with diffusion-weighted EPI. The average ADC values were 2.55 x 10(-3) mm2/sec for the cortex and 2.84 x 10(-3) mm2/sec for the medulla in the normal kidneys.
The ADC values in both the cortex and medulla in chronic renal failure (CRF) kidneys and in acute renal failure (ARF) kidneys were significantly lower than those of the normal kidneys. In renal artery stenosis kidneys, the ADC values in the cortex were significantly lower than those of the normal and the contralateral kidneys. In the cortex, ADC values were above 1.8 x 10(-3) mm2/sec in all 32 normal kidneys, ranging from 1.6 to 2.0 x 10(-3) mm2/sec in all 8 ARF kidneys, and below 1.5 x 10(-3) mm2/sec in 14 of 15 CRF kidneys. In the medulla, there was considerable overlap in the ADC values of the normal and diseased kidneys. There was a linear correlation between ADC value and sCr level in the cortex (r = 0.75) and a weak linear correlation in the medulla (r = 0.60). Our results show that diffusion-weighted MR imaging may be useful to identify renal dysfunction. abstract_id: PUBMED:30039074 Renal Apparent Diffusion Coefficient Values in Patients with Obstructive Uropathy and High Values of Creatinine. Purpose: Diffusion weighted magnetic resonance imaging (DW-MRI) of the kidneys provides noninvasive information on renal function in healthy volunteers, and it is feasible in severely ill patients. It may provide information on the degree of kidney dysfunction. The purpose of this study is to evaluate apparent diffusion coefficient (ADC) values using DW-MRI in patients with obstructive uropathy and elevated serum creatinine levels. Methods: Fifty patients with high serum creatinine levels and diagnoses of chronic urinary obstruction detected by ultrasonography were included in this study. DW-MRIs were obtained from 50 patients with hydronephrotic kidneys and 26 healthy volunteers using a 1.5T whole-body MR scanner. Results: ADC measurements of renal parenchyma in hydronephrotic kidneys were significantly lower compared to normal kidneys (p < 0.001).
Conclusion: The measurement of ADC values has potential value in the evaluation of the functional status of hydronephrotic kidneys. abstract_id: PUBMED:36910887 A study of the diagnostic efficacy of diffusion-weighted magnetic resonance imaging in the diagnosis of perianal fistula and its complications. Purpose: To determine the diagnostic efficacy of diffusion-weighted magnetic resonance imaging (DWI) in the diagnosis of perianal fistula and its complications. Material And Methods: This is a retrospective study based on the data of 47 patients with a clinical diagnosis of perianal fistula, who had an MRI study performed on a 1.5-T GE Signa MR scanner. DWI sequences were done using 3 different b-values. Other routine MR sequences were included. The MR images were studied to compare the diagnostic efficacy of the DW MRI sequence and other sequences in diagnosing perianal fistula and its complications. Apparent diffusion coefficient (ADC) values of abscesses and inflammatory soft tissue lesions were measured using ADC maps. The standard reference to obtain diagnostic efficacy was post-surgical data. Results: Seventy-nine perianal fistulas were diagnosed in 47 patients who had undergone an MRI study. The sensitivities of the different MR sequences in diagnosing perianal fistulas were: T2 FSFSE, 92%; DWI, 96%; combined T2+DWI, 100%; and post-gadolinium T1 FS, 100%. The mean apparent diffusion coefficient for the abscess in our study was 0.990 ± 0.05 × 10-3 mm2/s, and the mean apparent diffusion coefficient for an inflammatory soft tissue lesion was 1.440 ± 0.05 × 10-3 mm2/s. The optimal ADC cut-off for the abscess was 1.098 × 10-3 mm2/s showing 100% sensitivity and 93.8% specificity. Conclusions: DW imaging is a reliable sequence to diagnose perianal fistula and its complications. Measurement of ADC values is reliable in diagnosing perianal abscess collection.
The DWI sequence is also helpful for patients with renal impairment, in whom IV gadolinium is contraindicated. Answer: Yes, the apparent diffusion coefficient (ADC) calculated with low b-factors is helpful in assessing renal dysfunction. Studies have shown that ADC values, which are derived from diffusion-weighted magnetic resonance imaging (DW-MRI), correlate with renal function and can serve as an additional marker for the presence and degree of renal dysfunction (PUBMED:23599561; PUBMED:34925650). For instance, one study found that ADC values calculated using low b-factors (ADC0-400) were larger than those calculated with a higher b-factor (ADC0-800), but both were smaller in the severe renal injury group compared to the non-severe group. These ADC values were linearly related to the estimated glomerular filtration rate (eGFR), indicating their potential utility in assessing renal function (PUBMED:26280633). Another study confirmed that ADC values were significantly lower in patients with renal dysfunction compared to those with normal renal function, with a significant inverse correlation between ADC and serum creatinine and blood urea, and a significant linear correlation with eGFR (PUBMED:23599561). Furthermore, ADC values showed a statistically significant decreasing trend with increasing stage of chronic kidney disease (CKD), suggesting that ADC values can be used to measure each kidney separately and can be individually correlated with elevated renal parameters (PUBMED:34925650). In the context of kidney transplants, medullar ADC was found to be the best parameter for renal function assessment, with a strong correlation between creatinine clearance and medullar ADC (PUBMED:26937439). In summary, ADC values calculated with low b-factors from DW-MRI are indeed helpful in assessing renal dysfunction, providing a non-invasive method to evaluate renal function and potentially aiding in the diagnosis and staging of CKD.
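The low versus high b-factor distinction in the answer above rests on the standard monoexponential DWI signal model, S(b) = S0 · exp(-b · ADC), so ADC = ln(S0/Sb)/b for a given b-factor pair. A minimal sketch, using made-up signal intensities rather than patient data:

```python
import math

def adc(s0, sb, b):
    """Apparent diffusion coefficient from the monoexponential DWI model
    S(b) = S0 * exp(-b * ADC), given signals at b = 0 and b (s/mm^2)."""
    return math.log(s0 / sb) / b

# Synthetic signal intensities (illustrative values only)
s0 = 1000.0      # signal at b = 0
s_400 = 500.0    # signal at b = 400 s/mm^2
s_800 = 280.0    # signal at b = 800 s/mm^2

adc_0_400 = adc(s0, s_400, 400)  # "low b-factor" ADC
adc_0_800 = adc(s0, s_800, 800)  # higher b-factor ADC

print(f"ADC(0-400) = {adc_0_400 * 1e3:.2f} x 10^-3 mm^2/s")
print(f"ADC(0-800) = {adc_0_800 * 1e3:.2f} x 10^-3 mm^2/s")
```

With these synthetic signals the low b-factor ADC comes out larger, mirroring the ADC0-400 &gt; ADC0-800 finding: microcapillary perfusion contributes more to signal decay at low b-values, inflating the apparent coefficient.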
Instruction: Can low risk cardiac patients be 'fast tracked' to Phase IV community exercise schemes for cardiac rehabilitation? Abstracts: abstract_id: PUBMED:19616860 Can low risk cardiac patients be 'fast tracked' to Phase IV community exercise schemes for cardiac rehabilitation? A randomised controlled trial. Background: A prospective single blinded randomised controlled trial within a university hospital NHS Trust was undertaken to determine if fast tracking low risk cardiac rehabilitation patients, under the supervision of an exercise instructor, is superior in the medium term to conventional service delivery. Methods: 100 low risk cardiac rehabilitation patients were randomised to either a conventional Phase III hospital group or to a fast-tracked group in a community scheme led by an exercise instructor. Both groups undertook once weekly supervised exercise sessions for the duration of six weeks. Both groups were also encouraged to continue with Phase IV and were reassessed at six months. The primary outcome measure was Incremental Shuttle Walking Test (ISWT) distance. Secondary health related quality of life measures were also analysed. Results: ISWT distance statistically significantly increased over time (f=26.80, p&lt;0.001) for both groups. No between group differences were observed (f=0.03, p=0.87). All domains of the MacNew quality of life questionnaire and five domains of the Short Form 36 showed statistical mean score improvements over time (p&lt;0.05). Continued attendance at Phase IV at six months was statistically significantly higher in the fast track group (p=0.04). At six months all attendees of Phase IV had a clinically and statistically significant mean improvement in ISWT distance in comparison to non-attendees (mean difference 40.38 m, 95%CI 4.20 to 76.57, p=0.03). Conclusions: The fast track service model of cardiac rehabilitation is effective and offers the additional benefit of greater medium term adherence to exercise. 
abstract_id: PUBMED:30050748 Validity of the Low-Impact Dance for exercise-based cardiac rehabilitation program. Purpose: The aim of this study was to evaluate the oxygen uptake in patients with cardiovascular disease during the low-impact dance program and to compare the findings with the values at peak oxygen uptake (VO2) and aerobic threshold (AT). Methods: The study included 19 patients with cardiovascular disease [age, 68.3±8.7 years; left-ventricular ejection fraction, 60.3%±8.7%; peak VO2, 6.6±1.1 metabolic equivalents (METs)] who were receiving optimal medical treatment. Their heart rate and VO2 were monitored during cardiopulmonary exercise testing (CPET) and during the low-impact dance. The dance involved low-impact dynamic sequences. The patients completed two patterns of low-impact dance, and metabolic gas exchange measurements were obtained using a portable ergospirometer carried in a backpack during the dance sessions. Results: The mean values of VO2 (4.0±0.2 METs and 3.9±0.3 METs) and those of heart rate (105.2±2.9 bpm and 96.8±2.6 bpm) during the dance program did not differ significantly from the AT value (4.5±0.2 METs) obtained in CPET. The median (and interquartile range) RPE reported after the dance exercise trials was 11 (9-13). No signs of overexertion were observed in any of the patients during either dance exercise trial. Conclusions: The results suggest that it is reasonable to consider the low-impact dance program as an aerobic exercise program in cardiac rehabilitation. Our findings have important implications for exercise training programs in the cardiac rehabilitation setting and for future studies. abstract_id: PUBMED:28662008 Evaluation of oxidative-reduction markers of blood in patients with acute coronary syndromes (ACS) subjected to cardiac rehabilitation Reactive oxygen species play a role in the pathogenesis of atherosclerosis.
When the dynamic balance between their production and antioxidant defense mechanisms is disturbed, an undesirable consequence follows: oxidative stress. Excessive exercise is one of the factors that can disrupt this balance. Aim: The aim of the study was to evaluate markers of oxidation-reduction processes in the blood of patients with ACS undergoing rehabilitation in a hospital setting. Materials And Methods: The study included 25 patients after STEMI ACS, 19 men and 6 women, aged 51.5±6.5 years, who underwent rehabilitation in the Department of Internal Medicine and Cardiac Rehabilitation, University Hospital im. WAM in Lodz. Blood samples were taken after an initial exercise test (I) and after the final exercise test (IV). The following were measured: SOD-1, CAT and GPX in red blood cells, plasma antioxidant activity (TAS), and the concentration of MDA in red blood cells. The cardiac rehabilitation program included 15 interval training sessions, each lasting 40-45 minutes. Results: The results were analysed statistically, with p&lt;0.05 considered significant. Cardiac rehabilitation had no significant effect on the activity of GPX or SOD-1, the concentration of MDA, or the antioxidant activity of plasma. Rehabilitation had a significant effect only on CAT activity (p=0.002). Conclusions: Properly conducted cardiac rehabilitation does not disturb the oxidation-reduction balance of the blood in patients with ACS. Exercise should be selected in such a way that this balance is maintained. abstract_id: PUBMED:27755513 Leukocytosis and decreased glomerular filtration rate increase the risk of cardiac rehabilitation failure in post-STEMI patients Recently, in the prognostic assessment of ACS patients, attention has been paid to hyperglycemia, leukocytosis, decreased GFR, anemia and thrombocytopenia, as they are believed to worsen the prognosis.
Aim: The aim of this study was to evaluate the usefulness of basic laboratory blood tests in predicting the failure of in-hospital cardiac rehabilitation in post-STEMI patients, using an index of the authors' own design. Materials And Methods: The study comprised 100 post-STEMI patients, 70 men and 30 women, aged 60.1±10.3 years, admitted for cardiac rehabilitation (phase II) to the Department of Internal Medicine and Cardiac Rehabilitation, WAM University Hospital in Lodz. On admission, fasting blood cell count, glucose and creatinine levels were determined (the latter for GFR assessment). The following results were considered abnormal: glucose ≥100 mg/dl, GFR&lt;60 ml/min/1.73 m2, WBC&gt;10x10(3)/μl, RBC&lt;4x10(6)/μl, PLT&lt;150x10(3)/μl. An exercise test was performed twice in all patients, before and after completion of phase II of cardiac rehabilitation, to assess its effects. Results: Based on logistic regression analysis and the individual odds ratios (OR) of the tested blood parameters, their prognostic impact on the risk of cardiac rehabilitation failure was determined. This risk was defined as a post-STEMI patient's inability to tolerate any workload increment (0 Watt) between the initial and final exercise test despite the applied cardiac rehabilitation program; this concerned 54 of the 100 post-STEMI patients. Leukocytosis (WBC&gt;10x10(3)/μl) and GFR&lt;60 ml/min/1.73 m2 were the most significant determinants of cardiac rehabilitation failure (OR=6.65 and OR=3.43, respectively; p=0.05). They were therefore used to construct a cardiac rehabilitation index. The occurrence of either leukocytosis or decreased GFR in post-STEMI patients was associated with a 4.5-fold increase in the risk of cardiac rehabilitation failure, whereas the simultaneous occurrence of both was associated with an over 20-fold increase in this risk.
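For readers unfamiliar with the odds-ratio arithmetic behind such an index, a minimal sketch follows. The counts are hypothetical, chosen only for illustration; the study's ORs of 6.65 and 3.43 came from logistic regression modelling, not from this raw 2x2 calculation:

```python
def odds_ratio(exposed_fail, exposed_ok, unexposed_fail, unexposed_ok):
    """Unadjusted odds ratio from a 2x2 table: the odds of rehabilitation
    failure with the risk marker divided by the odds without it."""
    return (exposed_fail / exposed_ok) / (unexposed_fail / unexposed_ok)

# Hypothetical counts for a marker such as leukocytosis (illustration only)
or_marker = odds_ratio(exposed_fail=20, exposed_ok=5,
                       unexposed_fail=34, unexposed_ok=41)
print(f"OR = {or_marker:.2f}")  # prints: OR = 4.82
```

An OR above 1 means the marker is associated with higher odds of failure; combining two such markers (as the study's index does) can multiply the estimated risk, which is why co-occurring leukocytosis and low GFR carried a far larger OR than either alone.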
Conclusions: The results of peripheral blood laboratory tests are useful in predicting cardiac rehabilitation effects. Leukocytosis and decreased GFR (&lt;60 ml/min/1.73 m2) are the strongest determinants of the risk of cardiac rehabilitation failure (phase II) in post-STEMI patients. The cardiac rehabilitation failure index can be useful in developing an individualized rehabilitation program for post-STEMI patients. abstract_id: PUBMED:36818148 Comparison of the effects of the time of home-based cardiac rehabilitation program on the changes in cardiometabolic risk factors in patients with phase-IV myocardial infarction: A randomized controlled trial. Background: It seems that the time of performing cardiac rehabilitation is important in determining the risk of cardiac complications in patients with myocardial infarction (MI). The present study aimed to investigate the effects of a home-based cardiac rehabilitation program (HCRP) conducted in either the morning or evening on cardiometabolic risk factors in phase IV (maintenance) MI patients. Methods: In this randomized controlled clinical trial, 80 patients with MI were divided into 2 groups of intervention and control (40 individuals per group). Patients in each group were categorized into morning and evening subgroups (20 individuals per subgroup). The therapeutic regimen in the intervention group included HCRP, routine medications, and exercise and walking programs for 8 weeks. Patients in the control group received routine treatments for 8 weeks. Cardiovascular risk factors comprising cardiac troponin I (cTnI), mean platelet volume (MPV), and C-reactive protein (CRP), and cardiometabolic indicators including cholesterol (Cho), high-density lipoprotein (HDL), low-density lipoprotein (LDL), triglyceride (TG), and the maximum rate of oxygen consumption (VO2 max) were measured for all patients before and after the intervention.
Results: Our results showed significant reductions in Cho, TG, HDL, LDL, VO2 max, CRP, and MPV (P &lt; 0.05) in the group performing HCRP in the evening compared with the morning group. Conclusion: Performing HCRP in the evening, compared with morning, can be significantly more effective in improving the levels of cardiometabolic risk factors in patients with MI. Therefore, it is recommended that rehabilitation programs be implemented in these patients in evening shifts. abstract_id: PUBMED:35122871 Resistance exercise for cardiac rehabilitation. Lean mass abnormalities are highly prevalent in patients referred for cardiac rehabilitation (CR). As such, current guidelines recommend incorporating resistance exercise (RE) into the exercise prescription of Phase II-IV CR. The effects of RE on health-related outcomes in patients with cardiovascular (CV) disease (CVD) have not been extensively investigated in comparison to aerobic exercise, the traditional modality of exercise implemented in CR. The purpose of this review is to highlight the growing prevalence of lean mass abnormalities such as dynapenia and sarcopenia in CVD and briefly outline the contributing pathophysiology of these impairments as potential targets for RE training. An update on the current evidence pertaining to the effects of RE on exercise capacity, skeletal muscle strength, body composition, CV health, and quality of life in CR patient populations is provided. The current recommendations for RE training in CR are discussed. Future directions for research and clinical practice in this field are highlighted, and included the need to identify the most efficacious principles of resistance training for different health related outcomes in CVD, as well as the suggested drive towards a 'personalized medicine' approach to exercise prescription in CR. abstract_id: PUBMED:26180562 Is Cardiac Rehabilitation Exercise Feasible for People with Mild Cognitive Impairment? 
Background: Exercise is a promising strategy to prevent dementia, but no clinically supervised exercise program is widely available to people with mild cognitive impairment (MCI). The objective was to survey health professionals to assess the feasibility of using cardiac rehabilitation exercise programs for MCI populations. Methods: We distributed surveys to: 1) health professionals working in cardiac rehabilitation exercise programs (36/72 responded); and 2) physicians who treat MCI (22/32 responded). Questions addressed clinician and clinic characteristics and feasibility of referring and accommodating people with MCI. Results: Most cardiac rehabilitation exercise programs currently treat people with MCI (61.1%). Nearly all were willing and able to accept people with MCI and comorbid vascular risk (91.7%), though only a minority could accept MCI without vascular risk (16.7%). Although most physicians recommend exercise to people with MCI (63.6%), few referred patients with MCI to programs or people to guide exercise (27.3%). However, all physicians (100%) would refer patients with MCI to a cardiac rehabilitation exercise program. Conclusions: Our study supports cardiac rehabilitation exercise programs as a feasible model of exercise for patients with MCI with vascular risk. Patients with and without vascular risk could likely be accommodated if program mandates were expanded. abstract_id: PUBMED:29465559 Cardiac rehabilitation in acute myocardial infarction patients after percutaneous coronary intervention: A community-based study. Background: Acute myocardial infarction (AMI) is one of the leading causes of death and physical disability worldwide. However, the development of community- based cardiac rehabilitation (CR) in AMI patients is hysteretic. Here, we aimed to evaluate the safety and efficacy of CR applied in the community in AMI patients who underwent percutaneous coronary intervention (PCI). 
Methods: A total of 130 ST-segment elevation myocardial infarction (STEMI) patients after PCI were randomly divided into 2 groups in the community, a rehabilitation group (n = 65) and a control group (n = 65). Cardiac function, 6-minute walk distance, exercise time and steps, and cardiovascular risk factors were monitored and compared before and after the intervention in the 2 groups. EpiData 3.1 software was used for data entry and SPSS 16.0 for statistical analysis. Results: After the planned rehabilitation intervention, the rehabilitation group showed better results than the control group. The rehabilitation group had a significant improvement in recurrent angina and readmission rates (P &lt; .01). Left ventricular ejection fraction (LVEF) of the rehabilitation group showed improvement in phase II (t = 4.963, P &lt; .01) and phase III (t = 11.802, P &lt; .01), and the New York Heart Association (NYHA) classification recovered to within class II, a significant difference compared with baseline (Z = 7.238, P &lt; .01). Six-minute walk distance, aerobic exercise time, and step counts all met rehabilitation targets in the rehabilitation group in phases II and III, with distinct variation between the two phases. The rehabilitation group also had better cardiovascular risk factor results than the control group (P &lt; .05). Conclusion: Community-based CR after PCI, using simple but safe exercise methods, can improve AMI patients' quality of life, including cardiac ejection fraction, exercise tolerance, and physical status. It must be emphasized that these results depend on close cooperation between cardiologists and general practitioners, and the cooperation of patients and their families should not be ignored. The rehabilitation program we used is feasible, safe, and effective.
abstract_id: PUBMED:27353128 A review of guidelines for cardiac rehabilitation exercise programmes: Is there an international consensus? Background: Cardiac rehabilitation is an important component in the continuum of care for individuals with cardiovascular disease, providing a multidisciplinary education and exercise programme to improve morbidity and mortality risk. Internationally, cardiac rehabilitation programmes are implemented through various models. This review compared cardiac rehabilitation guidelines in order to identify any differences and/or consensus in exercise testing, prescription and monitoring. Methods: Guidelines, position statements and policy documents for cardiac rehabilitation, available internationally in the English language, were identified through a search of electronic databases and government and cardiology society websites. Information about programme delivery, exercise testing, prescription and monitoring were extracted and compared. Results: Leading cardiac rehabilitation societies in North America and Europe recommend that patients progress from moderate- to vigorous-intensity aerobic endurance exercise over the course of the programme, with resistance training included as an important adjunct, for maintaining independence and quality of life. North American and European guidelines also recommend electrocardiograph-monitored exercise stress tests. Guidelines for South America and individual European nations typically include similar recommendations; however, those in the United Kingdom, Australia and New Zealand specify lower-intensity exercise and less technical assessment of functional capacity. Conclusion: Higher-intensity aerobic training programmes, supplemented by resistance training, have been recommended and deemed safe for cardiac rehabilitation patients by many authorities. 
Based on research evidence, this may also provide superior outcomes for patients and should therefore be considered when developing an international consensus for exercise prescription in cardiac rehabilitation. abstract_id: PUBMED:36106736 Exercise-based cardiac rehabilitation is associated with reduced risk of myocardial infarction In patients with coronary artery disease, exercise-based cardiac rehabilitation (EBCR) is recommended to decrease all-cause and cardiovascular mortality, hospitalisation and risk of myocardial infarction and to improve aerobic capacity and muscle strength and endurance. Before starting the EBCR program, a pre-exercise screening is conducted. Exercise is individually prescribed based on tests of aerobic and muscular strength/endurance, as part of a comprehensive and medically supervised program. A post-exercise assessment is performed to evaluate the effects of exercise and to provide an exercise prescription to encourage life-long exercise. Answer: Yes, low-risk cardiac patients can be 'fast tracked' to Phase IV community exercise schemes for cardiac rehabilitation. A randomized controlled trial found that fast-tracking low-risk cardiac rehabilitation patients to a community scheme led by an exercise instructor was effective and resulted in greater medium-term adherence to exercise compared to conventional Phase III hospital group rehabilitation. Both groups showed statistically significant improvements in the Incremental Shuttle Walking Test (ISWT) distance and quality of life measures, but the fast track group had a higher continued attendance at Phase IV at six months (PUBMED:19616860).
Instruction: Do health behaviours cluster in a working population in New Zealand? Abstracts: abstract_id: PUBMED:23540326 Do health behaviours cluster in a working population in New Zealand? Issue Addressed: This study examines whether adherence to healthy weight, physical activity, and fruit and vegetable recommendations clusters in a working population. Methods: An online Health Risk Assessment (HRA) was administered to 1,296 (36%) employees in nine organisations across New Zealand. Clustering was defined as the co-prevalence of behaviours above that which was expected by the laws of probability. Results: Less than half the participants met physical activity guidelines (44.5%) or maintained a healthy weight (46.7%), and 29.4% consumed five or more servings of fruit and vegetables per day. Just 3.8% of participants met all three recommendations, compared to an expected prevalence of 7.6%. There was no clustering of health behaviours, with no difference between expected and observed prevalence (observed/expected (O/E) ratios between 0.68 and 0.93). Conclusion: Few people adhere to two or more simple public health messages recommended to reduce risk of chronic disease. abstract_id: PUBMED:17461007 Do healthy and unhealthy behaviours cluster in New Zealand? Objective: To describe the co-occurrence and clustering of healthy and unhealthy behaviours in New Zealand. Method: Data were sourced from the 2002/03 New Zealand Health Survey. Behaviours selected for analysis were tobacco use, quantity and pattern of alcohol consumption, level of physical activity, and intake of fruit and vegetables. Clustering was defined as co-prevalence of behaviours greater than that expected based on the laws of probability. Co-occurrence was examined using multiple logistic regression modelling, while clustering was examined in a stratified analysis using age and (where appropriate) ethnic standardisation for confounding control.
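The "laws of probability" definition used in both abstracts can be made concrete: if behaviours are independent, the expected co-prevalence is simply the product of the individual prevalences, and clustering is present when the observed/expected (O/E) ratio exceeds 1. A small sketch with hypothetical prevalences (not the studies' figures):

```python
from math import prod

# Hypothetical prevalences of three healthy behaviours (illustration only)
prevalences = [0.50, 0.40, 0.30]

# Expected co-prevalence under independence: the product of the prevalences
expected = prod(prevalences)

observed = 0.04              # hypothetical observed co-prevalence
o_e = observed / expected    # O/E < 1: behaviours co-occur less than chance

print(f"expected = {expected:.2%}, O/E = {o_e:.2f}")  # prints: expected = 6.00%, O/E = 0.67
```

Under this definition an O/E near 1 means no clustering (the pattern reported for healthy behaviours in these studies), while O/E well above 1, as reported later for some unhealthy behaviour combinations, indicates genuine clustering.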
Results: Approximately 29% of adults enjoyed a healthy lifestyle characterised by non-use of tobacco, non- or safe use of alcohol, sufficient physical activity and adequate fruit and vegetable intake. This is only slightly greater than the prevalence expected if all four behaviours were independently distributed through the population i.e. little clustering of healthy behaviours was found. By contrast, 1.5% of adults exhibited all four unhealthy behaviours and 13% exhibited any combination of three of the four unhealthy behaviours. Unhealthy behaviours were more clustered than healthy behaviours, yet Maori exhibited less clustering of unhealthy behaviours than other ethnic groups and no deprivation gradient was seen in clustering. Discussion: The relative lack of clustering of healthy behaviours supports single issue universal health promotion strategies at the population level. Our results also support targeted interventions at the clinical level for the 15% with 'unhealthy lifestyles'. Our finding of only limited clustering of unhealthy behaviours among Maori and no deprivation gradient suggests that clustering does not contribute to the greater burden of disease experienced by these groups. abstract_id: PUBMED:26944753 The Clustering of Lifestyle Behaviours in New Zealand and their Relationship with Optimal Wellbeing. Purpose: The purpose of this research was to determine (1) associations between multiple lifestyle behaviours and optimal wellbeing and (2) the extent to which five lifestyle behaviours-sleep, physical activity, sedentary behaviour, sugary drink consumption, and fruit and vegetable intake-cluster in a national sample. Method: A national sample of New Zealand adults participated in a web-based wellbeing survey. Five lifestyle behaviours-sleep, physical activity, sedentary behaviour, sugary drink consumption, and fruit and vegetable intake-were dichotomised into healthy (meets recommendations) and unhealthy (does not meet recommendations) categories. 
Optimal wellbeing was calculated using a multi-dimensional flourishing scale, and binary logistic regression analysis was used to calculate the relationship between multiple healthy behaviours and optimal wellbeing. Clustering was examined by comparing the observed and expected prevalence rates (O/E) of healthy and unhealthy two-, three-, four-, and five-behaviour combinations. Results: Data from 9425 participants show those engaging in four to five healthy behaviours (23 %) were 4.7 (95 % confidence interval (CI) 3.8-5.7) times more likely to achieve optimal wellbeing compared to those engaging in zero to one healthy behaviour (21 %). Clustering was observed for healthy (5 %, O/E 2.0, 95 % CI 1.8-2.2) and unhealthy (5 %, O/E 2.1, 95 % CI 1.9-2.3) five-behaviour combinations and for four- and three-behaviour combinations. At the two-behaviour level, healthy fruit and vegetable intake clustered with all behaviours, except sleep which did not cluster with any behaviour. Conclusion: Multiple lifestyle behaviours were positively associated with optimal wellbeing. The results show lifestyle behaviours cluster, providing support for multiple behaviour lifestyle-based interventions for optimising wellbeing. abstract_id: PUBMED:32066443 TeamMate: a longitudinal study of New Zealand working farm dogs. I. Methods, population characteristics and health on enrolment. Background: Working farm dogs are invaluable on New Zealand sheep and beef farms. To date no study describing farm dog population and health has included information about incidence of illness and injury, or risk factors affecting health and career duration. This paper describes the methodology and initial results from TeamMate, a longitudinal study that was designed to address this gap. We describe the study population, husbandry practices, and prevalence of clinical abnormalities on enrolment. 
Methods: Data about the farms, owners, husbandry practices and dogs were collected on farm at approximately 6-month intervals. All dogs over 18 months old and in full work were enrolled. Dogs were given physical examinations by veterinarians. On examination all abnormalities were noted, regardless of clinical significance. Results: Six hundred forty-one working farm dogs and 126 owners were enrolled from the South Island of New Zealand. Forty-nine percent of dogs were Heading dogs (314 of 641) and 48% Huntaways (308 of 641). Median age of dogs was 4 years (range 1.5-14) and median body condition score (BCS) was four on a 9-point scale (interquartile range (IQR) 3-5). Fifty-four percent of dogs were male (345 of 641), and 6% (41 of 641) were neutered. Eighty-one percent of owners (102 of 126) fed dogs commercial biscuits and meat sourced on farm. Forty-four percent of dogs (279 of 641) had bedding in their kennel, 14% (55 of 393) had insulated kennels, 69% (442 of 641) had been vaccinated and 33% (213 of 641) were insured. Clinical abnormalities were found in 74% of dogs (475 of 641). Common abnormalities involved the musculoskeletal system (43%, 273 of 641), skin (including scars and callouses; 42%, 272 of 641), and oral cavity (including worn and broken teeth; 35%, 227 of 641). Conclusions: Our results expand on those from previous surveys and indicate that musculoskeletal illness and injury, and skin trauma are the most commonly seen clinical abnormalities in working farm dogs. These results will provide a baseline for investigation of incidence and risk factors for illness, injury, retirement and death in New Zealand working farm dogs. abstract_id: PUBMED:36195954 Sociodemographic differences in 24-hour time-use behaviours in New Zealand children. Background: The time that children spend in physical activity, sedentary behaviour, and sleep each day (i.e., 24-h time-use behaviours), is related to physical and mental health outcomes. 
Currently, there is no comprehensive evidence on New Zealand school-aged children's 24-h time-use behaviours, adherence to the New Zealand 24-h Movement Guidelines, and how these vary among different sociodemographic groups. Methods: This study utilises data from the 8-year wave of the Growing Up in New Zealand longitudinal study. Using two Axivity AX3 accelerometers, children's 24-h time-use behaviours were described from two perspectives: activity intensity and activity type. Compositional data analysis techniques were used to explore the differences in 24-h time-use compositions across various sociodemographic groups. Results: Children spent on average, 31.1%, 22.3%, 6.8%, and 39.8% of their time in sedentary, light physical activity, moderate-to-vigorous physical activity, and sleep, respectively. However, the daily distribution of time in different activity types was 33.2% sitting, 10.8% standing, 7.3% walking, 0.4% running, and 48.2% lying. Both the activity intensity and activity type compositions varied across groups of child ethnicity, gender, and household income or deprivation. The proportion of children meeting each of the guidelines was 90% for physical activity, 62.5% for sleep, 16% for screen time, and 10.6% for the combined guidelines. Both gender and residence location (i.e., urban vs. rural) were associated with meeting the physical activity guideline, whereas child ethnicity, mother's education and residence location were associated with meeting the screen time guideline. Child ethnicity and mother's education were also significantly associated with the adherence to the combined 24-h Movement Guidelines. Conclusions: This study provided comprehensive evidence on how New Zealand children engage in 24-h time-use behaviours, adherence to the New Zealand 24-h Movement Guidelines, and how these behaviours differ across key sociodemographic groups. 
These findings should be considered in designing future interventions for promoting healthy time-use patterns in New Zealand children. abstract_id: PUBMED:12542810 Comparison of health-risk behaviours among students in alternative high schools from New Zealand and the USA. Objectives: To describe the prevalence of health-risk behaviours of New Zealand alternative education students, and to compare these behaviours with those of similar students in the USA. Methods: Thirty-six alternative education schools in the northern region of New Zealand were surveyed. A total of 269 students completed a youth health questionnaire using laptop computers. These data were compared to data from an equivalent population of alternative education students in the USA. Results: Alternative education students from New Zealand and the USA engage in similar high levels of health-risk behaviours. Female students in New Zealand are at particularly high risk of poor health and social outcomes due to high levels of alcohol and marijuana use, driving under the influence of alcohol and a high prevalence of risky sexual behaviours. Such health-risk behaviours place alternative education students at greater risk of some of the leading causes of morbidity and mortality in both youth and adult populations. Conclusions: Findings from the present study support the need for specific health policies and programmes for alternative high school students. Providers of New Zealand alternative education should be aware that female students are at particularly high risk of many health-risk behaviours. abstract_id: PUBMED:31885232 Atherogenic lipid profile and health behaviours in women post-menopause working in agriculture. Introduction And Objective: The risk of lipid disorders, including atherogenic dyslipidemia, is significantly higher in women after menopause than in the general population.
The aim of the work was to investigate the correlation between health behaviours and the occurrence of lipid disorders in women after menopause working in agriculture. Material And Methods: The study was conducted in the years 2015-2016 and included 843 post-menopausal women working in agriculture. The following were used: a questionnaire including socio-demographic data, laboratory lipid tests, and an inventory of health behaviours. The following were estimated: logistic regression models for serum lipid concentrations versus the frequency of health behaviours in the examined women. Results: An adverse lipid profile was found in over half of the post-menopausal women working in agriculture, whereas the frequency of health behaviours was estimated at an average level, although the frequency of correct eating habits and health practices was significantly lower than that of preventive behaviours and positive psychological attitudes. A correlation was found between the frequency of health behaviours and the occurrence of lipid disorders in women after menopause working in agriculture: more frequent health practices co-existed with a lower concentration of total cholesterol and a higher concentration of HDL-cholesterol, and more frequent preventive behaviours co-existed with a lower concentration of LDL-cholesterol. Women with higher concentrations of triglycerides undertook pro-health practices relatively more often. Conclusions: The study revealed a high prevalence of lipid disorders in postmenopausal women working in agriculture. More effective health education programmes are necessary in the area of reducing the risk factors of CVD in the population of women working in agriculture. abstract_id: PUBMED:27507724 'Poorly defined': unknown unknowns in New Zealand Rural Health. There is a considerable mismatch between the population that accesses rural healthcare in New Zealand and the population defined as 'rural' using the current Statistics New Zealand rural and urban categorisations.
Statistics New Zealand definitions (based on population size or density) do not accurately identify the population of New Zealanders who actually access rural health services. In fact, around 40% of people who access rural health services are classified as 'urban' under the Statistics New Zealand definition, while a further 20% of people who are currently classified as 'rural' actually have ready access to urban health services. Although there is some recognition that current definitions are suboptimal, the extent of the uncertainty arising from these definitions is not widely appreciated. This mismatch is sufficient to potentially undermine the validity of both nationally collated statistics and any research undertaken using Statistics New Zealand data. Under these circumstances it is not surprising that the differences between rural and urban health care found in other countries with similar health services have been difficult to demonstrate in New Zealand. This article explains the extent of this mismatch and suggests how definitions of rural might be improved to allow a better understanding of New Zealand rural health. abstract_id: PUBMED:2079918 Impressions of women's health in New Zealand. In this paper the author presents a personal view of women's health in New Zealand. As in other countries, women are working toward a sane, humane and ecological model of health care, but the male, hyper-expansionist model of health care is likely to continue to receive priority funding at least for some years to come. abstract_id: PUBMED:10073156 Recommendations on population screening for colorectal cancer in New Zealand. Members of the National Health Committee Working Party on Population Screening for Colorectal Cancer. The National Advisory Committee on Health and Disability invited a working party to make recommendations on population screening for colorectal cancer in New Zealand.
Recent results from randomised controlled trials of screening with guaiac faecal occult blood tests have provided evidence that population screening could reduce mortality from colorectal cancer. However, given the modest potential level of benefit, the considerable commitment of health sector resources, and the small but real potential for harm, the working party does not recommend population screening for colorectal cancer with faecal occult blood tests in New Zealand. The working party does not recommend pilot colorectal cancer screening programmes in New Zealand because pilot programmes cannot address the issues of concern: the modest potential benefit and the small, but real, potential for harm. The working party does not recommend faecal occult blood testing as a screening test for colorectal cancer in average-risk individuals outside a population screening programme. Those requesting screening by faecal occult blood test should be given information about the potential risks and benefits. Follow-up bowel investigations in the public health system cannot be guaranteed without an increased allocation of resources. As there is as yet no evidence from randomised controlled trials that screening with flexible sigmoidoscopy, colonoscopy or double-contrast barium enema produces a reduction in colorectal cancer mortality, the working party does not recommend population screening with these modalities. Wider consultation and further consideration should be undertaken to develop appropriate advice on surveillance recommendations for groups identified to be at increased risk of colorectal cancer. These decisions should be reviewed as evidence of benefit from new types of faecal occult blood test and other screening modalities becomes available. The working party recognises that colorectal cancer is an important cause of morbidity and mortality and recommends that New Zealand participate in international research in this area.
Answer: Health behaviours do not appear to cluster significantly in a working population in New Zealand. A study examining adherence to healthy weight, physical activity, and fruit and vegetable recommendations in New Zealand employees found that less than half of the participants met the guidelines for physical activity and maintaining a healthy weight, and only 29.4% consumed the recommended servings of fruit and vegetables per day. Only 3.8% of participants met all three recommendations, which was lower than the expected prevalence of 7.6%. The study concluded that there was no clustering of health behaviours, as the observed prevalence did not differ significantly from what was expected by probability (O/E ratio of 0.68-0.93) (PUBMED:23540326). Another study using data from the 2002/03 New Zealand Health Survey found little clustering of healthy behaviours, with approximately 29% of adults living a healthy lifestyle characterized by non-use of tobacco, non- or safe use of alcohol, sufficient physical activity, and adequate fruit and vegetable intake. This prevalence was only slightly greater than what would be expected if the behaviours were independently distributed. By contrast, unhealthy behaviours were more clustered than healthy behaviours, although Maori exhibited less clustering of unhealthy behaviours than other ethnic groups, and no deprivation gradient was seen in clustering (PUBMED:17461007). These findings suggest that while unhealthy behaviours may show some clustering, healthy behaviours do not tend to cluster significantly in the working population of New Zealand. This supports the use of single-issue universal health promotion strategies at the population level and targeted interventions at the clinical level for those with 'unhealthy lifestyles' (PUBMED:17461007).
Instruction: Does the availability of single cigarettes promote or inhibit cigarette consumption? Abstracts: abstract_id: PUBMED:19671535 Does the availability of single cigarettes promote or inhibit cigarette consumption? Perceptions, prevalence and correlates of single cigarette use among adult Mexican smokers. Background: Single cigarette use and its implications have rarely been studied among adults. Objective: To assess perceptions, prevalence and correlates of single cigarette purchase behaviour and its relation to harm reduction. Design: Focus group transcripts and cross-sectional data were analysed. Setting And Participants: Focus groups among convenience samples of adult smokers in two Mexican cities and a population-based sample of 1079 adult smokers from the International Tobacco Control Policy Evaluation Project in four Mexican cities. Main Outcome Measures: Purchase of single cigarettes last time cigarettes were bought, frequency of purchasing single cigarettes in the previous month and intention to quit in the next 6 months. Results: Focus group data indicated that smokers bought single cigarettes as a harm reduction strategy. Survey data indicated that 38% of participants purchased single cigarettes in the last month and 10% purchased them the last time they bought cigarettes, with more frequent consumption among young adults and those with lower income. Purchasing single cigarettes was independently associated with the frequency of using single cigarettes to reduce consumption and, less consistently, with the frequency of being cued to smoke after seeing single cigarettes for sale. Using single cigarettes to reduce consumption was positively associated with quit intention, whereas being cued to smoke by single cigarettes was negatively associated with quit intention. Conclusions: Study results suggest that some adult Mexican smokers purchase single cigarettes as a method to limit, cut down on and even quit smoking. 
Nevertheless, promotion of the availability of single cigarettes as a harm reduction strategy could provide additional smoking cues that undermine quit attempts and promote youth smoking. abstract_id: PUBMED:30306015 "I'm gonna get me a loosie" Understanding single cigarette purchases by adult smokers in a disadvantaged section of New York City. This study seeks to update and expand our understanding of the perceptions and purchasing patterns of smokers of single cigarettes ('loosies') in disadvantaged urban areas. Semi-structured guides were used in thirteen focus groups with 67 self-identified adult smokers from the South Bronx section of New York City in summer 2013. There is wide availability of single cigarettes in the South Bronx, with legitimate stores overwhelmingly being the preferred venue for purchases. Single cigarettes are sold at higher per-unit prices than illicit packs. However, buyers of single cigarettes can achieve cost savings compared to legal, fully taxed cigarette packs. Apart from cost-savings, smokers opt for single cigarettes to reduce their personal cigarette consumption. There is a general perception of market resilience despite law enforcement intervention. However, law enforcement has a limiting effect on access to single cigarettes outside of an individual smoker's immediate neighborhood. The findings suggest that single cigarette sales are an important element of the illicit cigarette market in disadvantaged communities which should not be ignored in future research on the nature and extent of cigarette tax avoidance and evasion. abstract_id: PUBMED:29975879 Comparing cigarette and e-cigarette dependence and predicting frequency of smoking and e-cigarette use in dual-users of cigarettes and e-cigarettes. Introduction: The 4-item Patient-Reported Outcomes Measurement Information System Nicotine Dependence Item Bank is a psychometrically sound measure for assessing cigarette (PROMIS) and e-cigarette dependence (PROMIS-E). 
We evaluated whether dual-users of cigarettes and e-cigarettes self-report experiencing different levels of dependence on each product. We subsequently examined whether cigarette and e-cigarette dependence are associated with the frequency of using each product in dual-users. Methods: Dual-users completed an online survey in Summer 2017 (n = 326; 49.7% male, 85.3% White, mean age 38.17 [13.08] years). Measurement invariance of the PROMIS and PROMIS-E was evaluated. Mean differences in cigarette and e-cigarette dependence then were examined. The correlation between cigarette and e-cigarette dependence also was examined. Finally, one-way MANOVA was used to evaluate how cigarette and e-cigarette dependence relate to past-month frequency of e-cigarette use and cigarette smoking. Results: The PROMIS and the PROMIS-E were scalar measurement invariant, and, on average, dual-users reported stronger dependence on cigarettes than on e-cigarettes. Cigarette and e-cigarette dependence were related, yet distinct constructs (r = 0.35), suggesting that dual-users can discriminate between dependence on each product. Stronger cigarette dependence predicted more frequent past-month smoking and less frequent past-month vaping. Stronger e-cigarette dependence predicted more frequent past-month vaping and less frequent smoking. Conclusions: Overall, dual-users reported stronger dependence on cigarettes than on e-cigarettes. However, dependence on each product was associated with increased use of each respective product and with less frequent use of the other product. Future research using the PROMIS can evaluate how potential FDA regulations could reduce nicotine dependence across products. abstract_id: PUBMED:33894795 Dual Use of Cigarettes and JUUL: Trajectory and Cigarette Consumption. Objectives: Many smokers who start using ENDS engage in dual use of cigarettes and ENDS. 
This paper examines time trends in dual use after a JUUL Starter Kit purchase, and changes in cigarette consumption among those remaining dual users. Methods: In the ADJUSST study, a cohort of adult smokers who purchased a JUUL Starter Kit was followed naturalistically 6 times over 12 months. Trends in dual use and reported complete switching away from cigarettes (i.e., no past 30-day smoking), and changes in daily average cigarettes per day from baseline at 6 and 12 months were examined. Results: Dual use declined over time as complete switching increased. By 12 months, 43.2% reported dual using, and 51.2% reported switching away from cigarettes. Among dual users, approximately 60% reduced their cigarette consumption substantially (by ≥ 50%); average reductions in cigarette consumption were > 80%. Only a small minority (3%-10%) substantially increased cigarette consumption. Conclusions: Dual use of smoking and JUUL declines over time as switching away from smoking increases. This suggests that, for most smokers, dual use is a transitional state leading to switching completely away from cigarettes. Furthermore, dual use is often marked by substantial reductions in cigarette consumption.
Objective: This study documents factors that predict changes in daily frequency of commercial tweets about e-cigarettes using time series modeling techniques. Methods: We analyzed data on the daily frequency of commercial tweets about e-cigarettes collected between January 1, 2017, and December 31, 2020. We fit the data to an autoregressive integrated moving average (ARIMA) model and unobserved components model (UCM). Four measures assessed model prediction accuracy. Predictors in the UCM include days with events related to the US Food and Drug Administration (FDA), non-FDA-related events with significant importance such as academic or news announcements, weekday versus weekend, and the period when JUUL maintained an active Twitter account (ie, actively tweeting from their corporate Twitter account) versus when JUUL stopped tweeting. Results: When the 2 statistical models were fit to the data, the results indicate that the UCM was the best modeling technique for our data. All 4 predictors included in the UCM were significant predictors of the daily frequency of commercial tweets about e-cigarettes. On average, brand advertisement and marketing of e-cigarettes on Twitter was higher by more than 150 advertisements on days with FDA-related events compared to days without FDA events. Similarly, more than 40 commercial tweets about e-cigarettes were, on average, recorded on days with important non-FDA events compared to days without such events. We also found that there were more commercial tweets about e-cigarettes on weekdays than on weekends and more commercial tweets when JUUL maintained an active Twitter account. Conclusions: e-Cigarette companies promote their products on Twitter. Commercial tweets were significantly more likely to be posted on days with important FDA announcements, which may alter the narrative about information shared by the FDA. There remains a need for regulation of digital marketing of e-cigarette products in the United States. 
abstract_id: PUBMED:31876439 Daily Cigarette Consumption and Urine Cotinine Level between Dual Users of Electronic and Conventional Cigarettes, and Cigarette-Only Users. The aim of this study was to identify whether dual users of conventional cigarettes and e-cigs were naturally reducing the amount of their cigarette smoking while they were using e-cigs, and whether their urine cotinine level was similar to or lower than that of conventional cigarette-only smokers. We used data from the Korean National Health and Nutrition Examination Survey (KNHANES), a nationally representative sample collected in 2014-2016. We selected 2,442 participants who were over 19 years old and were current users of conventional cigarettes and e-cigs. Among the total sample, 12.8% were dual users of e-cigs and conventional cigarettes. Conventional cigarette-only users smoked 14.26 (±23.19) cigarettes a day, while dual users of electronic and conventional cigarettes smoked 14.46 (±7.88) cigarettes a day (p < .001). Mean urine cotinine levels in conventional cigarette-only users and dual users were 1250.35 ng/mL (±832.75) and 1364.95 ng/mL (±827.96), respectively (p < .001). A multiple linear regression analysis identified that dual users' cotinine level was 136.80 ng/mL higher than that of conventional cigarette smokers (p = .011). Unlike the argument that e-cigs can be used as a smoking cessation aid, we found that using e-cigs might be less likely to support quitting smoking or reducing cigarette consumption.
These characteristics provide a unique opportunity to study the relation between large changes in economic activity, cigarette price and demand for cigarettes in a relatively isolated socialist economy. Study Design: This is an observational epidemiological study. Methods: Data were obtained on the annual price of a packet of cigarettes and the mean number of cigarettes consumed per adult living in Cuba from 1980 to 2014. Descriptive and regression analysis were used to explore the relationship between cigarette consumption and price in Cuba. Results: In 1980, the mean price of a packet of cigarettes was 1.53 Cuban peso (CUP) in 1997 prices and the mean annual per capita consumption was 2237 cigarettes. In 2014, the mean price had increased to 5.57 CUP (1997 prices) per packet of cigarettes, and consumption had fallen to 1527 cigarettes per capita. There were significant negative associations between annual cigarette consumption and both price and living through an economic depression. The elasticity was approximately -0.31 with price, and living through an economic depression was also associated with lower consumption of cigarettes (a reduction of 9%, 95% confidence intervals -0.18 to -0.001). Conclusions: Higher cigarette pricing, along with other public health interventions, are required to protect the national population from the adverse effects of tobacco smoke exposure. abstract_id: PUBMED:36844153 Effect of cigarette prices on cigarette consumption in Ghana. Introduction: Noncommunicable diseases are on the rise globally, with tobacco consumption being a major risk factor. Reducing tobacco consumption is an important step towards reducing the incidence and prevalence of many noncommunicable diseases. Tax and price measures have been proposed as tobacco control tools. This study investigated the link between cigarette prices and cigarette consumption in Ghana. Methods: Annual time series data for the period 1980-2016 were used. 
The data came from diverse sources, including WHO, World Bank, and tobacco industry documents. Dynamic Ordinary Least Squares (DOLS), cointegration techniques, and three-stage least squares (3SLS) were used to analyze the data. Results: After controlling for education, income, and population growth, we estimated that the price elasticity of cigarette demand is between -0.35 and -0.52 and statistically significant at 1% level. In the short run, the price elasticity is -0.1. Another variable that significantly reduced cigarette consumption during the period was education, with an elasticity between -1.7 and -2.7. Conclusion: Cigarette demand in Ghana is influenced by cigarette prices and education. We conclude that tobacco taxes that significantly raise retail prices of cigarettes and higher education (including health education) will help reduce cigarette consumption. abstract_id: PUBMED:28780356 Then and now: Consumption and dependence in e-cigarette users who formerly smoked cigarettes. Electronic cigarette use, or vaping, continues to be a focus for regulators and policy makers in public health, particularly since it can compete with or be a substitute for smoking. This study investigated characteristics of nicotine dependence and consumption in a sample of vapers who formerly smoked cigarettes. We recruited 436 (80% male) vapers from several internet discussion forums; 95% of whom previously smoked, but ceased after commencing vaping. These participants completed a retrospective version of the Fagerström Test for Nicotine Dependence (FTND-R), as well as a version modified to suit current vaping (FTND-V), along with measures of consumption. Nicotine dependence appears to reduce markedly when smokers transition to vaping. However, 'decoupling' is observed in the relationship between consumption and dependence in vaping, and the FTND-V showed inadequate psychometric properties. 
Older and female vapers tend to employ a low-power, higher nicotine-concentration style of vaping. Overall, nicotine concentration tended to increase over time, although this effect was moderated by users' intentions to reduce their intake. Indicators of smoking addiction do not appear to be applicable to vaping, with respect to both internal consistency and relationship to consumption. This suggests that motivations for vaping are less dominated by nicotine delivery (negative reinforcement), and may be driven more by positive reinforcement factors. Nevertheless, e-liquid nicotine concentration was associated, albeit weakly, with dependence among e-cigarette users. Finally, vapers are a heterogeneous group with respect to style of consumption, with a high-power/lower nicotine set-up more common among younger men. abstract_id: PUBMED:36853319 Retailer's Density and Single Stick Cigarette's Accessibility among School-Age Children in Indonesia. Objective: The density of single-stick cigarette sales is related to the increase in tobacco epidemic-related diseases. This study aims to provide evidence of retailers' density and radius around the school location, accessibility of single-stick cigarette selling among school-age children, and retailers' response regarding the restriction policy options in urban areas in Indonesia. Methods: It is a cross-sectional study. The retailers' spatial density and the radius around schools in Daerah Khusus Ibukota (DKI) Jakarta Province were investigated using Google Maps and Google Street View (GSV). The coordinates of retailers and schools were geo-coded onto a kernel density map. The accessibility of single-stick cigarettes among children and restriction policy options for cigarette selling were derived from random sampling using surveys of 64 retailers based on Google Data results. Result: Virtual walking using Google Maps and GSV identified 8,371 retailers in DKI Jakarta.
There were approximately 15 cigarette retailers per 1 km², and an average of approximately one cigarette retailer per 1,000 residents. There were 456 (21.67%) retailers within a radius of ≤ 100 meters around elementary schools, with an even higher proportion around junior high school locations: 167 (26.05%) retailers. Cigarettes are easily accessible to children because the price is relatively low, at Rp1,500 (US$0.11) per stick. In addition, 58.1% of retailers allowed customers to buy on credit. Eleven percent of cigarette retailers intended to reduce the sale of cigarettes if a prohibition on single-stick cigarette sales were applied. Conclusion: Cigarette retailers were very dense and single-stick cigarettes were still accessible to children in Indonesia. A prohibition on single-stick cigarette sales should be added to future tobacco control measures in developing countries such as Indonesia. Answer: The availability of single cigarettes appears to have a complex relationship with cigarette consumption, with evidence suggesting both promotion and inhibition effects depending on the context and individual smoker behaviors. On one hand, the availability of single cigarettes can promote cigarette consumption. For instance, the perception that single cigarettes are widely available in disadvantaged urban areas, such as the South Bronx, indicates that single cigarettes are an important element of the illicit cigarette market and can facilitate tax avoidance and evasion (PUBMED:30306015). Additionally, the accessibility of single cigarettes among school-age children in Indonesia suggests that the high density of cigarette retailers around schools and the low cost of single sticks make cigarettes easily accessible to children, potentially promoting early initiation and increased consumption (PUBMED:36853319). On the other hand, some smokers purchase single cigarettes as a harm reduction strategy, intending to limit or cut down on their smoking.
In Mexico, for example, adult smokers reported buying single cigarettes to reduce consumption, and this behavior was positively associated with the intention to quit smoking (PUBMED:19671535). Similarly, smokers in New York City reported buying single cigarettes to reduce their personal cigarette consumption (PUBMED:30306015). However, the relationship between single cigarette availability and consumption is not straightforward. While some smokers may use single cigarettes to control or reduce their smoking, the promotion of single cigarettes as a harm reduction strategy could also provide additional smoking cues that undermine quit attempts and promote youth smoking (PUBMED:19671535). Moreover, dual users of conventional cigarettes and e-cigarettes did not show a natural reduction in cigarette smoking while using e-cigarettes, and their urine cotinine levels were higher than those of conventional cigarette-only smokers, suggesting that using e-cigarettes might not support quitting smoking or reducing cigarette consumption (PUBMED:31876439). In summary, the availability of single cigarettes can both promote and inhibit cigarette consumption, with some smokers using them as a means to reduce or control their smoking, while others may find that single cigarettes increase accessibility and cues to smoke, particularly among youth and in disadvantaged communities.
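The price elasticities reported in the abstracts above (about -0.31 for Cuba, -0.35 to -0.52 for Ghana) can be illustrated with a crude two-point calculation. The sketch below is hypothetical: it uses only the Cuban endpoint figures quoted in PUBMED:31276890 (price 1.53 to 5.57 CUP per pack in 1997 prices, consumption 2237 to 1527 cigarettes per capita), not the regression models the authors actually fitted, and the function name is mine.

```python
import math

# Two-point (log arc) price elasticity of demand: the percentage change in
# quantity demanded per 1% change in price, computed from endpoint values.
def log_arc_elasticity(q0: float, q1: float, p0: float, p1: float) -> float:
    """Return log(Q1/Q0) / log(P1/P0), a crude elasticity estimate."""
    return math.log(q1 / q0) / math.log(p1 / p0)

# Cuba, 1980 vs 2014, figures as reported in the abstract (PUBMED:31276890).
e = log_arc_elasticity(q0=2237, q1=1527, p0=1.53, p1=5.57)
print(f"crude price elasticity ≈ {e:.2f}")  # prints: crude price elasticity ≈ -0.30
```

This naive endpoint estimate happens to land near the reported -0.31, but the published figure comes from a regression over the full 1980-2014 series that also controlled for living through the economic depression.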
Instruction: Can the Child Health Utility 9D measure be useful in oral health research? Abstracts: abstract_id: PUBMED:26146798 Can the Child Health Utility 9D measure be useful in oral health research? Background: Cost-utility analysis using the quality-adjusted life year (QALY) as an outcome measure to inform healthcare resource allocation is becoming more common. The Child Health Utility 9D (CHU9D) enables the calculation of utility values in children. Children were involved throughout its development to ensure it is child-centred. Aim: To determine whether the CHU9D is responsive to the changing components of the dmfs+DMFS index score in children receiving dental care over a 1-year period. Design: The study sample comprised children aged 6 to 9 years old who were enrolled in a split-mouth, placebo-controlled randomised controlled trial. All children had a comprehensive clinical examination including radiographs, and health-related quality of life (HRQoL) was measured using the CHU9D at baseline and at 1 year. Descriptive statistics were followed by bivariate analyses before effect sizes were calculated. A negative binomial model was fitted to assess whether the utility score predicted the components of decayed and filled teeth (combined primary and permanent teeth). Results: Eighty (92%) children completed the CHU9D at baseline and follow-up. They presented with a mean baseline d3mfs+D3MFS of 6.7 (SD = 7.1), which rose to 7.3 (SD = 7.0) at follow-up. The mean CHU9D score improved from 0.88 to 0.90 from baseline to follow-up. No statistically significant relationships were found between caries status and the CHU9D score. Conclusion: The CHU9D was found to be unresponsive to the changing components of dental caries experience. abstract_id: PUBMED:30157857 Canadian French translation and linguistic validation of the child health utility 9D (CHU9D).
Background: Several preference-based measures are validated for adults in cost-utility analysis, but fewer are available for children, and many researchers have criticized the quality of pediatric economic studies. The objective of this study was to perform a Canadian French translation and linguistic validation of the Child Health Utility 9D (CHU9D) that was conceptually equivalent to the original English version for use in Canada. Methods: The translation and linguistic validation were realized by ICON Clinical Research (UK) Limited in association with the developer of the CHU9D and Canadian collaborators. This was done in accordance with industry standards and the guidance of the Food and Drug Administration (FDA) for patient-reported outcome (PRO) instruments. Five steps were considered: concept elaboration; forward translation; back translation; linguistic validation; proofreading and final verification. Results: The CHU9D Canadian French translation and linguistic validation were realized without any major difficulties. Only 3 changes were made after the forward translation and 5 after the back translation. The result of the back translation was very similar to the original English version. Six additional changes suggested by the developer team were accepted, and the linguistic validation with five children led to 2 additional changes. Most changes simply replaced a single word with better-sounding Canadian French. Conclusion: We produced a Canadian French translation and cross-cultural adaptation of the Child Health Utility 9D (CHU9D). Before being used in clinical settings and research projects, the final Canadian French translation needs to be validated for the metrological qualities of reliability and validity. abstract_id: PUBMED:25027722 The potential of the Child Health Utility 9D Index as an outcome measure for child dental health.
Background: The Child Health Utility 9D (CHU9D) is a relatively new generic child health-related quality of life (HRQoL) measure, designed to be completed by children, which enables the calculation of utility values. The aim is to investigate the use of the CHU9D Index as an outcome measure for child dental health in New Zealand. Method: A survey was conducted of children aged between 6 and 9 years attending for routine dental examinations in community clinics in Dunedin (New Zealand) in 2012. The CHU9D, a HRQoL measure, was used, along with the Child Perceptions Questionnaire (CPQ), a validated oral health-related quality of life (OHRQoL) measure. Socio-demographic characteristics (sex, age, ethnicity and household deprivation) were recorded. Dental therapists undertook routine clinical examinations, with charting recorded for each child for decayed, missing and filled deciduous teeth (dmft) at the d3 level. Results: One hundred and forty 6-to-9-year-olds (50.7% female) took part in the study (93.3% participation rate). The mean d3mft was 2.4 (SD = 2.6; range 0 to 9). Both CHU9D and CPQ detected differences in the impact of dental caries, with scores in the expected direction: children who presented with caries had higher scores (indicating poorer OHRQoL) than those who were free of apparent caries. Children with no apparent caries had a higher mean CHU9D score than those with caries (indicating better HRQoL). The difference for the CPQ was statistically significant, but for CHU9D the difference was not significant. When the two indices were compared, there was a significant difference in mean CHU9D scores by the prevalence of CPQ and subscale impacts, with children experiencing no impacts having mean CHU9D scores closer to 1.0 (representing perfect health). Conclusion: The CHU9D may be useful in dental research. Further exploration in samples with different caries experience is required.
The use of the CHU9D in child oral health studies will enable the calculation of quality-adjusted life years (QALYs) for use in economic evaluation. abstract_id: PUBMED:30374777 Scoring the Child Health Utility 9D instrument: estimation of a Chinese child and adolescent-specific tariff. Purpose: To derive children's and adolescents' preferences for health states defined by the Chinese version of the Child Health Utility 9D (CHU9D-CHN) instrument in China that can be used to estimate quality-adjusted life years (QALYs) for economic evaluation. Methods: A profile case best-worst scaling (BWS) and a time trade-off (TTO) method were combined to derive a Chinese-specific tariff for the CHU9D-CHN. The BWS survey recruited students from primary and high schools using a multi-stage random sampling method and was administered in a classroom setting, whilst the TTO survey adopted an interviewer-administered conventional TTO task and was administered to a convenience sample of undergraduate students. A latent class modelling framework was adopted for analysing the BWS data. Results: Two independent surveys were conducted in Nanjing, China, including a valid sample of 902 students (mean age 13 years) from the BWS survey and a valid sample of 38 students (mean age 18 years) from the TTO survey. The poolability of the best and the worst responses was rejected, and the optimal model was based on the best responses only. The optimal model suggests the existence of two latent classes. The BWS estimates were further re-anchored onto the QALY scale using the TTO-generated health state values via a mapping approach. Conclusion: This study provides further insights into the use of the BWS method to generate health state values with young people and highlights the potentially different decision rules that young people may employ for determining best vs. worst choices in this context.
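Tariffs such as the one estimated above anchor each CHU9D health state to a utility weight on a scale where 1.0 is full health and 0.0 is dead; QALYs are then the sum of utility weight times time spent in each state. A minimal sketch of that arithmetic (the utility values and durations below are hypothetical illustrations, not figures from these studies):

```python
def qalys(states):
    """Quality-adjusted life years: sum of (utility weight x years in state).

    Each state is a (utility, years) pair, with utility anchored at
    1.0 = full health and 0.0 = dead, as in preference-based tariffs
    like those derived for the CHU9D.
    """
    return sum(utility * years for utility, years in states)

# Hypothetical child: utility 0.88 for 1 year before treatment,
# 0.90 for 2 years after (values chosen to echo the CHU9D score
# range reported above, not taken from any study).
print(round(qalys([(0.88, 1.0), (0.90, 2.0)]), 2))  # 2.68
```

Economic evaluations compare the incremental QALYs of an intervention against its incremental cost; this is why a child-specific tariff is a prerequisite for cost-utility analysis in pediatric dentistry.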
abstract_id: PUBMED:29730851 Psychometric evaluation of the Chinese version of the Child Health Utility 9D (CHU9D-CHN): a school-based study in China. Purpose: The Child Health Utility 9D (CHU9D), a new generic preference-based health-related quality of life (HRQoL) instrument, was developed specifically for application in cost-effectiveness analyses of treatments and interventions for children and adolescents. The main objective of this study was to examine the psychometric properties of the Chinese version of CHU9D (CHU9D-CHN) in a large school-based sample in China. Methods: Data were collected using a multi-stage sampling method from third-to-ninth-grade students in Shaanxi Province, China. Participants self-completed a hard-copy questionnaire including the CHU9D-CHN instrument, the Pediatric Quality of Life Inventory™ 4.0 Generic Core Scales (PedsQL), information on socio-demographic characteristics and self-reported health status. The psychometric properties of the CHU9D-CHN, including internal consistency, 2-week test-retest reliability, and convergent and known-groups validity, were studied. Results: A total of 1912 students participated in the survey. The CHU9D-CHN internal consistency and test-retest reliability were good to excellent with a Cronbach's alpha of 0.77 and an intra-class correlation coefficient of 0.65, respectively. The CHU9D utility scores moderately correlated with the PedsQL total scores (r = .57, P < .001), demonstrating good convergent validity. Differences in CHU9D utility scores among participants with different levels of self-reported general health, health services utilisation and left-behind status demonstrated good construct validity. Conclusion: The findings demonstrated adequate psychometric performance for the CHU9D-CHN. The CHU9D-CHN was a satisfactory, reliable and valid instrument to measure and value HRQoL for children and adolescents in China.
abstract_id: PUBMED:30661626 Pediatric Quality of Life Instruments in Oral Health Research: A Systematic Review. Objective: To identify the generic or disease-specific pediatric quality of life (QoL) instruments used in oral health research among children and adolescents and to provide an overview of these QoL instruments. Methods: A systematic literature search was performed with multiple databases to identify the pediatric QoL instruments used in oral health research. Results: The literature search yielded 872 records; from these, 16 pediatric QoL instruments were identified that had been used among children and adolescents in oral health research. Of these, 11 were oral health-specific QoL instruments and five were generic instruments. Of the 11 oral health-specific QoL instruments, none were multiattribute utility instruments (MAUI), whereas of the five generic instruments, two (Child Health Utility 9D index and EuroQoL-5D youth) were classified as MAUIs. Except for one, all pediatric QoL instruments were published after the year 2000 and the majority originated from the USA (n = 8). Of the 11 oral health-specific QoL instruments, five instruments are designed for the respondent to be a child (i.e., self-report), one uses proxy responses from a parent or guardian, and five instruments have both self and proxy versions. Of the five generic QoL instruments, one uses proxy responses and the other four instruments have both self and proxy versions. Conclusions: This review identified a wide variety of pediatric oral health-specific and generic QoL instruments used in oral health research among children and adolescents. The availability of these QoL instruments provides researchers with the opportunity to select the instrument most suited to address their research question. abstract_id: PUBMED:22788262 Valuation of the Child Health Utility 9D Index.
Background And Objectives: The aim of this study was to test the feasibility of estimating preference weights for all health states defined by the Child Health Utility 9D (CHU9D), a new generic measure of health-related quality of life for children aged 7-11 years. The estimation of preference weights will allow the calculation of QALYs for use in paediatric economic evaluation. Methods: Valuation interviews were undertaken with 300 members of the UK adult general population to obtain preference weights for a sample of the health states in the CHU9D descriptive system. Both standard gamble and ranking valuation methods were used. Regression modelling was undertaken to estimate models that could predict a value for every health state defined by the system. A range of models were tested and were evaluated based on their predictive performance. Results: Models estimated on the standard gamble data performed better than the rank model. All models had a few inconsistencies or insignificant levels and so further modelling was done to estimate a parsimonious consistent regression model using the general-to-specific approach, by combining inconsistent levels and removing non-significant levels. The final preferred model was an ordinary least squares (OLS) model. All the coefficients in this model were significant, there were no inconsistencies and the model had the best predictive performance and a low mean absolute error. Conclusion: This research has demonstrated it is feasible to value the CHU9D descriptive system, and preference weights for each health state can be generated to allow the calculation of QALYs. The CHU9D can now be used in the economic evaluation of paediatric healthcare interventions. Further research is needed to investigate the impact of children's preferences for the health states and what methods could be used to obtain these preferences. abstract_id: PUBMED:9385319 Oral health component of child health services research. 
The June 11 JAMA article outlines a rationale for developing a focus on child health services research and offers several strategies for facilitating the process. Developing a child health research agenda is essential to advancing dental care and professional education and, thus, is relevant to educators as well as researchers and policy makers. Oral health considerations are not likely to have high visibility or priority within that agenda unless dental health services researchers and dental schools become involved. An obvious question that is bound to be raised is "does it make sense to attempt to direct some portion of the already scarce resources available for dental health services research to children's issues?" Many individuals in leadership positions will probably think not; however, with an appropriate strategy, a small number of organized individuals and institutions can make a significant difference. Given the nature of dental diseases and the historical values of the profession, what better place to start? abstract_id: PUBMED:37811777 Comparison of the Early Childhood Oral Health Impact Scale (ECOHIS-4D) and Child Health Utility Index (CHU-9D) in children with oral diseases. Objective: Accurate assessment of child oral health is important for guiding economic evaluations and informing healthcare decision-making. Early Childhood Oral Health Impact Scale (ECOHIS-4D) is a preference-based instrument that measures the oral health-related quality of life of young children. The aim of this study was to compare the utility scores of ECOHIS-4D and Child Health Utility Index (CHU-9D), against an oral health indicator to evaluate which utility score corresponds better with the oral health indicator. Method: The ECOHIS-4D and CHU-9D were applied to 314 parent/child dyads from preschools in a primary healthcare setting in Perth, Western Australia. 
Four parameters were used to assess which instrument corresponds better with the oral health indicator (decayed, missing and filled teeth score; dmft score): (i) discrimination, the ability to discriminate between different clinical severity groups, (ii) external responsiveness, how much the utility values relate to the changes in dmft scores, (iii) correlation, the association between the two instruments and the related dimensions and (iv) differences in the utility values across the two instruments. Results: Most participants (81%) were 2-6 years old, and nearly 50% had a dmft score <3. ECOHIS-4D demonstrated a superior ability to differentiate between dmft severity groups and respond to changes in dmft scores. A significant weak correlation was observed between dmft and ECOHIS-4D (-0.26, 95% CI -0.36 to -0.15) compared to a non-significant very poor correlation between dmft and CHU-9D (0.01, 95% CI -0.12 to 0.10). The utility scores of the two instruments had relatively good agreement towards good health and weak agreement towards poor health. Conclusions: ECOHIS-4D, the oral health-specific instrument, is more sensitive in assessing children's oral health-related quality of life than the generic CHU-9D. Thus, ECOHIS-4D is more appropriate for utility estimates in economic evaluations of oral health-related interventions and resource allocation decision-making. abstract_id: PUBMED:33046101 Association of academic performance, general health with health-related quality of life in primary and high school students in China. Purpose: To explore the association of academic performance and general health status with health-related quality of life (HRQoL) in school-aged children and adolescents in China. Methods: In this cross-sectional study conducted in 2018, students (grades 4-12) were randomly chosen from primary and high schools in Nanjing, China.
HRQoL, the outcome measure, was recorded using the Child Health Utility 9D, while self-rated academic performance and general health were the independent variables. Mixed-effects regression models were applied to compute mean difference (MD) and 95% confidence interval (CI) of HRQoL utility score between students with different levels of academic performance and general health. Results: Totally, 4388 participants completed the study, with a response rate of 97.6%. The mean HRQoL utility score was 0.78 (SD: 0.17). After adjustment for socio-demographic attributes, physical activity, sedentary behavior, dietary patterns, body weight status and class-level clustering effects, students with fair (MD = 0.048, 95% CI 0.019, 0.078) and good (MD = 0.082, 95% CI 0.053, 0.112) self-rated academic performance reported higher HRQoL utility scores than those with poor academic performance, respectively. Meanwhile, students with fair (MD = 0.119, 95% CI 0.083, 0.154) and good (MD = 0.183, 95% CI 0.148, 0.218) self-assessed general health also recorded higher HRQoL utility scores than those with poor health, separately. Consistent findings were observed for participants by gender, school type and residential location. Conclusions: Both self-rated academic performance and general health status were positively associated with HRQoL among Chinese students, and such relationships were independent of lifestyle-related behaviors and body weight status. Answer: The Child Health Utility 9D (CHU9D) measure has been explored for its utility in oral health research, with mixed findings. On one hand, a study found that the CHU9D was unresponsive to changing components of dental caries experience, suggesting it may not be sensitive enough to detect changes in oral health status in children receiving dental care over a 1-year period (PUBMED:26146798). 
However, another study indicated that the CHU9D may be useful in dental research, as it detected differences in the impact of dental caries on health-related quality of life (HRQoL), although the differences were not statistically significant (PUBMED:25027722). Further research is needed to validate the CHU9D in different populations and contexts. For instance, a Canadian French translation and linguistic validation of the CHU9D were completed, but the instrument still requires validation for reliability and validity before being used in clinical settings and research projects in Canada (PUBMED:30157857). Additionally, a Chinese child and adolescent-specific tariff for the CHU9D was derived, which could be used to estimate quality-adjusted life years (QALYs) for economic evaluation, suggesting potential for broader application in health research (PUBMED:30374777). The psychometric evaluation of the Chinese version of the CHU9D (CHU9D-CHN) demonstrated satisfactory reliability and validity, indicating that it could be a reliable and valid instrument to measure and value HRQoL for children and adolescents in China (PUBMED:29730851). Moreover, a systematic review identified the CHU9D as one of the generic pediatric quality of life (QoL) instruments used in oral health research, highlighting its potential use alongside other instruments (PUBMED:30661626). However, a comparison between the Early Childhood Oral Health Impact Scale (ECOHIS-4D) and the CHU9D found that the ECOHIS-4D, an oral health-specific instrument, was more sensitive in assessing children's oral health-related quality of life than the generic CHU9D (PUBMED:37811777). This suggests that while the CHU9D can be useful, oral health-specific instruments may be more appropriate for certain applications in oral health research. 
In conclusion, the CHU9D has potential for use in oral health research, but its sensitivity and responsiveness to changes in oral health status may vary depending on the context and the population being studied. Further validation and comparison with oral health-specific measures are necessary to determine its full utility in this field.
Instruction: Hypermethylation of the CpG islands in the promoter region of the GSTP1 gene in prostate cancer: a useful diagnostic and prognostic marker? Abstracts: abstract_id: PUBMED:15530476 Hypermethylation of the CpG islands in the promoter region of the GSTP1 gene in prostate cancer: a useful diagnostic and prognostic marker? Background: Recently, many studies have focused on the potential diagnostic value of the promoter hypermethylation of the GSTP1 gene in prostate cancer. Method: A total of 144 patients, undergoing eight-core prostatic biopsies for a clinically suspected prostate cancer, were analyzed. Two different tissue samples were collected from the same area of the prostate and then divided for both genomic DNA extraction and pathological examination. In order to perform molecular analysis, prostatic tissue samples were digested with the methylation-sensitive restriction enzyme HpaII and then amplified by conventional polymerase chain reaction (PCR). Results: Prostate cancer was diagnosed in 42/144 patients, and promoter hypermethylation of the GSTP1 gene was detected in 31/42 prostate cancers (sensitivity=74%) and in 2/102 negative specimens (specificity=98%). A significant association was found between GSTP1 promoter hypermethylation and both a Gleason score ≥7 (Fisher's exact P=0.01) and the presence of Gleason grade 4 and/or grade 5 (Fisher's exact P=0.03). Conclusion: Promoter hypermethylation of the GSTP1 gene is a highly specific, but not very sensitive, marker of prostate cancer. Our data showed a significant association between the methylation status of the GSTP1 gene and Gleason score and grade, suggesting a potential prognostic value of this epigenetic DNA alteration. abstract_id: PUBMED:15026333 Hypermethylation of CpG islands in primary and metastatic human prostate cancer. Aberrant DNA methylation patterns may be the earliest somatic genome changes in prostate cancer.
Using real-time methylation-specific PCR, we assessed the extent of hypermethylation at 16 CpG islands in DNA from seven prostate cancer cell lines (LNCaP, PC-3, DU-145, LAPC-4, CWR22Rv1, VCaP, and C42B), normal prostate epithelial cells, normal prostate stromal cells, 73 primary prostate cancers, 91 metastatic prostate cancers, and 25 noncancerous prostate tissues. We found that CpG islands at GSTP1, APC, RASSF1a, PTGS2, and MDR1 were hypermethylated in >85% of prostate cancers and cancer cell lines but not in normal prostate cells and tissues; CpG islands at EDNRB, ESR1, CDKN2a, and hMLH1 exhibited low to moderate rates of hypermethylation in prostate cancer tissues and cancer cell lines but were entirely unmethylated in normal tissues; and CpG islands at DAPK1, TIMP3, MGMT, CDKN2b, p14/ARF, and CDH1 were not abnormally hypermethylated in prostate cancers. Receiver operating characteristic curve analyses suggested that CpG island hypermethylation changes at GSTP1, APC, RASSF1a, PTGS2, and MDR1 in various combinations can distinguish primary prostate cancer from benign prostate tissues with sensitivities of 97.3-100% and specificities of 92-100%. Hypermethylation of the CpG island at EDNRB was correlated with the grade and stage of the primary prostate cancers. PTGS2 CpG island hypermethylation portended an increased risk of recurrence. Furthermore, CpG island hypermethylation patterns in prostate cancer metastases were very similar to the primary prostate cancers and tended to show greater differences between cases than between anatomical sites of metastasis. abstract_id: PUBMED:15029477 GSTP1 CpG island hypermethylation as a molecular marker of prostate cancer Background: Prostate cancer is the most commonly diagnosed cancer in men in Europe and North America. Despite its high prevalence, the molecular mechanism of its underlying development and progression is poorly understood.
Many studies have revealed multiple molecular alterations during prostate cancer carcinogenesis. GSTP1 CpG island hypermethylation is one of the molecular changes that occur during carcinogenesis. Methods: We evaluated the role of GSTP1 CpG island hypermethylation in prostatic cancers and discussed its possible role as a molecular biomarker of prostate cancer. Results: Studies have shown that GSTP1 CpG island hypermethylation is present in about 90% of prostatic carcinomas. The DNA alteration was also detectable in body fluids such as blood, urine, ejaculate, or prostatic secretions. One study showed hypermethylation in histologically unsuspicious lymph nodes in surgical specimens in patients with biochemical PSA (prostate-specific antigen) recurrence. Additionally, it is possible to distinguish between normal prostatic tissue, benign prostatic hyperplasia, and prostate cancer. Conclusions: The detection of GSTP1 CpG island hypermethylation serves as a molecular marker in prostate cancer screening, detection, and diagnosis. It may even provide information on prostate cancer prognosis. However, prospective trials to evaluate its predictive value are necessary. abstract_id: PUBMED:17291695 Hypermethylation of the CpG islands in the promoter region flanking GSTP1 gene is a potential plasma DNA biomarker for detecting prostate carcinoma. Objectives: To investigate the possibility of identifying DNA hypermethylation in the circulation of prostate cancer patients. Methods: Plasma DNA samples were extracted from 36 prostate cancer patients and 27 benign prostate hyperplasia (BPH) cases. After extensive methylation-sensitive restriction enzyme digestion, the DNA samples were subjected to real-time quantitative PCR amplification. Dissociation curve analysis was applied to determine if hypermethylation occurred in the promoter region flanking the GSTP1 gene, a well-documented epigenetic event among prostate cancer cells, in these plasma DNA samples.
Results: 11 of 36 prostate cancer patients showed a positive peak pattern, indicating that methylation changes had occurred. Concordant data were obtained from the corresponding paraffin-embedded tissue samples available from the Tumor Bank. Twenty-five of the 27 BPH cases showed negative results, suggesting that no methylation changes had occurred in the CpG islands in these cases. Conclusions: We have successfully identified prostate cancer genome hypermethylation in the peripheral circulation in prostate cancer patients with this protocol. This method can effectively distinguish BPH from prostate neoplasm. Although a larger number of samples are necessary to validate the capability of the protocol in practice, using plasma DNA samples is an ideal non-invasive approach for prostate neoplasm detection. abstract_id: PUBMED:17139617 Hypermethylation of CpG island loci and hypomethylation of LINE-1 and Alu repeats in prostate adenocarcinoma and their relationship to clinicopathological features. Promoter CpG island hypermethylation is an important carcinogenic event in prostate adenocarcinoma. Regardless of tissue type, human cancers have in common both focal CpG island hypermethylation and global genomic hypomethylation. The present study evaluated CpG island loci hypermethylation and LINE-1 and Alu repeat hypomethylation in prostate adenocarcinoma, analysed the relationship between them, and correlated these findings with clinicopathological features. We examined 179 cases of prostate adenocarcinoma and 30 cases of benign prostate hypertrophy for the methylation status of 22 CpG island loci and the methylation levels of LINE-1 and Alu repeats using methylation-specific polymerase chain reaction and combined bisulphite restriction analysis, respectively. The following 16 CpG island loci were found to display cancer-related hypermethylation: RASSF1A, GSTP1, RARB, TNFRSF10C, APC, BCL2, MDR1, ASC, TIG1, RBP1, COX2, THBS1, TNFRSF10D, CD44, p16, and RUNX3.
Except for the last four CpG island loci, hypermethylation of each of the remaining 12 CpG island loci displayed a close association with one or more of the prognostic parameters (i.e. preoperative serum prostate-specific antigen level, Gleason score sum, and clinical stage). Prostate adenocarcinoma with hypermethylation of each of ASC, COX2, RARB, TNFRSF10C, MDR1, TIG1, RBP1, NEUROG1, RASSF1A, and GSTP1 showed a significantly lower methylation level of Alu or LINE-1 than prostate adenocarcinoma without hypermethylation. In addition, hypomethylation of Alu or LINE-1 was closely associated with one or more of the above prognostic parameters. These data suggest that in tumour progression a close relationship exists between CpG island hypermethylation and the hypomethylation of repetitive elements, and that CpG island hypermethylation and DNA hypomethylation contribute to cancer progression. abstract_id: PUBMED:27593931 Methyl-CpG-binding protein MBD2 plays a key role in maintenance and spread of DNA methylation at CpG islands and shores in cancer. Cancer is characterised by DNA hypermethylation and gene silencing of CpG island-associated promoters, including tumour-suppressor genes. The methyl-CpG-binding domain (MBD) family of proteins bind to methylated DNA and can aid in the mediation of gene silencing through interaction with histone deacetylases and histone methyltransferases. However, the mechanisms responsible for eliciting CpG island hypermethylation in cancer, and the potential role that MBD proteins play in modulation of the methylome remain unclear. Our previous work demonstrated that MBD2 preferentially binds to the hypermethylated GSTP1 promoter CpG island in prostate cancer cells. Here, we use functional genetic approaches to investigate if MBD2 plays an active role in reshaping the DNA methylation landscape at this locus and genome-wide.
First, we show that loss of MBD2 results in inhibition of both maintenance and spread of de novo methylation of a transfected construct containing the GSTP1 promoter CpG island in prostate cancer cells and Mbd2-/- mouse fibroblasts. De novo methylation was rescued by transient expression of Mbd2 in Mbd2-/- cells. Second, we show that MBD2 depletion triggers significant hypomethylation genome-wide in prostate cancer cells with concomitant loss of MBD2 binding at promoter and enhancer regulatory regions. Finally, CpG islands and shores that become hypomethylated after MBD2 depletion in LNCaP cancer cells show significant hypermethylation in clinical prostate cancer samples, highlighting a potential active role of MBD2 in promoting cancer-specific hypermethylation. Importantly, co-immunoprecipitation of MBD2 shows that MBD2 associates with DNA methyltransferase enzymes 1 and 3A. Together our results demonstrate that MBD2 has a critical role in 'rewriting' the cancer methylome at specific regulatory regions. abstract_id: PUBMED:15800905 Ethnic group-related differences in CpG hypermethylation of the GSTP1 gene promoter among African-American, Caucasian and Asian patients with prostate cancer. The incidence and mortality of prostate cancer (PC) are approximately 2-fold higher among African-Americans as compared to Caucasians and very low in Asians. We hypothesize that inactivation of the GSTP1 gene through CpG methylation plays a role in the pathogenesis of PC, and that its ability to serve as a diagnostic marker differs among ethnic groups. GSTP1 promoter hypermethylation and its correlation with clinico-pathological findings were evaluated in 291 PC (Asian = 170; African-American = 44; Caucasian = 77) and 172 benign prostate hypertrophy samples (BPH) (Asian = 96; African-American = 38; Caucasian = 38) using methylation-specific PCR. In PC cells, 5-aza-dC treatment increased expression of GSTP1 mRNA transcripts.
The methylation of all CpG sites was found in 191 of 291 PC (65.6%), but only in 34 of 139 BPH (24.5%). The GSTP1 hypermethylation was significantly higher in PC as compared to BPH in each ethnic group (p < 0.0001). Logistic regression analysis (PC vs. BPH) showed that African-Americans had a higher hazard ratio (HR) (13.361) compared to Caucasians (3.829) and Asians (8.603). Chi-square analysis showed correlation of GSTP1 hypermethylation with pathological findings (pT categories and higher Gleason sum) in Asian PC (p < 0.0001) but not in African-American and Caucasian PC. Our results suggest that GSTP1 hypermethylation is a sensitive biomarker in African-Americans as compared to that in Caucasians or Asians, and that it strongly influences tumor progression in Asian PC. Ours is the first study investigating GSTP1 methylation differences in PC among African-American, Caucasian and Asian patients. abstract_id: PUBMED:11912447 The use of real-time quantitative polymerase chain reaction to detect hypermethylation of the CpG islands in the promoter region flanking the GSTP1 gene to diagnose prostate carcinoma. Purpose: We developed a real-time, quantitative, methylation-sensitive polymerase chain reaction (PCR) protocol to analyze hypermethylation of the CpG islands in the promoter region of the pi class glutathione-S-transferase gene GSTP1 in prostate cancer tissue. Materials And Methods: A total of 21 prostate cancer and 72 benign prostate hyperplasia (BPH) tissue samples were analyzed. Genomic DNA was digested with restriction enzyme, followed by real-time quantitative PCR amplification. Cycle threshold values were used to determine whether cancer genome DNA was present in these tissues. A cutoff cycle threshold value of 35 was arbitrarily assigned. Samples with a cycle threshold of 35 or less were considered positive for prostate cancer. Conventional nested PCR was also performed for comparison.
Results: The mean cycle threshold values plus or minus standard deviation in prostate cancer and BPH cases were 30.12 +/- 2.88 and 37.77 +/- 2.72, respectively. All prostate cancer samples analyzed showed positive results, while 5 of the 72 BPH samples tested positive. Conventional nested PCR data indicated that 19 of 21 prostate cancer cases were positive for the methylation change, while 71 of 72 BPH cases tested negative. The detection limits of the real-time PCR and nested PCR protocols were determined to be 0.048 and 0.64 ng DNA, respectively. Conclusions: We established a novel protocol for detecting the methylation change in the 5' regulatory sequence flanking the GSTP1 gene. The sensitivity of this protocol was superior to that of conventional nested PCR. The data also suggest that this novel protocol may accurately discriminate prostate carcinoma from BPH. abstract_id: PUBMED:24324057 DNA hypermethylation as a predictor of PSA recurrence in patients with low- and intermediate-grade prostate cancer. Background: DNA CpG island hypermethylation causes gene silencing and is a common event in prostate carcinogenesis and progression. We investigated its role as a possible prognostic marker in patients with PCA Gleason score ≤7. Patients And Methods: We used a quantitative, methylation-specific PCR to analyze methylation patterns at five gene loci (APC, GSTP1, PTGS2, RARbeta and TIG1) in 84 prostate cancer (PCA) tissues (Gleason Score ≤7). Methylation was correlated with established clinico-pathological parameters (preoperative PSA, pathological Gleason score, extraprostatic extension, seminal vesicle penetration, lymph node involvement, surgical margins and age) and PSA recurrence. Results: DNA hypermethylation was frequently detected at APC (95.2%), GSTP1 (84.5%), PTGS2 (100%), RAR-beta (81.0%) and TIG1 (95.2%). DNA hypermethylation was correlated with Gleason Score (p=0.027; PTGS2) and lymph node involvement (p=0.024; RARbeta).
High methylation levels at RARbeta (p=0.023) were a significant predictor of PSA recurrence following radical prostatectomy. Conclusion: The analysis of DNA hypermethylation provides prognostic information in low- and intermediate-grade PCA. abstract_id: PUBMED:16956712 Prognostic value of CpG island hypermethylation at PTGS2, RAR-beta, EDNRB, and other gene loci in patients undergoing radical prostatectomy. Objectives: To evaluate CpG island hypermethylation in a set of candidate genes in prostate cancer (pCA) and its relationship to clinicopathologic parameters and a nomogram predicting prostate-specific antigen (PSA) recurrence after radical prostatectomy. Materials And Methods: Tissues of 78 prostate carcinomas, 32 benign prostate hyperplasias (BPHs), and prostate cell lines (LNCaP, DU145, PC3, BPH-1) were examined with MethyLight polymerase chain reaction at 13 gene loci (APC, CDC6, CTNNB1, E-Cadherin, EDNRB, FGFR2, GSTP1, NAB2, PKCmu, PTGS2, RAR-beta, RASL11A, WWOX). Results: APC, RAR-beta, PTGS2, GSTP1, EDNRB, and CTNNB1 (83%, 71%, 65%, 33%, 14%, 9%, respectively) were methylated in pCA but rarely or not methylated in BPH. NAB2 and CDC6 were hypermethylated frequently in pCA (92%, 67%, respectively) and in BPH (91%, 59%, respectively). FGFR2, WWOX, E-Cadherin, PKCmu, and RASL11A did not display noteworthy methylation in pCA (0-1%) or in BPH. CpG island hypermethylation at APC, retinoic acid receptor beta (RAR-beta), and PTGS2 discriminated with a sensitivity of 65-83% and a specificity of 97-100% between BPH and pCA. The combination of various genes increased the diagnostic power. PTGS2 hypermethylation correlated with seminal vesicle infiltration (p=0.047), capsular penetration (p=0.004), and pT stage (p=0.014). RAR-beta methylation was accompanied by a higher cumulative Gleason score (p=0.042).
The probability of PSA-free survival calculated with a Kattan nomogram correlated inversely with CpG island hypermethylation at EDNRB, RAR-beta, and PTGS2. All prostate cancer cell lines displayed a varying degree of demethylation after 5-aza-2'-deoxycytidine treatment. Conclusions: CpG island hypermethylation at various gene loci is frequent in prostate cancer and can distinguish between neoplastic and noncancerous tissue. Furthermore, hypermethylation at PTGS2, RAR-beta, and EDNRB inversely correlated with PSA-free survival according to a Kattan nomogram and has potential prognostic value. Answer: Hypermethylation of the CpG islands in the promoter region of the GSTP1 gene has been extensively studied as a potential diagnostic and prognostic marker in prostate cancer. Research has shown that GSTP1 CpG island hypermethylation is present in about 90% of prostatic carcinomas and can be detected in body fluids such as blood, urine, ejaculate, or prostatic secretions, suggesting its utility as a molecular marker in prostate cancer screening, detection, and diagnosis (PUBMED:15029477). This alteration is highly specific to prostate cancer, with a specificity of 98%, although its sensitivity is not very high (74%) (PUBMED:15530476). The methylation status of the GSTP1 gene has also been significantly associated with a higher Gleason score and grade, indicating a potential prognostic value (PUBMED:15530476). Furthermore, studies have demonstrated that hypermethylation patterns at GSTP1 and other genes can distinguish primary prostate cancer from benign prostate tissues with high sensitivity (97.3-100%) and specificity (92-100%) (PUBMED:15026333). The hypermethylation of the GSTP1 gene promoter region has also been identified as a potential plasma DNA biomarker for detecting prostate carcinoma, offering a non-invasive approach for prostate neoplasm detection (PUBMED:17291695).
Additionally, hypermethylation of CpG island loci and hypomethylation of LINE-1 and Alu repeats in prostate adenocarcinoma have been correlated with clinicopathological features, suggesting that these epigenetic alterations contribute to cancer progression (PUBMED:17139617). The role of MBD2, a methyl-CpG-binding protein, in the maintenance and spread of DNA methylation at CpG islands and shores in cancer, including the GSTP1 promoter CpG island in prostate cancer cells, further underscores the importance of DNA methylation in the regulation of gene expression in cancer (PUBMED:27593931). Ethnic group-related differences in GSTP1 gene promoter hypermethylation have also been observed, with African-American patients showing a higher sensitivity for this biomarker compared to Caucasians or Asians, and strong influences on tumor progression in Asian prostate cancer (PUBMED:15800905). In conclusion, hypermethylation of the CpG islands in the promoter region of the GSTP1 gene is a useful diagnostic marker for prostate cancer due to its high specificity and ability to distinguish between malignant and benign prostate tissues. It also holds potential as a prognostic marker, as its presence has been associated with higher Gleason scores and more advanced disease stages (PUBMED:15530476; PUBMED:15026333). The detection of GSTP1 hypermethylation in plasma DNA further enhances its clinical utility by providing a non-invasive diagnostic option (PUBMED:17291695). Moreover, the prognostic value of DNA hypermethylation has been explored, with studies indicating that hypermethylation at specific gene loci, including GSTP1, may predict PSA recurrence in patients with low- and intermediate-grade prostate cancer (PUBMED:24324057). 
Additionally, hypermethylation at PTGS2, RAR-beta, and EDNRB has been shown to inversely correlate with PSA-free survival, suggesting that these epigenetic changes could be used to inform prognosis and potentially guide treatment decisions (PUBMED:16956712). Overall, the body of evidence supports the use of GSTP1 CpG island hypermethylation as a molecular marker with significant implications for the diagnosis and prognosis of prostate cancer. However, further research and prospective trials are necessary to fully establish its predictive value and to integrate it into clinical practice (PUBMED:15029477).
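The cutoff-based classification described in the real-time PCR study above (samples with a cycle threshold of 35 or less called positive) can be sketched in a few lines. The helper names below are assumptions for illustration, but the counts (21/21 PC samples positive, 5/72 BPH samples positive) are the ones reported in the abstract:

```python
# Minimal sketch of the cycle-threshold decision rule and the resulting
# sensitivity/specificity, using the counts reported in PUBMED:11912447.

def is_methylation_positive(ct: float, cutoff: float = 35.0) -> bool:
    """Call a sample positive when its real-time PCR cycle threshold (Ct)
    is at or below the cutoff (a lower Ct means more methylated target)."""
    return ct <= cutoff

def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Standard definitions: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Reported counts: all 21 PC samples positive; 5 of 72 BPH samples positive.
sens, spec = sensitivity_specificity(tp=21, fn=0, tn=67, fp=5)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")  # sensitivity=1.000, specificity=0.931
```

The mean Ct values reported (30.12 for PC vs. 37.77 for BPH) fall on opposite sides of the 35-cycle cutoff, which is what makes this simple threshold discriminative.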
Instruction: Does practicing hatha yoga satisfy recommendations for intensity of physical activity which improves and maintains health and cardiovascular fitness? Abstracts: abstract_id: PUBMED:18053143 Does practicing hatha yoga satisfy recommendations for intensity of physical activity which improves and maintains health and cardiovascular fitness? Background: Little is known about the metabolic and heart rate responses to a typical hatha yoga session. The purposes of this study were 1) to determine whether a typical yoga practice using various postures meets the current recommendations for levels of physical activity required to improve and maintain health and cardiovascular fitness; 2) to determine the reliability of metabolic costs of yoga across sessions; 3) to compare the metabolic costs of yoga practice to those of treadmill walking. Methods: In this observational study, 20 intermediate-to-advanced level yoga practitioners, age 31.4 +/- 8.3 years, performed an exercise routine inside a human respiratory chamber (indirect calorimeter) while wearing heart rate monitors. The exercise routine consisted of 30 minutes of sitting, 56 minutes of beginner-level hatha yoga administered by video, and 10 minutes of treadmill walking at 3.2 and 4.8 kph each. Measures were mean oxygen consumption (VO2), heart rate (HR), percentage predicted maximal heart rate (%MHR), metabolic equivalents (METs), and energy expenditure (kcal). Seven subjects repeated the protocol so that measurement reliability could be established. Results: Mean values across the entire yoga session for VO2, HR, %MHR, METs, and energy/min were 0.6 L/min; 93.2 beats/min; 49.4%; 2.5; and 3.2 kcal/min, respectively. Results of the ICCs (2,1) for mean values across the entire yoga session for kcal, METs, and %MHR were 0.979, 0.973, and 0.865, respectively.
Conclusion: Metabolic costs of yoga averaged across the entire session represent low levels of physical activity, are similar to walking on a treadmill at 3.2 kph, and do not meet recommendations for levels of physical activity for improving or maintaining health or cardiovascular fitness. Yoga practice incorporating sun salutation postures exceeding the minimum bout of 10 minutes may contribute some portion of sufficiently intense physical activity to improve cardio-respiratory fitness in unfit or sedentary individuals. The measurement of energy expenditure across yoga sessions is highly reliable. abstract_id: PUBMED:22398344 Influence of Hatha yoga on physical activity constraints, physical fitness, and body image of breast cancer survivors: a pilot study. Breast cancer survivors often experience changes in their perception of their bodies following surgical treatment. These changes in body image may increase self-consciousness and perceptions of physical activity constraints and reduce participation in physical activity. While the number of studies examining different types of yoga targeting women with breast cancer has increased, studies thus far have not studied the influence that Hatha yoga has on body image and physical activity constraints. The objective of this study was to explore the changes that occur in breast cancer survivors in terms of body image, perceived constraints, and physical fitness following an 8-week Hatha yoga intervention. This study used a nonrandomized two-group pilot study, comparing an 8-week Hatha yoga intervention with a light exercise group, both designed for women who were at least nine months post-treatment for breast cancer. Both quantitative and qualitative data were collected in the areas of body image, physical activity constraints, and physical fitness. 
Findings indicated that quantitatively, yoga participants experienced reductions in physical activity constraints and improvements in lower- and upper-body strength and flexibility, while control participants experienced improvements in abdominal strength and lower-body strength. Qualitative findings support changes in body image, physical activity constraints, and physical fitness for the participants in the yoga group. In conclusion, Hatha yoga may reduce constraints to physical activity and improve fitness in breast cancer survivors. More research is needed to explore the relationship between Hatha yoga and improvements in body image. abstract_id: PUBMED:25757725 Effects of Modified Hatha Yoga in Industrial Rehabilitation on Physical Fitness and Stress of Injured Workers. Purpose: The purpose of this study was to assess the effects of 8 weeks of modified hatha yoga training on physical fitness and stress level in injured workers. Methods: Eighteen male and female injured workers, aged between 18 and 55 years, participated in this study. They were divided into two groups: a group receiving hatha yoga training in addition to the routine industrial rehabilitation program (HYG: n = 9) and a control group with no yoga training (CG: n = 9). A modified hatha yoga protocol was designed for this population by two certified yoga instructors, approved by a physical therapist, and conducted for 1 h, three times weekly for 8 weeks. Physical fitness variables, including flexibility of the lower back and hamstrings, hand grip strength, and lung capacity, as well as scores of sensitivity to stress, were evaluated at the time of recruitment and after 8 weeks of intervention. Results: The values of all physical fitness variables and stress scores showed no significant difference between the two groups at baseline. Significant post-yoga improvements for the HYG group were noted in flexibility, hand grip strength, and vital capacity (p < 0.05). In contrast, there was no significant change in the CG.
Stress scores did not change as a result of hatha yoga training. Conclusion: An 8-week modified hatha yoga training experience exerted therapeutic effects on physical fitness variables including flexibility of lower back and hamstrings, hand grip strength and vital capacity, but not on stress level in injured workers. These findings indicate that hatha yoga can be a beneficial adjunct to routine physical therapy treatment in industrial rehabilitation programs. abstract_id: PUBMED:34733813 Is Weekly Frequency of Yoga Practice Sufficient? Physiological Effects of Hatha Yoga Among Healthy Novice Women. Beneficial physical and physiological health outcomes of yoga practice are well-supported by empirical data. However, whether weekly frequency of training is sufficient to evoke positive changes is still an open question. The present intervention study investigated the effects of 10 weekly sessions of beginner level hatha yoga with respect to indicators of physical fitness and physiological markers. Eighty-two young women (mean age of 22.0 ± 3.83 years) participated in the study. The yoga group (N = 49) attended a yoga course consisting of 10 sessions (1.5 h each) on a weekly basis. The control group (N = 33) did not receive any intervention. BMI, body fat percentage, balance (one-leg-stand test with open and closed eyes, functional reach test), flexibility (side bend test, modified sit and reach test), core muscle strength (plank test) as well as resting heart rate (HR), and heart rate variability (HRV) were assessed 1 week before and after the course. Both frequentist and Bayesian analyses showed an improvement in flexibility and balance in the yoga group compared to the control group. The yoga group also showed increased core muscle strength. No changes with respect to BMI, body fat percentage, resting HR and HRV were found. Ninety-minute beginner-level hatha yoga classes were characterized by an average HR of 93.39 beats/min and an energy consumption of 195 kcal.
The present findings suggest that a 10-session hatha yoga training held weekly leads to improvements in balance, flexibility and core muscle strength among healthy young women. However, for changes in BMI, body fat percentage, resting HR and HRV, longer and/or more intense interventions are needed. abstract_id: PUBMED:25253113 Exploring different types of Hatha yoga for patients with cancer. Yoga has been practiced for more than 5,000 years and is based on the collective experiences of yoga practitioners over time. Western countries and sophisticated medical facilities use this practice as a complementary therapy with standard medical treatments. Yoga has been shown to improve quality of life. Several types of yoga potentially can benefit people with cancer, including Hatha yoga. The type of recommended Hatha yoga is dependent on the physical conditions and fitness level of patients. This article explores the impact of different types of Hatha yoga on various cancer-related symptoms in patients with cancer. The article also provides guidelines for healthcare personnel, particularly nurses, to help choose the right kind of Hatha yoga that suits their patients' needs and interests. Additional information is provided on measures and instructions that are essential for healthcare providers to know before recommending any yoga type to their patients. Evidence of the feasibility and potential efficacy of yoga for patients with cancer is provided. abstract_id: PUBMED:32731011 The effect of Hatha yoga intervention on students' creative ability. There is increasing demand for individual creativity as organizations seek innovative ways to remain relevant. Higher education institutions, particularly business schools, are sensitive to this demand and are constantly in search of innovative ways to enhance the creative ability of their students. Prior studies have shown encouraging results for physical activity-oriented interventions.
Building on this research, this study uses a randomized controlled trial (RCT) to understand whether an acute combinatory intervention, involving both the physical and mental exercises embodied in Hatha yoga, can improve individual creativity. This study uses 92 MBA student participants to investigate the impact of a 20-minute Hatha yoga session intervention against a short 20-minute case study session for the control group. The creative ability of the participants is operationalized through divergent and convergent thinking, which are then assessed through counterbalanced forms of the Guilford Alternate Uses task and the Remote Associates Test, respectively. The results show that while Hatha yoga significantly improves divergent thinking, the control group shows deterioration in divergent thinking. There is no effect on convergent thinking. These findings lend some support to the executive function hypothesis. The study also finds that prodding a person to be more creative on a routine academic task may not enhance their creative ability. abstract_id: PUBMED:26565690 Effects of High-Intensity Hatha Yoga on Cardiovascular Fitness, Adipocytokines, and Apolipoproteins in Healthy Students: A Randomized Controlled Study. Background: Yoga exercises are often used as a form of body and mind exercise to increase performance. However, knowledge about the physiologic effects of performing high-intensity Hatha yoga exercises over a longer time period remains limited. Objective: To investigate the effects of high-intensity yoga (HIY) on cardiovascular fitness (maximal oxygen consumption, estimated from the Cooper running test), ratings of perceived exertion (RPE), heart rate (HR), heart rate recovery (HRR), blood pressure (BP), adipocytokines, apolipoprotein A1 (ApoA1), apolipoprotein B (ApoB), and glycosylated hemoglobin (HbA1c) in healthy students. Methods: The 44 participants (38 women and 6 men; median age, 25 years [range, 20-39 years]) were randomly assigned to an HIY or a control group.
The HIY program was held for 6 weeks (60 minutes once a week). Cardiovascular fitness, RPE, HR, HRR, BP, adipocytokines, HbA1c, ApoA1, and ApoB were measured at baseline and after 6 weeks in both groups. Results: HIY had no significant effects on cardiovascular fitness (mean dose: 390 minutes [range, 210-800 minutes]), HR, HRR, BP, or any of the blood parameters. However, secondary findings showed that ApoA1 (1.47 ± 0.17 to 1.55 ± 0.16 g/L; p = 0.03) and adiponectin (8.32 ± 3.32 to 9.68 ± 3.83 mg/L; p = 0.003) levels increased significantly in the HIY group after 6 weeks. Conclusions: Six weeks of HIY did not significantly improve cardiovascular fitness. However, secondary findings showed that ApoA1 and adiponectin levels increased significantly in the HIY group. This finding suggests that HIY may have positive effects on blood lipids and an anti-inflammatory effect. abstract_id: PUBMED:35784178 Hatha yoga training improves standing balance but not gait in Parkinson's disease. Background And Purpose: Complementary therapies, such as yoga, have been proposed to address gait and balance problems in Parkinson's disease (PD). However, the effects of yoga on gait and static balance have not been studied systematically in people with PD (PWP). Here we evaluated the effects of a 12-week Hatha yoga intervention on biomechanical parameters of gait and posture in PWP. Methods: We employed a pilot randomized controlled trial design with two groups of mild-to-moderate PWP (immediate treatment, waitlist control; N = 10 each; mean Hoehn and Yahr score = 2 for each group). Baseline Unified Parkinson's Disease Rating Scale (UPDRS) motor scores, and gait and postural kinematics including postural sway path length, cadence, walking speed, and turning time were obtained. The immediate treatment group received a 60-min Hatha yoga training twice a week for 12 weeks, while the waitlisted control group received no training.
After 12 weeks, gait and postural kinematics were assessed (post-test for treatment group and second-baseline for waitlist group). Then, the waitlist group received the same yoga training and was evaluated post-training. Results: After Hatha yoga training, UPDRS motor scores improved, with an 8-point mean decrease, which is considered a moderate clinically important change for mild-moderate PD. Sway path length during stance decreased significantly (mean reduction: -34.4%). No significant between-group differences or improvements in gait kinematics were observed. Conclusion: This study showed that a 12-week Hatha yoga training can improve static balance in PWP. We found no evidence that it systematically improves gait performance in PWP. abstract_id: PUBMED:28953821 Practicing Hatha-Yoga, Sense of Coherence and Sense of Agency. Neurophenomenological Approach. Background: A growing body of evidence supports the belief that yoga benefits physical and mental health. The aim of the study is to investigate whether the sense of coherence and sense of agency are more developed in people practicing hatha-yoga than in the group of people who have never practiced yoga. Methods: The TAHE and SOC-29 questionnaires and a short microphenomenological interview were administered to a group of 15 people (8 yoga instructors, 7 in the control group). Results: It has been shown that the study group had significantly higher scores in both the sense of agency and sense of coherence than the control group. In addition, a statistically significant correlation was observed between some of the values mentioned above. Conclusions: There are indications that hatha-yoga practice increases the sense of agency, which in the long run can contribute to better mental health. In order to find a clear and certain link between the sense of coherence and the sense of agency, additional research is necessary.
To determine the metabolic and heart rate (HR) responses of hatha yoga, 26 women (19-40 years old) performed a 30-minute hatha yoga routine of supine lying, sitting, and standing asanas (i.e., postures). Subjects followed identical videotaped sequences of hatha yoga asanas. Mean physiological responses were compared to the physiological responses of resting in a chair and walking on a treadmill at 93.86 m.min(-1) [3.5 miles per hour (mph)]. During the 30-minute hatha yoga routine, mean absolute oxygen consumption (Vo(2)), relative Vo(2), percentage maximal oxygen consumption (%Vo(2)R), metabolic equivalents (METs), energy expenditure, HR, and percentage maximal heart rate (%MHR) were 0.45 L.min(-1), 7.59 ml.kg(-1).min(-1), 14.50%, 2.17 METs, 2.23 kcal.min(-1), 105.29 b.min(-1), and 56.89%, respectively. When compared to resting in a chair, hatha yoga required 114% greater O(2) (L.min(-1)), 111% greater O(2)(ml.kg(-1).min(-1)), 4,294% greater %Vo(2)R, 111% greater METs, 108% greater kcal.min(-1), 24% greater HR, and 24% greater %MHR. When compared to walking at 93.86 m.min(-1), hatha yoga required 54% lower O(2)(L.min(-1)), 53% lower O(2)(ml.kg(-1).min(-1)), 68% lower %Vo(2)R, 53% lower METs, 53% lower kcal.min(-1), 21% lower HR, and 21% lower %MHR. The hatha yoga routine in this study required 14.50% Vo(2)R, which can be considered a very light intensity and significantly lighter than 44.8% Vo(2)R for walking at 93.86 m.min(-1) (3.5 mph). The intensity of hatha yoga may be too low to provide a training stimulus for improving cardiovascular fitness. Although previous research suggests that hatha yoga is an acceptable form of physical activity for enhancing muscular fitness and flexibility, these data demonstrate that hatha yoga may have little, if any, cardiovascular benefit. Answer: Practicing hatha yoga does not appear to satisfy the recommendations for intensity of physical activity required to improve and maintain health and cardiovascular fitness. 
A study observing intermediate-to-advanced level yoga practitioners found that the metabolic costs of yoga averaged across the entire session represent low levels of physical activity, similar to walking on a treadmill at 3.2 kph, and do not meet the recommendations for levels of physical activity for improving or maintaining health or cardiovascular fitness (PUBMED:18053143). Additionally, another study indicated that the intensity of hatha yoga may be too low to provide a training stimulus for improving cardiovascular fitness, as it was considered a very light intensity and significantly lighter than walking at 3.5 mph (PUBMED:16095417). However, hatha yoga may contribute some portion of sufficiently intense physical activity to improve cardio-respiratory fitness in unfit or sedentary individuals if it incorporates sun salutation postures exceeding the minimum bout of 10 minutes (PUBMED:18053143). Moreover, while hatha yoga may not meet the intensity levels for cardiovascular fitness, it has been shown to have other health benefits, such as reducing constraints to physical activity and improving fitness in breast cancer survivors (PUBMED:22398344), exerting therapeutic effects on physical fitness variables in injured workers (PUBMED:25757725), and improving standing balance in individuals with Parkinson's disease (PUBMED:35784178). In summary, while hatha yoga may not fulfill the intensity criteria for cardiovascular fitness improvements, it does offer other physical and mental health benefits and may be a beneficial adjunct to other forms of physical activity.
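The MET figures quoted in these abstracts can be cross-checked against the standard convention that 1 MET corresponds to an oxygen uptake of about 3.5 ml/kg/min. A minimal sketch follows; the helper names are assumptions, and the 3-MET floor for moderate-intensity activity is the commonly used public health threshold, not a value taken from these abstracts:

```python
# Cross-check of reported exercise intensities using the standard
# convention that 1 MET = 3.5 ml O2 per kg per min.

ML_O2_PER_KG_MIN_PER_MET = 3.5

def mets_from_vo2(vo2_ml_kg_min: float) -> float:
    """Convert a relative oxygen uptake (ml/kg/min) to METs."""
    return vo2_ml_kg_min / ML_O2_PER_KG_MIN_PER_MET

def is_at_least_moderate(mets: float, threshold: float = 3.0) -> bool:
    """Moderate-intensity physical activity is conventionally >= 3 METs."""
    return mets >= threshold

# 7.59 ml/kg/min was reported for the 30-minute hatha yoga routine (PUBMED:16095417).
yoga_mets = mets_from_vo2(7.59)
print(round(yoga_mets, 2))              # 2.17, matching the reported 2.17 METs
print(is_at_least_moderate(yoga_mets))  # False: below the moderate-intensity floor
```

Both abstracts' session-average MET values (2.17 and 2.5) sit below the 3-MET moderate-intensity floor, which is consistent with the answer above.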
Instruction: Can incident reporting improve safety? Abstracts: abstract_id: PUBMED:31673644 Video intervention to improve incident reporting among medical trainees. Introduction/objective: Improving graduate medical trainee involvement with patient safety and incident reporting is an important task in teaching hospitals that has been recognised across the country and led to numerous efforts to address barriers to incident reporting. A variety of studies have started to define the reasons why trainees are not optimally involved and interventions that may be helpful. The present study aims to add to this literature by primarily addressing barriers that can be considered 'non-technical', such as fears surrounding potential professional repercussions after submitting a report, perceptions that reporting incidents is not useful, and concerns about anonymity. Methods: Barriers to incident reporting were previously analysed at our institution. A video was produced to directly target the barriers discovered. A 2-hour educational session was delivered which included the video intervention. The educational session was part of the standard patient safety curriculum at our institution. Paper surveys were used to capture changes in perceived barriers to incident reporting. Baseline and postintervention surveys were analysed for changes using t-tests and a p value of <0.05 to determine significance. Survey development included literature review, patient safety expert discussion and cognitive interviews. Results: Perceived knowledge about the reporting process significantly improved after the intervention (t = -4.49; p < 0.05). Attitudes about reporting also significantly improved, with reductions in fear of negative consequences and in concerns about anonymity. Perceptions of reporting being a futile activity were also diminished after the intervention.
Conclusions: This study demonstrates that targeting non-technical barriers to incident reporting with a video intervention is an effective way to improve perceived knowledge and attitude about incident reporting. abstract_id: PUBMED:26246282 Safety incident reporting in emergency radiology: analysis of 1717 safety incident reports. The aim of this article is to describe the incidence and types of safety reports logged in the radiology safety incident reporting system in our emergency radiology section over an 8-year period. The electronic incident reporting system of our institute was searched for the variables in emergency radiology. All reports from April 2006 to June 2014 were included and de-identified. The following event classifications were investigated in radiography, CT, and MRI modalities: diagnostic test orders, ID/documentation/consent, safety/security/conduct, service coordination, surgery/procedure, line/tube, fall, medication/IV safety, employee general incident, environment/equipment, adverse drug reaction, skin/tissue, and diagnosis/treatment. A total of 881,194 emergency radiology examinations were performed during the study period, 1717 (1717/881,194 = 0.19 %) of which resulted in safety reports. Reports were classified into 14 different categories, the most frequent of which were "diagnostic test orders" (481/1717 = 28 % total incident reports), "medication/IV safety" (302/1717 = 18 % total incident reports), and "service coordination" (204/1717 = 12 % total incident reports). X-ray had the highest report rate (873/1717 = 50 % total incident reports), followed by CT (604/1717 = 35 % total incident reports) and MRI (240/1717 = 14 % total incident reports). Forty-six percent of safety incidents (789/1717) caused no harm and did not reach the patient, 36 % (617/1717) caused no harm but reached the patient, 18 % (308/1717) caused temporary or minor harm/damage, and less than 1 % caused permanent or major harm/damage or death.
Our study shows an overall safety incident report rate of 0.19 % in emergency radiology including radiography, CT, and MRI modalities. The most common safety incidents were diagnostic test orders, medication/IV safety, and service coordination. abstract_id: PUBMED:31849549 Patient Safety Incident Reporting In Indonesia: An Analysis Using World Health Organization Characteristics For Successful Reporting. Background: Incident reporting is widely acknowledged as one of the ways of improving patient safety and has been implemented in Indonesia for more than ten years. However, there was no significant increase in the number of reported incidents nationally. The study described in this paper aimed at assessing the extent to which Indonesia's patient safety incident reporting system has adhered to the World Health Organization (WHO) characteristics for successful reporting. Methods: We interviewed officials from 16 organizations at national, provincial and district or city levels in Indonesia. We reviewed several policies, guidelines and regulations pertinent to incident reporting in Indonesia and examined whether the WHO characteristics were covered in these documents. We used NVivo version 9 to manage the interview data and applied thematic analysis to organize our findings. Results: Our study found that there was an increased need for a non-punitive system, confidentiality, expert-analysis and timeliness of reporting, system-orientation and responsiveness. The existing guidelines, policies and regulations in Indonesia, to a large extent, have not satisfied all the required WHO characteristics of incident reporting. Furthermore, awareness and understanding of the reporting system amongst officials at almost all levels were lacking. Conclusion: Despite being implemented for more than a decade, Indonesia's patient safety incident reporting system has not fully adhered to the WHO guidelines. 
There is a pressing need for the Indonesian Government to improve the system, by putting specific regulations in place and by creating a robust infrastructure at all levels to support incident reporting. abstract_id: PUBMED:34754717 Incident reporting among physicians-in-training in Japan: A national survey. Background: Incident reporting can inform hospital safety. However, under-reporting is preventing this. Methods: We conducted a nationwide survey among Japanese physicians-in-training by including a questionnaire in the General Medicine In-Training Examination to assess incident reporting behavior and participation in patient safety lectures. Results: Responses of 6,164 physicians-in-training indicated that although 78% had attended patient safety lectures, 44% had not submitted an incident report in the previous year and 40.6% did not know how to submit an incident report. Conclusions: The discrepancy between attendance at safety courses and incident reporting behavior must be addressed to improve hospital safety. abstract_id: PUBMED:23335058 Can incident reporting improve safety? Healthcare practitioners' views of the effectiveness of incident reporting. Objective: Recent critiques of incident reporting suggest that its role in managing safety has been overemphasized. The objective of this study was to examine the perceived effectiveness of incident reporting in improving safety in mental health and acute hospital settings by asking staff about their perceptions and experiences. Design: Qualitative research design using documentary analysis and semi-structured interviews. Setting: Two large teaching hospitals in London; one providing acute and the other mental healthcare. Participants: Sixty-two healthcare practitioners with experience of reporting and analysing incidents. Results: Incident reporting was perceived as having a positive effect on safety, not only by leading to changes in care processes but also by changing staff attitudes and knowledge.
Staff discussed examples of both instrumental and conceptual uses of the knowledge generated by incident reports. There are difficulties in using incident reports to improve safety in healthcare at all stages of the incident reporting process. Differences in the risks encountered and the organizational systems developed in the two hospitals to review reported incidents could be linked to the differences we found in attitudes to incident reporting between the two hospitals. Conclusion: Incident reporting can be a powerful tool for developing and maintaining an awareness of risks in healthcare practice. Using incident reports to improve care is challenging and the study highlighted the complexities involved and the difficulties faced by staff in learning from incident data. abstract_id: PUBMED:36793096 Ten years of online incident reporting and learning using CPiRLS: implications for improved patient safety. Background: Safety incident (SI) reporting and learning via incident reporting systems (IRSs) is used to identify areas for patient safety improvement. The chiropractic patient incident reporting and learning system (CPiRLS) is an online IRS that was launched in the UK in 2009 and, from time to time, has been licensed for use by the national members of the European Chiropractors' Union (ECU), members of Chiropractic Australia and a Canada-based research group. The primary aim of this project was to analyse the SIs submitted to CPiRLS over a 10-year period to identify key areas for patient safety improvement. Method: All SIs reported to CPiRLS between April 2009 and March 2019 were extracted and analysed. Descriptive statistics were used to describe: (1) the frequency of SI reporting and learning by the chiropractic profession, and (2) the character of reported SIs. Key areas for patient safety improvement were developed following a mixed methods approach. 
Results: A total of 268 SIs were recorded on the database over the 10-year period, 85% of which originated from the UK. Evidence of learning was documented in 143 (53.4%) SIs. The largest subcategory of SIs related to post-treatment distress or pain (n = 71, 26.5%). Seven key areas for patient safety improvement were developed, including: (1) patient trip/fall, (2) post-treatment distress/pain, (3) negative effects during treatment, (4) significant post-treatment effects, (5) syncope, (6) failure to recognize serious pathology, and (7) continuity of care. Conclusion: The low number of SIs reported over a 10-year period suggests significant under-reporting; however, an upward trend was identified over the 10-year period. Several key areas for patient safety improvement have been identified for dissemination to the chiropractic profession. Improved reporting practice needs to be facilitated to improve the value and validity of reporting data. CPiRLS is important in identifying key areas for patient safety improvement.
All of the hospitals had incident-reporting items that described 'incident type' and 'incident characteristics'; however, only 7 hospitals (17.5%) had reporting items on incident 'detection', and 18 hospitals (45.0%) collected information on the 'organizational outcomes'. Of the 1145 unique reporting items, 297 (25.9%) were completely mapped onto ICPS elements at different levels of granularity, and 12.7% (n = 145) were mapped onto ICPS elements that had more granular subcategories. Conclusions: The ICPS framework is a useful reference model for the classification of incident-reporting items. However, further refinements to both the ICPS framework and incident-reporting items are needed in order to better represent data on patient safety. Furthermore, the use of a common reporting form at the national level is recommended for reducing variations in reporting items and facilitating the efficient collection and analysis of patient safety data. abstract_id: PUBMED:31022895 The Role of Governments in the Implementation of Patient Safety and Patient Safety Incident Reporting in Indonesia: A Qualitative Study. (1) Background: A patient safety incident reporting system was introduced in Indonesian hospitals in 2006; however, under-reporting of patient safety incidents is evident. The government plays a vital role in the implementation of a national system. Therefore, this study focuses on how the Indonesian government has been undertaking its role in patient safety at provincial and city/district levels, including incident reporting according to the National Guideline for Hospital Patient Safety. (2) Methods: This study employed a qualitative approach with interviews of 16 participants from seven organizations. The data were managed using NVivo and thematically analyzed. (3) Results: The findings revealed several problems at the macro-, meso-, and micro-level as the government was weak in monitoring and evaluation. 
The District Health Office (DHO) and Provincial Health Office (PHO) were not involved in incident reporting, and there was a lack of government support for the hospitals. (4) Conclusions: The DHO and PHO have not carried out their roles related to patient safety as mentioned in the national guidelines. Lack of commitment to and priority of patient safety, the complexity of the bureaucratic structure, and a lack of systematic partnership and collaboration are problems that need to be addressed by systematic improvement. To ensure effective and efficient national outcomes, the three levels of government need to work more closely. abstract_id: PUBMED:29284714 Development of a theoretical framework of factors affecting patient safety incident reporting: a theoretical review of the literature. Objectives: The development and implementation of incident reporting systems within healthcare continues to be a fundamental strategy to reduce preventable patient harm and improve the quality and safety of healthcare. We sought to identify factors contributing to patient safety incident reporting. Design: To facilitate improvements in incident reporting, a theoretical framework, encompassing factors that act as barriers and enablers of reporting, was developed. Embase, Ovid MEDLINE(R) and PsycINFO were searched to identify relevant articles published between January 1980 and May 2014. A comprehensive search strategy including MeSH terms and keywords was developed to identify relevant articles. Data were extracted by three independent researchers; to ensure the accuracy of data extraction, all studies eligible for inclusion were rescreened by two reviewers. Results: The literature search identified 3049 potentially eligible articles; of these, 110 articles, including >29,726 participants, met the inclusion criteria. In total, 748 barriers were identified (frequency count) across the 110 articles.
In comparison, 372 facilitators to incident reporting and 118 negative cases were identified. The top two barriers cited were fear of adverse consequences (161, representing 21.52% of barriers) and process and systems of reporting (110, representing 14.71% of barriers). In comparison, the top two facilitators were organisational (97, representing 26.08% of facilitators) and process and systems of reporting (75, representing 20.16% of facilitators). Conclusion: A wide range of factors contributing to engagement in incident reporting exist. Efforts that address the current tendency to under-report must consider the full range of factors in order to develop interventions as well as a strategic policy approach for improvement. abstract_id: PUBMED:26483127 Rates of safety incident reporting in MRI in a large academic medical center. Purpose: To describe our multiyear experience in incident reporting related to magnetic resonance imaging (MRI) in a large academic medical center. Materials And Methods: This was an Institutional Review Board (IRB)-approved, Health Insurance Portability and Accountability Act (HIPAA)-compliant study. Incident report data were collected during the study period from April 2006 to September 2012. The incident reports filed during the study period were searched for all reports related to MRI. Incident reports were classified with regard to the patient type (inpatient vs. outpatient), primary reason for the incident report, and the severity of patient harm resulting from the incident. Results: A total of 362,090 MRI exams were performed during the study period, resulting in 1290 MRI-related incident reports. The rate of incident reporting was 0.35% (1290/362,090). MRI-related incident reporting was significantly higher in inpatients compared to outpatients (0.74% [369/49,801] vs. 0.29% [921/312,288], P < 0.001).
The most common reason for incident reporting was diagnostic test orders (31.5%, 406/1290), followed by adverse drug reactions (19.1%, 247/1290) and medication/IV safety (14.3%, 185/1290). Approximately 39.6% (509/1290) of reports were associated with no patient harm and did not affect the patient, followed by no patient harm but did affect the patient (35.8%, 460/1290), temporary or minor patient harm (23.9%, 307/1290), permanent or major patient harm (0.6%, 8/1290) and patient death (0.2%, 2/1290). Conclusion: MRI-related incident reports are relatively infrequent, occur at significantly higher rates in inpatients, and usually do not result in patient harm. Diagnostic test orders, adverse drug reactions, and medication/IV safety were the most frequent safety incidents. Answer: Yes, incident reporting can improve safety. Healthcare practitioners view incident reporting as having a positive effect on safety, not only by leading to changes in care processes but also by changing staff attitudes and knowledge (PUBMED:23335058). The process of reporting incidents can be instrumental in developing and maintaining an awareness of risks in healthcare practice, although using incident reports to improve care is challenging due to the complexities involved and the difficulties faced by staff in learning from incident data (PUBMED:23335058). Moreover, video interventions targeting non-technical barriers to incident reporting have been shown to be effective in improving perceived knowledge and attitude about incident reporting among medical trainees (PUBMED:31673644). This suggests that educational and supportive interventions can enhance the willingness of healthcare professionals to report incidents, thereby potentially improving patient safety. 
However, it is important to note that the effectiveness of incident reporting systems depends on several factors, including the presence of a non-punitive system, confidentiality, expert analysis, timeliness of reporting, system orientation, and responsiveness (PUBMED:31849549). The World Health Organization's characteristics for successful reporting provide a framework for assessing and improving incident reporting systems (PUBMED:31849549). Additionally, incident reporting rates and the types of incidents reported can vary across different healthcare settings and modalities, such as emergency radiology (PUBMED:26246282) and MRI (PUBMED:26483127). Understanding these variations can help identify specific areas for improvement in patient safety. Despite the potential benefits, under-reporting of patient safety incidents remains a significant issue (PUBMED:34754717; PUBMED:36793096). Factors affecting patient safety incident reporting have been identified, including fear of adverse consequences and issues with the process and systems of reporting (PUBMED:29284714). Addressing these barriers is crucial for the successful implementation of incident reporting systems and the improvement of patient safety. In summary, incident reporting can improve safety by identifying risks and informing changes in healthcare practices. However, the effectiveness of incident reporting systems is contingent upon addressing barriers to reporting and ensuring that the systems align with best practices for successful reporting.
Instruction: Do required preclinical courses with family physicians encourage interest in family medicine? Abstracts: abstract_id: PUBMED:12947521 Do required preclinical courses with family physicians encourage interest in family medicine? Background: Many medical schools, including the University of California, San Francisco (UCSF), added required preclinical course work with family physicians in the 1990s. We examined whether current UCSF students interested in family medicine noted more contact with family physicians and more faculty support of their interest than current Stanford students and 1993 UCSF students, neither of whom had required preclinical course work with family physicians. Methods: A questionnaire was administered to students interested in family medicine at UCSF and Stanford in February 2001, with response rates of 84% and 90%, respectively. Previously published 1993 data from UCSF were also used for comparison. Data were analyzed using chi-square and t statistics as appropriate. Results: UCSF students in 2001, despite exposure to required preclinical course work with family physicians, did not perceive greater contact with family physicians than Stanford students. Stanford students perceived greater encouragement from their family medicine faculty but less from faculty overall, compared with 2001 UCSF students. UCSF students in 2001 perceived no more overall faculty encouragement than did UCSF students in 1993. Conclusions: Required preclinical course work with family physicians was not consistently associated with greater student perception of faculty support for students' interest in family medicine, nor was it demonstrated to increase the amount or quality of interested students' interaction with family medicine faculty. abstract_id: PUBMED:30828609 What influence do courses at medical school and personal experience have on interest in practicing family medicine? - Results of a student survey in Hessia. 
Aim: Against the background of an impending shortage of family practitioners, it is important to investigate the factors influencing the choice to become one. The aim of this study was to identify factors that encourage medical students to choose to practice family medicine. Method: Using a questionnaire, students in the fourth and fifth years of their studies in the Federal State of Hesse were asked about the factors that had influenced their choice of medical specialty and their experience of courses in family medicine. Predictors of an interest in practicing family medicine were calculated using multiple logistic regression. Results: 361 questionnaires were returned, representing a response rate of 70.9%. Confirmation of personal strengths, an interest in the field, and practical experience of the subject generally turned out to be important factors influencing the choice of medical specialty. 49.3% of students expressed an interest in practicing family medicine. A link existed between an interest in working as a family doctor and the opportunity to take over an existing practice, experience of medicine in rural areas, and an appreciation of the conditions of work. With regard to education at medical school, positive experiences during a clinical traineeship in family medicine and positive role models among teachers of general practice were identified as predictors. Conclusion: Almost half the medical students were open to the idea of practicing family medicine. Experience of medicine in rural areas and positive experiences of courses in general practice were linked to an increased interest in working as a family doctor. To promote this interest, it may be a promising approach to increase opportunities to collect experience of medicine in rural areas, and to encourage highly motivated teaching practices. abstract_id: PUBMED:33054269 Current status of family medicine in Kenya; family physicians' perception of their role. 
Background: Family medicine (FM) was introduced in Kenya in 2005. To date (August 2019), 42 family physicians have graduated from Kenyan universities. Aim: The aim of the study was to establish the current state of FM in Kenya and identify areas for more research and actions to support and improve FM in Kenya. Setting: Interviews were conducted at the different work sites of the participants, four of them in hospitals, one at a university and one in a hotel where an FM conference was held. Methods: An online questionnaire (response rate = 56%) and six semi-structured interviews were conducted amongst family physicians who completed their studies in Kenya. However, the focus was on the interviews. Results: Family physicians have different ideas of what FM should ideally look like, but all agree that family physicians should be team leaders of a primary healthcare team, taking care of a defined population. Lack of policies, low numbers of family physicians and the misunderstanding of FM by all stakeholders are the major challenges. Sixty-four percent of the participants work in rural areas, and 77% perceive their current work as FM. Conclusion: Family medicine must be defined and properly promoted. Various areas have been identified that require further research: assessing the required number of family physicians, the reasons for and solutions to the low number of family physicians, funding possibilities, and the most suitable definition of a Kenyan family physician.
Here we combine our usual editors' notes with thoughts about what the next research studies could, and hopefully will, be. abstract_id: PUBMED:10489641 Spiritual perspectives and practices of family physicians with an expressed interest in spirituality. Background And Objectives: Among a growing number of articles about spirituality and medicine, there are no open-ended empirical inquiries about family physicians' understanding of spirituality and what it might mean to incorporate spirituality into family practice. We used a qualitative methodology to investigate family physicians' perceptions of spirituality in clinical care, the roles of their own personal spirituality, and implications for medical education. Methods: We used qualitative content analysis on transcripts of semi-structured interviews that had been conducted with 12 family physicians, in three regions of the country, with an expressed interest in spirituality. Results: This group of physicians reported 1) taking a vital clinical role as encouragers of patients' spiritual resources, 2) a vital role of their personal spirituality as an underpinning of the vocation and practice of family medicine, and 3) the key roles of respectful dialogue and mentoring in medical education about spirituality. Conclusions: Results affirm the significance of spirituality in clinical family practice for the subjects interviewed and support a tripartite model that embraces clinical approaches to the spirituality of patients and families, the spirituality of caregivers, and the qualities of spirituality in health care organizations. abstract_id: PUBMED:16127950 Osteopathic physicians and the family medicine workforce. Historically, osteopathic physicians have made an important contribution to the primary care workforce. More than one half of osteopathic physicians are primary care physicians, and most of these are family physicians. 
However, the proportion of osteopathic students choosing family medicine, like that of their allopathic peers, is declining, and currently is only one in five. abstract_id: PUBMED:36483907 Implementation of the Principles of Family Medicine in Modern Family Medicine Education Needing System-Specific Approaches. Family medicine is a key specialty in primary healthcare worldwide. Medical students and residents learn family medicine in medical schools and hospitals. Modern family medicine education curricula have changed because of the increase in the number of organ specialists. Family physicians have more comprehensive training in patients, medicine, and social issues than in the past. Family physicians need to adapt to changing circumstances, but he needs to practice comprehensive and holistic medicine, as most of the time, family physicians are the first point of contact both in urban and rural setup. In 1979, McWhinney proposed nine principles of family medicine to which family physicians should adhere when providing care to patients in their communities. A review of and reflection on these principles has clarified that the principles should be more emphasized and practiced. Besides, respectful approaches by different family physicians are essential. To adapt to changing healthcare conditions, family medicine education can focus more on person-centered care and healthcare systems as system-specific specialists, respect the differences in healthcare provision between urban and rural areas, and motivate medical students and residents to become family physicians. abstract_id: PUBMED:14872357 Psychosocial support services for family medicine resident physicians. Background And Objectives: The stress of residency is well documented. Some residency programs recognize the importance of addressing resident stress and provide psychosocial support services. 
This study assesses the current state of support services offered to family medicine residents and documents historical trends of support. Methods: All US family medicine residency programs were surveyed about program characteristics and the presence or absence of 21 psychosocial support services. The prevalence of current services was compared to that of 10 and 20 years ago. Results: The percentage of family medicine programs offering 17 of 19 support services increased over the previous decades. However, percentages of some key services, especially those that address family life, are still quite low. Discussion: Increases in services may be due to programs' desire to offer more positive and supportive educational experiences. Offering supportive and reflective opportunities may lessen stress, increase flexibility and balance, create enthusiasm for learning, encourage compassion for patients, and promote future well-being. In times of decreasing interest in family medicine, the presence of effective psychosocial support services may be important for attracting and training the best possible family physicians. abstract_id: PUBMED:18327349 Family medicine education in Singapore: a long-standing collaboration between specialists and family physicians. In many countries, family medicine (FM) training has been conducted mainly by senior family physicians alone. However, FM training in Singapore in the last 30 years has involved specialists working in close collaboration with family physicians. The areas in which specialists are currently involved include the training of FM trainees in tertiary hospitals, the Master of Medicine in Family Medicine [MMed (FM)] and Graduate Diploma in Family Medicine (GDFM) programmes. This close relationship has been crucial in the continuing vocational and professional development of family physicians and in fostering closer collaboration between family physicians and specialists, thus ultimately benefiting patient care. 
abstract_id: PUBMED:37321654 "They Go Hand in Hand": Perspectives on the Relationship Between the Core Values of Family Medicine and Abortion Provision Among Family Physicians Who Do Not Oppose Abortion. Introduction: Most family physicians do not provide abortion care, despite an apparent alignment between the defined values of family medicine and provision of abortion in primary care. This study seeks to understand how family physicians themselves perceive the relationship between their specialty's values and abortion provision. Methods: We conducted in-depth interviews in 2019 with 56 family physicians who do not oppose abortion in the United States. We employed a deductive-inductive content analysis approach with memos to identify key themes. This analysis focuses on participants' beliefs in the core values of family medicine and how those values relate to abortion in family medicine. Results: Participants identified and described six values of the specialty they prioritized, which included relationships, care across the lifespan, whole-person care, nonjudgmental care, meeting community needs, and social justice. Family physicians in the study overwhelmingly believed that abortion aligned with family medicine values, regardless of whether they themselves provided abortion care. Conclusions: Providing abortion care in primary care settings gives family physicians an opportunity to provide comprehensive care while improving access to meet community needs. As abortion care becomes increasingly restricted in the United States, family physicians can manifest the values of family medicine through integrating abortion care into their practices in states where abortion remains legal. Answer: The evidence from the provided abstracts suggests that required preclinical courses with family physicians do not consistently encourage interest in family medicine. 
A study at the University of California, San Francisco (UCSF) examined whether students exposed to required preclinical coursework with family physicians perceived more contact with family physicians and more faculty support of their interest in family medicine than Stanford students and 1993 UCSF students, neither of whom had such coursework. It found that UCSF students in 2001 did not perceive greater contact with family physicians than Stanford students. Furthermore, UCSF students in 2001 perceived no more overall faculty encouragement than did UCSF students in 1993 (PUBMED:12947521). This suggests that required preclinical coursework with family physicians was not associated with a greater perception of faculty support or increased interaction with family medicine faculty. However, another study, from Hesse, identified positive experiences during a clinical traineeship in family medicine and positive role models among teachers of general practice as predictors of an interest in practicing family medicine (PUBMED:30828609). This implies that while required preclinical courses alone may not be sufficient, the quality of the experience and the presence of positive role models during these courses can influence students' interest in family medicine. In conclusion, the impact of required preclinical courses with family physicians on encouraging interest in family medicine is not clear-cut and may depend on various factors, including the quality of the experience and the presence of supportive and inspiring faculty members.
Instruction: Does vaccine dose predict response to the monovalent pandemic H1N1 influenza A vaccine in children with acute lymphoblastic leukemia? Abstracts: abstract_id: PUBMED:23682005 Does vaccine dose predict response to the monovalent pandemic H1N1 influenza A vaccine in children with acute lymphoblastic leukemia? A single-centre study. Background: Vaccination against influenza is an important strategy in preventing severe infection among children with acute lymphoblastic leukemia (ALL). Successful vaccination depends on both vaccine and host-related factors. We conducted a study on factors predicting the immunogenicity of the monovalent pandemic H1N1 (pH1N1) influenza A vaccine in children with ALL. Methods: Children with ALL in our hospital were recruited and received two doses of the inactivated split-virion AS03-adjuvanted vaccine. The serological response was measured before each vaccine dose (Day 0 and 28) and 3 months after the second dose. Antibody titres were measured using a hemagglutination-inhibition assay. Seroconversion was defined as a ≥fourfold increase in antibody titre and a post-vaccination titre ≥1:40. Results: Pre- and post-vaccination titres were available from 45 children with ALL after one dose of the vaccine and 39 children after two doses. The seroconversion rate was 11.1% after one dose and 25.6% after the second dose. Univariate analysis demonstrated a significantly higher (P = 0.01) seroconversion rate among children who received the adult dose (0.5 ml) of the vaccine and a trend towards increased seroconversion (P = 0.07) by multivariate analysis. Factors including age, gender, lymphocyte count, treatment phase and regimen did not significantly affect the seroconversion rate. Children who received the adult dose demonstrated a significantly greater magnitude of serological response after both one dose (P = 0.04) and two doses (P = 0.001).
Conclusions: These data suggest that the immunogenicity of the pH1N1 vaccine among children with ALL is improved by repeated and adult doses of the vaccine. abstract_id: PUBMED:21067352 Immunogenicity of pandemic (H1N1) 2009 vaccine in children with cancer in the United Kingdom. Background: Children with cancer have an increased susceptibility to influenza infection. The objective of this study was to assess the immunogenicity of pandemic (H1N1) 2009 vaccine in children with cancer. Methods: Children were recruited from the Royal Marsden Hospital, England, during November 2009. The vaccination schedule consisted of 2 doses of an AS03(B)-adjuvanted vaccine given at days 0 and 21. Serological analysis was performed on blood samples obtained at day 0 and day 42. The primary immunological end point was the seroconversion rate, which was defined as the proportion of subjects with an individual 4-fold increase in hemagglutination inhibition titer and a postvaccination hemagglutination inhibition titer ≥1:32. Results: Fifty-four children with a median age of 6.3 years (range, 1.4-16.6 years) were vaccinated and had samples taken for serological analysis. Twenty-four (44.4%) of 54 children demonstrated seroconversion. Seroconversion rates were 33.3% (9 of 27) among children with acute lymphoblastic leukemia, 36.4% (4 of 11) among those with lymphoma or other leukemias, 66.7% (6 of 9) among those with brain tumors, and 71.4% (5 of 7) among those with other solid tumors. Seroconversion occurred in 4 (28.6%) of 14 children receiving acute lymphoblastic leukemia maintenance therapy. Univariate analysis showed significantly higher responses among children with solid tumors, compared with those with hematological malignancies (11 [68.8%] of 16 vs 13 [34.2%] of 38; P = .03), and among those not receiving treatment, compared with those receiving treatment (7 [87.5%] of 8 vs 17 [37.0%] of 46; P = .02). 
Multivariable analysis showed that age, cancer type, and lymphopenia did not influence seroconversion rates. Conclusion: These data suggest that this AS03(B)-adjuvanted pandemic (H1N1) 2009 vaccine can induce limited but useful protective immune responses in children with cancer. abstract_id: PUBMED:12517044 Response to influenza vaccine in children with leukemia undergoing chemotherapy. Background And Purpose: To assess the ability of children with acute lymphoblastic leukemia (ALL) to develop an antibody response after influenza vaccination. Methods: A total of 65 children under 15 years old were studied, including 25 children with ALL undergoing chemotherapy, 30 with asthma in remission who were regularly followed at clinics, and 10 healthy children. The influenza vaccine contained antigens B/Yamanashi/166/98, A/New Caledonia/20/99 (H1N1), and A/Panama/2007/99 (H3N2). Results: Children with ALL developed significant antibody titers to A/Panama /2007/ 99 antigen 4 weeks after the second immunization. Seroconversion rates after two doses of vaccine were 57.1 to 84.6% and seroresponse rates were between 24 and 60% in children with ALL. Compared to children with asthma in remission, who were regarded as immunocompetent individuals, the ALL children had less seroconversion and lower seroresponse rates to A/New Caledonia/20/99 (H1N1). The seroconversion and seroresponse rates to B/Yamanashi/166/98 and A/ Panama/2007/99(H3N2) antigens were comparable in asthmatic and leukemic children. On the other hand, the antibody response in children with ALL who received reinduction chemotherapy suggests that the therapy did not impair seroresponse rates. Conclusion: Our data suggest that the influenza vaccine is safe and effective in children with either ALL or asthma in Taiwan. abstract_id: PUBMED:27718310 Protection against vaccine preventable diseases in children treated for acute lymphoblastic leukemia. 
Background: The objective of this retrospective study was to assess protection against vaccine preventable diseases (VPDs) in children treated for acute lymphoblastic leukemia (ALL). Procedure: Clinical characteristics and vaccination records were collected. Antibodies against VPDs were measured after completion of chemotherapy and after a booster dose of vaccine. Immunization status of household members was evaluated. Results: Sixty children were included. Median interval between the end of chemotherapy and enrolment in the study was 13 months (range 1-145). At ALL diagnosis, 81.3% of the children were up to date with their vaccination schedule. This proportion decreased to 52.9% at enrolment. Among the parents, 21% were up to date with their immunization schedule and 42% had received seasonal influenza vaccination. After chemotherapy, less than 50% of the patients were seroprotected against tetanus, diphtheria, polio 3, Haemophilus influenzae type b (Hib), and mumps and no more than 80% were seroprotected against polio 1 and 2, measles, rubella, and varicella. After a booster dose of vaccine, the rate of protection increased to over 90% for each of the following antigens: TT, DT, polio 1, Hib, measles, and rubella. Nevertheless, polio 3, mumps, and varicella-zoster virus antibody titers/concentrations remained below seroprotective thresholds in over 20% of the patients. Conclusions: After chemotherapy for ALL, most of the children were not protected against VPDs. As the majority mounted a robust response to booster vaccines, efforts need to be made to improve protection against VPDs by implementing a systematic vaccine booster schedule. This could also be helped by reinforcing household members' immunization. abstract_id: PUBMED:24249544 Safety of high dose trivalent inactivated influenza vaccine in pediatric patients with acute lymphoblastic leukemia.
Background: Although children with acute lymphoblastic leukemia (ALL) mount immune responses after vaccination with the trivalent influenza vaccine (TIV), these responses are lower compared to controls. Recently, a high dose (HD) TIV was found to increase the level of antibody response in elderly patients compared to the standard dose (SD) TIV. We hypothesized that the HD TIV would be well-tolerated and more immunogenic compared to the SD TIV in pediatric subjects with ALL. Procedure: This was a randomized, double-blind, phase I safety trial comparing the HD to the SD TIV in children with ALL. Our secondary objective was immunogenicity. Subjects were randomized 2:1 to receive either the HD (60 µg) or the SD (15 µg) TIV. Local and systemic reactions were solicited, hemagglutinin inhibition titers to influenza virus antigens were measured, and monitoring labs were collected prior to and/or after each vaccination. Results: Fifty subjects were enrolled (34 HD, 16 SD). Mean age was 8.5 years; 63% were male, and 80% were in maintenance therapy. There were no significant differences reported in local or systemic symptoms. No severe adverse events were attributed to vaccination. No significant differences between the HD and SD TIV groups were noted for immune responses. Conclusions: No differences were noted between the HD and SD TIV groups for solicited systemic and local reactions. Since this study was not powered for immunogenicity, a phase II trial is needed to determine the immunogenicity of HD versus SD TIV in the pediatric ALL population. abstract_id: PUBMED:12189546 Haemophilus influenzae type b vaccine formulation and risk of childhood leukaemia. Incidence of childhood leukaemia was studied among subjects of a vaccine trial in Finland comparing the polysaccharide-diphtheria toxoid conjugate and oligosaccharide-CRM197 conjugate Haemophilus influenzae type b conjugate vaccine formulations.
Eighty cases of childhood leukaemia were detected: 35 among children on the polysaccharide-diphtheria toxoid conjugate arm, and 45 among children on the oligosaccharide-CRM197 conjugate arm, which was not statistically significant. abstract_id: PUBMED:29173310 The Effectiveness of Trivalent Inactivated Influenza Vaccine in Children with Acute Leukemia. Objective: The objective of this study was to determine the effectiveness of trivalent inactivated influenza vaccine (TIV) for the prevention of laboratory-confirmed influenza and influenza-like illnesses (ILI) among children and adolescents receiving therapy for acute leukemia. Study Design: A retrospective review of the demographic and clinical characteristics of 498 patients at a pediatric cancer center who received therapy for acute leukemia during 3 successive influenza seasons (2010-2011 through 2012-2013). Results: In 498 patient seasons with a known immunization history (median age, 6 years; range, 1-21), 354 patients (71.1%) were immunized with TIV and 98 (19.7%) received a booster dose of vaccine. Vaccinated and unvaccinated patients had generally similar demographic characteristics. There were no differences in the overall rates of influenza or ILI between vaccinated and unvaccinated patients overall, or in any individual season. There was no difference in the rates of influenza or ILI between patients who received 1 dose of vaccine and those who received 2 doses. Time to first influenza infection and time to first ILI in vaccinated and unvaccinated patients were not different. Conclusion: TIV did not protect children and adolescents with acute leukemia against laboratory-confirmed influenza or ILI. Future prospective studies should assess TIV effectiveness in high-risk subpopulations and alternative strategies to prevent influenza should be considered in this population.
abstract_id: PUBMED:32730479 Humoral immune response induced by influenza vaccine in children with acute lymphoblastic leukemia. Background: Patients with acute lymphoblastic leukemia (ALL) have a high risk of severe influenza infection and vaccination is highly recommended. The immunogenicity and effectiveness of vaccination are lower than in healthy people. Aim: To evaluate the immune response induced by influenza vaccine in children with ALL and observe effectiveness. Method: Children with ALL in maintenance phase and healthy children were recruited. Blood samples were taken at vaccination day (D0) and at day 28 (D28). Humoral response was evaluated by the hemagglutination inhibition test (HAI) against H1N1. Patients were followed up for one year; clinical data and influenza episodes were recorded. Results: 34 children with ALL and 9 healthy children were included. Concerning HAI on D28, 12/34 patients and 5/8 healthy children had titers ≥ 1/40, with seroprotection rates of 35 and 63% respectively. Seroprotected children were older than non-seroprotected ones. During follow-up, only 3 non-seroprotected patients presented influenza infection, none requiring oxygen supplementation or critical care support. Discussion: Children with ALL had a lower seroprotection rate than healthy children. Nevertheless, none of the seroprotected children presented influenza infection, reinforcing the annual vaccination recommendation. abstract_id: PUBMED:20205258 Immunogenicity of trivalent influenza vaccine in children with acute lymphoblastic leukemia during maintenance therapy. Purpose: The aim of this study was to assess the immune response of children with acute lymphoblastic leukemia (ALL) to influenza vaccine and to compare it with healthy controls. Procedure: Thirty-two children aged 1-18 years with ALL on maintenance therapy and 30 healthy sibling controls were enrolled in the study. All children were vaccinated with trivalent inactivated influenza vaccine.
Hemagglutinin-inhibition (HI) antibody titers were determined in sera of both patient and control groups just before and 4 weeks after vaccination. The ability of each group to mount a protective (≥ 40) and/or fourfold titer was measured. Results: The protective responses for virus subunits among patients and healthy controls were 43.4% versus 88% for H1N1 (P = 0.04), 63.3% versus 80% for H3N2 antigens (P = 0.06), and 26% versus 73% for B antigen (P = 0.001). Responses for H1N1 and B subunits were significantly lower in patients than controls. In the patient group, a significant response to each virus was demonstrated in the analysis of pre- and post-vaccination geometric mean titer (GMT) (P = 0.001). The percentages of patients and controls with a fourfold increase in HI titers were 56.2% versus 80% for H1N1 (P = 0.04), 40.6% versus 53.3% for H3N2 (P = 0.31), and 59.4% versus 83.3% for B (P = 0.038). Immune responses for H1N1 and B subunits were significantly lower in patients than controls. Conclusions: Influenza vaccine is tolerated well in ALL patients with acceptable but limited immune response compared to healthy controls. These findings support the recommendation for annual influenza vaccination in children with ALL. abstract_id: PUBMED:9493808 Immunogenicity of subunit trivalent influenza vaccine in children with acute lymphoblastic leukemia. Background: The aim of this study was to assess humoral response to influenza vaccine in children with acute lymphoblastic leukemia. Methods: Studies were performed in 25 patients previously vaccinated against influenza (Group A) and in 20 children who had never been immunized before (Group B). In Autumn, 1996, they were vaccinated with subunit trivalent influenza vaccine containing 15 microg of hemagglutinin of A/Singapore/6/86, A/Wuhan/359/95 and B/Beijing/184/93.
Antihemagglutinin (HI) and antineuraminidase antibody titers were determined before immunization and 3 weeks and 6 months after vaccination by the hemagglutinin inhibition test and the neuraminidase inhibition test. All results were presented as the geometric mean titer of antibodies, mean fold increase of antibody titer, protection rate and response rate. Results: In Group A mean fold increase of HI antibodies ranged from 17.2 to 26.7 three weeks after vaccination and from 22.1 to 38.2 six months after vaccination, while in Group B it ranged from 15.7 to 22.6 and from 30.3 to 39.3, respectively. In the case of neuraminidase, mean fold increases for Group A varied from 9.2 to 13.2 three weeks after immunization and from 15.6 to 21.1 six months after vaccination, whereas for Group B they varied from 5.5 to 8.3 and from 14.4 to 23.4, respectively. Six months after vaccination the proportion of subjects with HI antibodies ≥ 1:40, as well as those with at least 4-fold increase of HI antibody titers, ranged from 68 to 100% in Group A and from 90 to 100% in Group B. No vaccinated child was infected with the influenza virus; the vaccine was well-tolerated and did not cause any adverse reactions. Conclusions: The results obtained in this study indicate that influenza vaccine is immunogenic in patients with acute lymphoblastic leukemia, despite their serious disease. Answer: Yes, vaccine dose does predict the response to the monovalent pandemic H1N1 influenza A vaccine in children with acute lymphoblastic leukemia (ALL). A study found that children with ALL who received the adult dose of the vaccine demonstrated a significantly higher seroconversion rate and a greater magnitude of serological response after both one dose and two doses of the vaccine, compared to those who received a smaller dose (PUBMED:23682005). This suggests that the immunogenicity of the pH1N1 vaccine among children with ALL is improved by administering repeated and adult doses of the vaccine.
Instruction: Arterial hypertension and glycemia in non-diabetic subjects: is there an association independent of obesity? Abstracts: abstract_id: PUBMED:10362457 Arterial hypertension and glycemia in non-diabetic subjects: is there an association independent of obesity? Background: A possible association of glycemia with arterial hypertension has been suggested by the frequent co-occurrence of impaired glucose tolerance or Type 2 diabetes mellitus with arterial hypertension. The objective was to examine the relationship of glycated hemoglobin (HbA1c) concentration with arterial hypertension status in non-diabetic subjects. Methods: A cross-sectional analysis of baseline data from the EPIC-Potsdam Cohort Study, Germany, was performed. The study population comprised 1846 non-diabetic subjects, 772 men and 1074 women, age 35-65. Blood pressure was measured three times consecutively. Level of HbA1c was determined by an assay based on monoclonal antibodies. Body height, weight and circumferences were obtained. Arterial hypertension status was either determined through blood pressure measurement (blood pressure ≥ 160/95 mmHg) or based on antihypertensive drug use. HbA1c was divided into sex-specific quintiles and logistic regression was used to estimate the odds of being hypertensive and the corresponding confidence intervals. Results: The highest compared to the lowest quintiles of HbA1c were in univariate analysis associated with being hypertensive. Adjustment for age and body mass index completely removed any significant association with arterial hypertension status. The odds ratio in men was 1.1 (95% CI 0.7-1.8), and in women it was 0.9 (95% CI 0.5-1.4). Repeating the analysis with systolic and diastolic blood pressure among untreated hypertensives yielded similar results. Conclusion: Unlike previous studies, our data do not support an association of HbA1c with arterial hypertension that is statistically independent of age and body mass index.
Whether these established arterial hypertension risk factors are truly confounders of the HbA1c-arterial hypertension association or rather potentially antecedent factors requires further study. abstract_id: PUBMED:10101928 Some parameters of hemostasis and fibrinolysis in diabetic patients. The aim of our study was to estimate selected parameters of hemostasis and fibrinolysis in diabetic patients with vascular complications and obesity. The investigation was carried out in 23 type 1 diabetic subjects aged 17-56 ys, in 25 type 2 diabetic patients aged 41-69 ys and in 38 healthy persons: 16 "young" (aged 32.5 +/- 13.2 ys) and 22 "old" (aged 56.2 +/- 9.4 ys). The following parameters were determined: glycaemia, HbA1c, blood fibrinogen level, euglobulin clot lysis time, plasminogen activator inhibitor (PAI-1) activity, microalbuminuria, triglyceride, total, HDL- and LDL-cholesterol concentrations. Plasma fibrinogen level was elevated in type 2 diabetic subjects, and the highest concentrations were noted in patients with retinopathy or arterial hypertension, in overweight persons and, surprisingly, in type 1 diabetic subjects with nephropathy and coronary vascular disease (CVD). There were also positive correlations between fibrinogen level and systolic blood pressure (r = 0.3413, p < 0.02), diastolic blood pressure (r = 0.3809, p < 0.002) and microalbuminuria (r = 0.3552, p < 0.05). The mean euglobulin clot lysis time was prolonged in type II diabetics in comparison to the control group, especially in obese subjects. The highest activity of PAI-1 was found in overweight controls (28.87 +/- 6.24 Au/ml, p < 0.002). PAI-1 activity was also slightly increased in type 1 diabetic patients, especially with the symptoms of diabetic neuropathy, nephropathy or CHD, in comparison to the other groups.
Our results seem to confirm a disturbed balance between coagulation and fibrinolysis, towards an increased risk of a prothrombotic state, in both obese and diabetic patients, especially those with advanced vascular complications. abstract_id: PUBMED:27314413 Prevalence of diabetic retinopathy in screening-detected diabetes mellitus: results from the Gutenberg Health Study (GHS). Aims/hypothesis: Individuals with type 2 diabetes mellitus may experience an asymptomatic period of hyperglycaemia, and complications may already be present at the time of diagnosis. We aimed to determine the prevalence of diabetic retinopathy in patients with newly diagnosed (screening-detected) type 2 diabetes. Methods: The Gutenberg Health Study is a population-based study with 15,010 participants aged between 35 and 74 years. We determined the weighted prevalence of diabetic retinopathy by assessing fundus photographs. Screening-detected type 2 diabetes was defined as an HbA1c concentration of 6.5% (47.5 mmol/mol) or more, no medical diagnosis of diabetes and no intake of insulin or oral glucose-lowering agents. Results: Of 14,948 participants, 1377 (9.2%) had diabetes mellitus. Of these, 347 (25.2%) had newly diagnosed type 2 diabetes detected by the screening. Overall, the weighted prevalence of screening-detected type 2 diabetes was 2.1%. Fundus photos were evaluable for 285 (82.1%) participants with newly diagnosed diabetes. The weighted prevalence of diabetic retinopathy in screening-detected type 2 diabetes was 13.0%; 12% of participants had a mild non-proliferative diabetic retinopathy and 0.6% had a moderate non-proliferative diabetic retinopathy. Diabetic retinopathy was proliferative in 0.3%. No cases of severe non-proliferative diabetic retinopathy or diabetic maculopathy were found. Thirty (14.9%) of 202 and six (7.2%) of 83 individuals with and without concomitant arterial hypertension, respectively, had diabetic retinopathy (OR 2.54, 95% CI 1.06, 7.14).
Visual acuity did not differ between individuals with and without diabetic retinopathy. Conclusions/interpretation: In this large European study, the prevalence of diabetic retinopathy in screening-detected type 2 diabetes was 13%. Only a very small proportion of participants with detected diabetic retinopathy needed treatment. abstract_id: PUBMED:37223297 Diabetic neuropathy: Past, present, and future. Background: A sedentary lifestyle and an unhealthy diet have considerably increased the incidence of diabetes mellitus worldwide in recent decades, which has generated a high rate of associated chronic complications. Methods: A narrative review was performed in MEDLINE, EMBASE and SciELO databases, including 162 articles. Results: Diabetic neuropathy (DN) is the most common of these complications, mainly producing two types of involvement: sensorimotor neuropathy, whose most common form is symmetric distal polyneuropathy, and autonomic neuropathies, affecting the cardiovascular, gastrointestinal, and urogenital systems. Although hyperglycemia is the main metabolic alteration involved in its genesis, the presence of obesity, dyslipidemia, arterial hypertension, and smoking plays an additional role in its appearance. In the pathophysiology, three main phenomena stand out: oxidative stress, the formation of advanced glycosylation end-products, and microvasculature damage. Diagnosis is clinical, and it is recommended to use a 10 g monofilament and a 128 Hz tuning fork as screening tools. Glycemic control and non-pharmacological interventions constitute the mainstay of DN treatment, although there are currently investigations into antioxidant therapies, in addition to pain management. Conclusions: Diabetes mellitus causes damage to peripheral nerves, the most common form of which is distal symmetric polyneuropathy. Control of glycemia and comorbidities contributes to preventing, postponing, and reducing its severity. Pharmacological interventions are intended to relieve pain.
abstract_id: PUBMED:8218950 Hypertension and diabetes mellitus. The association of arterial hypertension and diabetes mellitus is frequent: one third of patients attending a diabetic clinic. Excess hypertension frequency is marked in type II, non-insulin-dependent diabetes, a condition often associated with other vascular risk factors such as obesity and lipid disorders. Insulin resistance is a common feature between type II diabetes, hypertension and other risk factors. In type I, insulin-dependent diabetes, hypertension is often linked to diabetic nephropathy. There is a genetic basis for diabetic nephropathy, which may share a common background with familial hypertension. Apart from possible genetic predispositions to the hypertension-diabetes association, chronic hyperglycaemia can lead to alteration in functional and structural properties of blood and vessels, which both contribute to elevated vascular resistance and blood pressure. From a therapeutic viewpoint, blood pressure values above 140/90 mmHg are not tolerable in diabetic subjects under 40 years of age. Due to their renal haemodynamic effects, angiotensin I converting enzyme inhibitors may be of special interest to protect kidney function in diabetic subjects. abstract_id: PUBMED:8894487 In vivo glucose metabolism, insulin secretion and insulin action in Europids with non-insulin-dependent diabetes mellitus (NIDDM) and their first-degree relatives. In this review we will mainly concentrate on the most common form of NIDDM in Europe, namely the form linked to overweight, arterial hypertension, dyslipoproteinaemia and coronary heart disease (CHD): the Insulin Resistance Syndrome (IRS). This form of NIDDM seems to be growing epidemically worldwide following industrial growth or 'cocacolanization', as it has been termed. Around 2-3% of the population in Europe suffers from this disease, but for subjects beyond 60 years of age the prevalence is 5-20%.
Thus, we face an enormous economic, social, and humanitarian challenge. Therefore it is important to continue the research on the aetiology and pathophysiology of this syndrome. The results of treatment of NIDDM patients (often started at 60 years of age) have been rather disappointing; even properly treated NIDDM patients develop substantial complications, especially macroangiopathy. Coronary heart disease is the main cause of death in these patients and the overall mortality rate in NIDDM patients is 3-4 times higher than in comparable non-diabetic subjects. Furthermore, several complications may already be present at the diagnosis of the disease, which indicates that macroangiopathy may not be secondary to the diabetic state itself, but rather a part of the NIDDM phenotype. Based on these findings it seems obvious that the IRS must be diagnosed in the prediabetic states before macroangiopathy has started or reached a clinically manifest level. In this review, therefore, we will discuss the metabolic background of NIDDM and especially focus on the pathophysiological mechanisms leading to hyperglycaemia, i.e. alterations in glucose effectiveness, insulin action and insulin secretion in prediabetic states. abstract_id: PUBMED:20369609 Cardiovascular disorders and possibilities of their therapy in patients with type 2 diabetes mellitus and diabetic nephropathy. The aim of the study was to assess specific cardiovascular lesions in patients with type 2 diabetes mellitus and diabetic nephropathy (DN) and search for the methods of their correction. It included 182 overweight or obese (abdominal type) women above 55 yr with arterial hypertension (AH) divided into groups with normal or low (less than 30 mg/day) albuminuria (n = 87), albuminuria (30-300 mg/day, n = 59), proteinuria (above 300 mg/day, n = 21), and stage I-IIa chronic renal insufficiency (CRI, n = 15).
It was shown that structural geometric changes in the left ventricle (LV) with the prevalence of myocardial concentric hypertrophy and diastolic dysfunction (DD), enhanced myocardial hardness, and preserved systolic function undergo progression with increasing severity of DN and decreasing glomerular filtration rate combined with poorly controlled DM2, abnormal lipid profile, long history of AH in the absence of adequate AP control, signs of vascular atherosclerosis (thickening of intima and media in carotid arteries), and a large number of macrovascular complications. DN-related insulin resistance (IR) was a factor influencing LV remodeling and DD. Long-term combined therapy affecting IR and markers of cardiovascular disorders (AH, chronic hyperglycemia, dyslipidemia) promoted improvement of LV diastolic function, reverse remodeling of LV myocardium, decrease of atherosclerotic lesions and albuminuria in patients presenting with both low albuminuria and DN; in addition, it improved prognosis of the disease. abstract_id: PUBMED:32019315 Metabolic syndrome, non-alcoholic fatty liver disease and hepatocarcinoma. The term "metabolic syndrome" refers to a group of alterations comprising central obesity, reduced high-density lipoprotein cholesterol concentrations, elevated triglyceride concentrations, arterial hypertension, and hyperglycemia. This syndrome has established itself as one of the epidemics of the 21st century. Among its causative agents are insulin resistance, leptin and adiponectin, changes in microbiota, and epigenetics. Its incidence in the European population is estimated to be around 25%. Non-alcoholic fatty liver disease is the hepatic manifestation of metabolic syndrome; its prevalence parallels that of obesity, and it has increased exponentially in recent decades.
Recently, several publications have linked metabolic risk factors with the onset and development of hepatocarcinoma, and so it is essential to determine whether patients with non-alcoholic fatty liver disease should follow a protocol for hepatocarcinoma screening. At present, the worldwide incidence of hepatocarcinoma in patients with non-alcoholic fatty liver disease without cirrhosis is only 2.7%. Screening for hepatocarcinoma in patients with non-alcoholic fatty liver disease and cirrhosis is mandatory, but the low incidence of hepatocarcinoma in patients without cirrhosis does not justify the systematic monitoring of this patient population. Current efforts are based on identifying subgroups of patients with non-alcoholic fatty liver disease and a higher-than-average risk of developing hepatocarcinoma. abstract_id: PUBMED:15955375 Can the atherosclerosis prevention targets be achieved in type 2 diabetes? Type 2 diabetes mellitus is, besides hyperglycemia, frequently accompanied by dyslipidemia, arterial hypertension, obesity and other atherosclerosis risks. The article summarizes the current status, demonstrating that we are far from the recommended targets, especially with diabetes, blood pressure and lipid control. More aggressive drug treatment together with lifestyle changes will be necessary to reach our goals in Type 2 diabetic patients. abstract_id: PUBMED:15499476 Prevalence of micro- and macroangiopathic chronic complications and their risk factors in the care of outpatients with type 2 diabetes mellitus. Background: Type 2 diabetes (DM2) has been related to the development of macroangiopathic [coronary heart disease (CHD), peripheral vascular disease (PVD) and stroke] and microangiopathic [retinopathy, nephropathy, and distal sensory neuropathy (DSN)] complications. The aims of this study were to analyze the prevalence of complications in DM2 patients and to estimate their associated risk factors.
Methods: Cross-sectional study, including 927 outpatients with DM2 from three medical centers in Rio Grande do Sul: Hospital de Clinicas de Porto Alegre (n = 475), Grupo Hospitalar Conceicao (n = 229) and Hospital Sao Vicente de Paula (n = 223). Of the patients, 42% were male, mean age was 59 +/- 10 years and the median known duration of DM2 was 11 (5-43) years. Retinopathy was identified by direct fundoscopy; CHD by WHO questionnaire and/or abnormal ECG and/or perfusion abnormalities on myocardial scintigraphy; DSN by compatible symptoms and absent sensation on 10 g monofilament and/or tuning fork; PVD by the presence of claudication and absent foot pulses; stroke by the presence of sequelae and history; and nephropathy by the urinary albumin excretion rate (>20 microg/min). Hypertension was defined by blood pressure (>140/90 mmHg) and/or use of antihypertensive drugs. Body mass index (BMI, kg/m2) and waist-to-hip ratio (WHR) were calculated. Results: CHD was present in 36% and PVD in 33% of the patients. Among the microvascular complications, 37% had nephropathy (12% with macroalbuminuria); 48% retinopathy (15% proliferative retinopathy). DSN was present in 36%. Seventy-three percent of the patients presented arterial hypertension. Cholesterol levels were >200 mg/dl in 64% and BMI > 30 kg/m2 in 36%. Twenty-two percent of patients were smokers and 21% ex-smokers. Conclusion: Diabetic complications are frequent among outpatients attending general hospitals. Almost all patients presented at least one risk factor for cardiovascular disease, justifying the efforts for identification and adequate control. Answer: The association between arterial hypertension and glycemia in non-diabetic subjects has been explored, but the evidence suggests that any observed association may not be independent of obesity.
A study from the EPIC-Potsdam Cohort Study in Germany, which included 1846 non-diabetic subjects, found that the highest quintiles of glycated hemoglobin (HbA1c) were initially associated with being hypertensive in univariate analysis. However, after adjusting for age and body mass index (BMI), the significant association with arterial hypertension status was completely removed. The odds ratio for men was 1.1 (95% CI 0.7-1.8), and for women, it was 0.9 (95% CI 0.5-1.4), indicating no significant independent association between HbA1c levels and arterial hypertension after accounting for age and BMI (PUBMED:10362457). This finding suggests that while there may be a co-occurrence of higher glycemia and arterial hypertension, the relationship is not statistically independent of other factors such as age and obesity. Therefore, based on this study, there is no clear evidence of an independent association between arterial hypertension and glycemia in non-diabetic subjects when obesity is taken into account. Further research is required to determine whether established arterial hypertension risk factors are confounders of the HbA1c-arterial hypertension association or potentially antecedent factors (PUBMED:10362457).
Instruction: Brain-derived proteins in the CSF: do they correlate with brain pathology in CJD? Abstracts: abstract_id: PUBMED:16989662 Brain-derived proteins in the CSF: do they correlate with brain pathology in CJD? Background: Brain-derived proteins such as 14-3-3, neuron-specific enolase (NSE), S 100b, tau, phosphorylated tau and Abeta1-42 were found to be altered in the cerebrospinal fluid (CSF) in Creutzfeldt-Jakob disease (CJD) patients. The pathogenic mechanisms leading to these abnormalities are not known, but a relation to rapid neuronal damage is assumed. No systematic analysis of brain-derived proteins in the CSF and neuropathological lesion profiles has been performed. Methods: CSF protein levels of brain-derived proteins and the degree of spongiform changes, neuronal loss and gliosis in various brain areas were analyzed in 57 CJD patients. Results: We observed three different patterns of CSF alteration associated with the degree of cortical and subcortical changes. NSE levels increased with lesion severity of subcortical areas. Tau and 14-3-3 levels increased with minor pathological changes; a negative correlation was observed with severity of cortical lesions. Levels of the physiological form of the prion protein (PrPc) and Abeta1-42 levels correlated negatively with cortical pathology, most clearly with temporal and occipital lesions. Conclusion: Our results indicate that the alteration of levels of brain-derived proteins in the CSF does not only reflect the degree of neuronal damage, but is also modified by the localization of the brain pathology. Brain-specific lesion patterns have to be considered when analyzing CSF neuronal proteins. abstract_id: PUBMED:25420158 Effective treatment of cerebral mucormycosis associated with brain surgery. Isolated cerebral mucormycosis is rare. We describe a patient with a brain tumor and a surgically-related Rhizopus oryzae brain abscess.
Her abscess was effectively treated with posaconazole, micafungin and colony-stimulating factor, followed by posaconazole alone. To our knowledge, cerebral mucormycosis secondary to brain surgery has not been previously reported. abstract_id: PUBMED:7763326 Microglia, the first line of defence in brain pathologies. Microglial cells account for approximately 20% of the total glial population in the central nervous system. They are distributed with no significant local differences in the white and grey matters. In contrast to astrocytes, they cover non-overlapping territories. They belong to the mononuclear phagocyte system and form the resident macrophages in the brain tissue, the spinal cord and the retina. Their function in the normal neural parenchyma is unknown. However, in various pathologies they form a most reactive sensor to threats to the nervous system. Within a few hours they exhibit an activation program that we have studied in seven different experimental paradigms, e.g. following nerve section, direct brain trauma, toxic lesion, spreading depression, ischemic lesion, fiber degeneration, autoimmune diseases. Activated microglial cells become immuno-competent and are MHC (major histocompatibility complex) class 1 and class 2 positive. They express the amyloid precursor protein, APP. The complement receptor CR3bi is quickly upregulated. The mitotic activity depends on the colony stimulating factors M-CSF and GM-CSF and the appropriate receptors. Molecules discussed as signals in the activation process of microglia are cytokines such as IL-1, IL-2, IL-6, TGF beta 1. An important role could also be attributed to the unique potassium channel of microglia. Brain macrophages of microglial origin have a strong respiratory burst activity, meaning that they produce oxygen radicals. They also possess Cathepsin B and L and thus are potentially cytotoxic.
Taken together, microglia are highly reactive, mobile and multifunctional immune cells of the CNS that can play a universal role in the defence of the neural parenchyma. abstract_id: PUBMED:6743398 Clinical value of the sequential study of the uric acid in CSF in patients with cerebral diseases: Part I. Brain tumor and the effect of irradiation Uric acid is the end product of the purine metabolism in the human and is mainly excreted to the urine. The studies on cerebrospinal fluid (CSF) uric acid in patients with various neurological diseases were reported in the literature. In the present study the authors discussed the clinical value of the sequential study of the CSF uric acid content in patients with brain tumors. CSF was investigated for uric acid in 23 controls and 30 cases of brain tumor. The results were as follows: The mean value and standard deviation of the uric acid in CSF in controls was 0.23 +/- 0.13 mg/dl. The uric acid in CSF increased in patients with malignant brain tumor (0.49 +/- 0.22 mg/dl, p < 0.005), but was in normal range in patients with benign brain tumor (0.32 +/- 0.13 mg/dl, 0.10 < p < 0.25). There was no significant correlation between CSF uric acid and CSF protein contents. Uric acid in the lumbar CSF was approximately 4 times higher than in the ventricular CSF in patients with brain tumor. The CSF uric acid had progressively increased during irradiation to the whole brain.
The factors thought to contribute to the increase of uric acid in CSF were: increased permeability of the blood-CSF barrier; global damage of brain tissue; increased nucleic acid catabolism in the central nervous system (CNS), for example in tumor, inflammation or immunoreaction; increased xanthine, hypoxanthine or xanthine oxidase activity in the CNS; direct entry of plasma components into the CSF due to events such as subarachnoid hemorrhage, intraventricular hemorrhage, bleeding in the tumor or surgical operation; and dysfunction of the CSF dynamics. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:19556771 Association of white-matter lesions with brain atrophy markers: the three-city Dijon MRI study. Background: Brain atrophy and white-matter lesions (WML) are common features at cerebral MRI of both normal and demented elderly people. In a population-based study of 1,792 elderly subjects aged 65-80 years, free of dementia, who had a cerebral MRI at entry, we investigated the relationship between WML volume and brain atrophy markers estimated by hippocampal, gray matter (GM) and cerebrospinal fluid (CSF) volumes. Methods: An automated algorithm of detection and quantification of WML was developed, and voxel-based morphometry methods were used to estimate GM, CSF and hippocampal volumes. To evaluate the relation between those volumes and WML load, we used analysis of covariance and multiple linear regression models adjusting for potential confounders and total intracranial volumes. Results: Age was highly correlated with WML load and all brain atrophy markers. Total WML volume was negatively associated with both GM (beta = -0.03, p < 0.0001) and hippocampal volumes (beta = -0.75, p = 0.0009) and positively with CSF volumes (beta = 0.008, p = 0.02) after controlling for sex, age, education level, hypertension and apolipoprotein E genotype. Evidence for a relationship between brain atrophy markers and WML was stronger for periventricular WML.
We found that the relationship between WML and hippocampal volumes was independent of other brain tissue volumes. Conclusion: These results suggest that, in the brain of nondemented elderly subjects, degenerative processes and vascular changes co-occur and are related independently of vascular risk factors. abstract_id: PUBMED:33249438 Cerebrospinal Fluid Features in Patients With Coronavirus Disease 2019 and Neurological Manifestations: Correlation with Brain Magnetic Resonance Imaging Findings in 58 Patients. Background: Neurological manifestations are common in patients with coronavirus disease 2019 (COVID-19), but little is known about pathophysiological mechanisms. In this single-center study, we examined neurological manifestations in 58 patients, including cerebrospinal fluid (CSF) analysis and neuroimaging findings. Methods: The study included 58 patients with COVID-19 and neurological manifestations in whom severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) reverse-transcription polymerase chain reaction screening and on CSF analysis were performed. Clinical, laboratory, and brain magnetic resonance (MR) imaging data were retrospectively collected and analyzed. Results: Patients were mostly men (66%), with a median age of 62 years. Encephalopathy was frequent (81%), followed by pyramidal dysfunction (16%), seizures (10%), and headaches (5%). CSF protein and albumin levels were increased in 38% and 23%, respectively. A total of 40% of patients displayed an elevated albumin quotient, suggesting impaired blood-brain barrier integrity. CSF-specific immunoglobulin G oligoclonal band was found in 5 patients (11%), suggesting an intrathecal synthesis of immunoglobulin G, and 26 patients (55%) presented identical oligoclonal bands in serum and CSF. Four patients (7%) had a positive CSF SARS-CoV-2 reverse-transcription polymerase chain reaction. Leptomeningeal enhancement was present on brain MR images in 20 patients (38%). 
Conclusions: Brain MR imaging abnormalities, especially leptomeningeal enhancement, and increased inflammatory markers in CSF are frequent in patients with neurological manifestations related to COVID-19, whereas SARS-CoV-2 detection in CSF remained scanty. abstract_id: PUBMED:28289935 Repeated intravenous administration of gadobutrol does not lead to increased signal intensity on unenhanced T1-weighted images-a voxel-based whole brain analysis. Objectives: To identify a possible association between repeated intravenous administration of gadobutrol and increased signal intensity in the grey and white matter using voxel-based whole-brain analysis. Methods: In this retrospective single-centre study, 217 patients with a clinically isolated syndrome underwent baseline brain magnetic resonance imaging and at least one annual follow-up examination with intravenous administration of 0.1 mmol/kg body weight of gadobutrol. Using the "Diffeomorphic Anatomical Registration using Exponentiated Lie algebra" (DARTEL) normalisation process, tissue templates for grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF) were calculated, as were GM-CSF and WM-CSF ratios. Voxel-based whole-brain analysis was used to calculate the signal intensity for each voxel in each data set. Paired t-test was applied to test differences to baseline MRI for significance. Results: Voxel-based whole-brain analysis demonstrated no significant changes in signal intensity of grey and white matter after up to five gadobutrol administrations. There was no significant change in GM-CSF and grey WM-CSF ratios. Conclusion: Voxel-based whole-brain analysis did not demonstrate increased signal intensity of GM and WM on unenhanced T1-weighted images after repeated gadobutrol administration. The molecular structure of gadolinium-based contrast agent preparations may be an essential factor causing SI increase on unenhanced T1-weighted images. 
Key Points: • Repeated administration of gadobutrol does not lead to increased signal intensity. • Voxel-based whole-brain analysis allows assessment of subtle changes in signal intensity. • Macrocyclic contrast agents in a proven dosage are safe. abstract_id: PUBMED:24094581 Effects of cerebrospinal fluid proteins on brain atrophy rates in cognitively healthy older adults. Biomarkers associated with Alzheimer's disease (AD)-like brain atrophy in healthy individuals may identify mechanisms involved in early stage AD. Aside from cerebrospinal fluid (CSF) β-amyloid42 (Aβ42) and tau, no studies have tested associations between CSF proteins and AD-like brain atrophy. We studied 90 healthy elders, who underwent lumbar puncture at baseline, and serial magnetic resonance imaging scans for up to 4 years. We tested statistical effects of baseline CSF proteins (N = 70 proteins related to Aβ42-metabolism, microglial activity, and synaptic/neuronal function) on atrophy rates in 7 AD-related regions. Besides the effects of Aβ42 and phosphorylated tau (P-tau) that were seen in several regions, novel CSF proteins were found to have effects in inferior and middle temporal cortex (including apolipoprotein CIII, apolipoprotein D, and apolipoprotein H). Several proteins (including S100β and matrix metalloproteinase-3) had effects that depended on the presence of brain Aβ pathology, as measured by CSF Aβ42. Other proteins (including P-tau and apolipoprotein D) had effects even after adjusting for CSF Aβ42. The statistical effects in this exploratory study were mild and not significant after correction for multiple comparisons, but some of the identified proteins may be associated with brain atrophy in healthy persons. Proteins interacting with CSF Aβ42 may be related to Aβ brain pathology, whereas proteins associated with atrophy even after adjusting for CSF Aβ42 may be related to Aβ-independent mechanisms. 
abstract_id: PUBMED:8482973 Cerebrospinal fluid proteins in patients with leucoaraiosis: possible abnormalities in blood-brain barrier function. Some CSF protein abnormalities have been proposed as a possible marker for vascular dementia. We studied the CSF protein levels and albumin ratio in 21 patients (mean age 64.04 +/- 7.5) with progressive bilateral motor impairment, and a CT picture of leucoaraiosis. Seven of these patients also presented with dementia. Twenty-seven Alzheimer's disease patients (mean age 59.59 +/- 5.30) without leucoaraiosis were taken as controls. We also evaluated the correlations of the albumin ratio values with the diagnosis of dementia, the severity of cognitive impairment, the degree of cerebral atrophy and presence of infarcts on CT, and the abnormalities in CSF circulation, found on isotopic cisternography, in the leucoaraiosis group. After controlling for age and sex, the patients with leucoaraiosis showed greater CSF albumin levels (0.27 g/l +/- 0.11 vs. 0.21 g/l +/- 0.06; covariance analysis P = 0.066), CSF IgG values (4.68 mg/100 ml +/- 1.45 vs. 2.85 mg/100 ml +/- 1.03; covariance analysis P < 0.001), and a higher albumin ratio (0.0078 +/- 0.0027 vs. 0.0058 +/- 0.0019; covariance analysis P = 0.013) than those with Alzheimer's disease. The variations of these parameters were not apparently related to the presence of dementia in the leucoaraiosis group. A significantly higher albumin ratio was observed in patients with a slowed CSF circulation compared to those with normal CSF circulation (0.0086 +/- 0.0028 vs. 0.0059 +/- 0.0019; covariance analysis P = 0.05). We conclude that, independently of the presence of dementia, patients with leucoaraiosis have CSF abnormalities consistent with functional blood-brain barrier alterations. abstract_id: PUBMED:33897370 Validation of Induced Microglia-Like Cells (iMG Cells) for Future Studies of Brain Diseases.
Microglia are the primary resident immune cells of the central nervous system that maintain physiological homeostasis in the brain and contribute to the pathogenesis of many psychiatric disorders and neurodegenerative diseases. Due to the lack of appropriate human cellular models, it is difficult to study the basic pathophysiological processes linking microglia to brain diseases. In this study, we adopted a microglia-like cellular model derived from peripheral blood monocytes with granulocyte-macrophage colony-stimulating factor (GM-CSF) and interleukin-34 (IL-34). We characterized and validated this in vitro cellular model by morphology, immunocytochemistry, gene expression profiles, and functional study. Our results indicated that the iMG cells developed typical microglial ramified morphology, expressed microglial specific surface markers (P2RY12 and TMEM119), and possessed phagocytic activity. Principal component analyses and multidimensional scaling analyses of RNA-seq data showed that iMG cells were distinct from monocytes and induced macrophages (iMacs) but clustered closer to human microglia and hiPSC-induced microglia. Heatmap analyses also found that iMG cells, but not monocytes, were closely clustered with human primary microglia. Further pathway and relative expression analysis indicated that unique genes from iMG cells were involved in the regulation of the complement system, especially in the synapse and ion transport. Overall, our data demonstrated that the iMG model mimicked many features of the brain resident microglia, highlighting its utility in the study of microglial function in many brain diseases, such as schizophrenia and Alzheimer's disease (AD). Answer: Yes, brain-derived proteins in the cerebrospinal fluid (CSF) do correlate with brain pathology in Creutzfeldt-Jakob disease (CJD). 
A study analyzed CSF protein levels of brain-derived proteins and the degree of spongiform changes, neuronal loss, and gliosis in various brain areas in 57 CJD patients. The results showed three different patterns of CSF alteration associated with the degree of cortical and subcortical changes. Specifically, neuron-specific enolase (NSE) levels increased with lesion severity in subcortical areas. Tau and 14-3-3 levels increased with minor pathological changes, but a negative correlation was observed with the severity of cortical lesions. Levels of the physiological form of the prion protein (PrPc) and Abeta1-42 levels correlated negatively with cortical pathology, particularly with temporal and occipital lesions. These findings indicate that alterations in the levels of brain-derived proteins in the CSF reflect not only the degree of neuronal damage but also the localization of the brain pathology. Therefore, specific lesion patterns in the brain must be considered when analyzing CSF neuronal proteins in CJD patients (PUBMED:16989662).
Instruction: Does dopexamine influence regional vascular tone and oxygenation during intestinal hypotension? Abstracts: abstract_id: PUBMED:12421194 Does dopexamine influence regional vascular tone and oxygenation during intestinal hypotension? Background: Local effects of dopexamine on intestinal vascular tone and oxygenation were investigated during intestinal hypotension. To this end, we employed an experimental model, in which the superior mesenteric arterial pressure (PSMA) was controlled by an adjustable perivascular clamp. This approach enabled us to keep the intestinal perfusion pressure (IPP) constant in the face of any systemic circulatory alterations. Methods: In 11 barbiturate-anesthetized pigs, we instrumented the superior mesenteric circulation for assessments of vascular resistance (RMES), IPP, jejunal mucosal perfusion (Laser Doppler) and intestinal tissue oxygenation (microoximetry). Measurements were carried out before and during dopexamine infusions (0.5 and 1.0 micro g.kg-1.min-1) at a freely variable PSMA (i.e. the perivascular clamp fully open) and at a PSMA of 50 mmHg and 30 mmHg. Results: At a constant PSMA of 50 mmHg, dopexamine had no significant intestinal vascular effects. However, at a constant PSMA of 30 mmHg, both doses of dopexamine were associated with decreases in RMES. Effects of dopexamine on intestinal oxygen delivery and extraction were minimal during these procedures, while a minor decrease in intestinal tissue oxygen tension was observed during dopexamine administration at the lowest IPP level. Conclusion: At very low intestinal perfusion pressures (approximately 30 mmHg) dopexamine produces intestinal vasodilation in excess of what is produced by intrinsic autoregulation. This suggests that there is a vasodilatory reserve in the intestine under such conditions and that a pharmacological vasodilator like dopexamine may improve intestinal circulation during regional severe hypotension. 
abstract_id: PUBMED:14770037 Local metabolic effects of dopexamine on the intestine during mesenteric hypoperfusion. This self-controlled experimental study was designed to test the hypothesis that dopexamine, a synthetic catecholamine that activates dopaminergic (DA-1) and beta2-adrenergic receptors, improves oxygenation in the jejunal mucosa during intestinal hypotension. In six normoventilated barbiturate-anesthetized pigs, controlled reductions in superior mesenteric arterial pressure (PSMA) was obtained by an adjustable clamp around the artery. Dopexamine infusions (0.5 and 1.0 microg.kg(-1).min(-1)) were administered at a freely variable PSMA (i.e., with the perivascular clamp fully open) and at a PSMA of 50 mmHg and 30 mmHg. We continuously measured superior mesenteric venous blood flow (QMES; transit-time ultrasonic flowmetry), jejunal mucosal perfusion (laser Doppler flowmetry), and tissue oxygen tension (PO2TISSUE; microoximetry). Jejunal luminal microdialysate of lactate, pyruvate, and glucose were measured every 5 min. Measurements of mucosal PCO2 (air tonometry), together with blood sampling and end-tidal PCO2 measurements, enabled calculations of pHi and PCO2 gap. Dopexamine reduced mesenteric vascular resistance and increased QMES at a PSMA of 50 mmHg and 30 mmHg. At a PSMA of 30 mmHg, dopexamine increased mesenteric oxygen delivery but did not influence mesenteric oxygen uptake or extraction. In this situation, dopexamine had no beneficial effect on jejunal mucosal blood flow. On the contrary, dopexamine increased mesenteric net lactate production and PCO2 gap, whereas PO2TISSUE and pHi decreased. Jejunal luminal microdialysate data demonstrated an increased lactate concentration and a pattern of decreased glucose concentration and increased luminal lactate-pyruvate ratio. These negative metabolic effects of dopexamine should be taken into account in situations of low perfusion pressures. 
abstract_id: PUBMED:4027482 The effects of dopexamine on the cardiovascular system of the dog. The cardiovascular effects of dopexamine and dopamine were compared in the anaesthetized and conscious dog by the use of intravenous infusions over the dose range 3 X 10(-9) - 10(-7)mol kg-1 min-1. In the anaesthetized dog, dopexamine produced a dose-related fall in blood pressure due to peripheral vasodilatation and a small rise in heart rate and contractility. By contrast, dopamine did not significantly reduce blood pressure but produced a larger dose-related increase in contractility. At the highest infusion rate (10(-7)mol kg-1 min-1) blood pressure and heart rate were increased by dopamine. Dopexamine dilated the renal and mesenteric vascular beds with a potency similar to that of dopamine. Femoral vascular responses produced by both agents were inconsistent but the highest infusion rate of dopamine did produce vasoconstriction. With the aid of selective receptor antagonists (haloperidol, propranolol and bulbocapnine) the vasodepressor activity of dopexamine was shown to be mediated by stimulation of DA2-, beta- and DA1-receptors. The cardiac stimulation and renal vasodilatation produced by both compounds were due to stimulation of beta-adrenoceptors and DA1-receptors respectively. In the conscious dog, intravenous infusion of dopexamine caused a dose-related fall in blood pressure, renal vasodilatation and an increase in cardiac contractility and heart rate. Dopamine also increased cardiac contractility, and renal blood flow due to renal vasodilatation but without affecting heart rate. At the highest infusion rate, blood pressure was increased. Dopexamine and dopamine produced a similar incidence of panting and repetitive licking at 3 X 10(-8)mol kg-1 min-1 and emesis at 10(-7)mol kg-1 min-1, due to stimulation of dopamine receptors in the chemoreceptor trigger zone. 
Dopexamine produces a different cardiovascular profile from dopamine in the anaesthetized and conscious dog. Both compounds reduce renal vascular resistance, but in contrast to dopamine, dopexamine reduces afterload and produces only mild inotropic stimulation. These differences reflect contrasting activity at adrenoceptors. abstract_id: PUBMED:14633761 Dopexamine reverses colonic but not gastric mucosal perfusion defects in lethal endotoxin shock. Background: Whilst dopexamine appears to increase overall splanchnic blood flow in postoperative and septic patients, the effects on gastric mucosal perfusion are controversial and based on concomitantly increasing mucosal to arterial PCO(2) gradients (PdCO(2)). We hypothesized that dopexamine alters splanchnic blood flow distribution and metabolism during experimental endotoxin shock and modifies the inflammatory response induced by endotoxin. Methods: In an experiment with anaesthetized normovolaemic, normoventilated pigs, 21 animals were randomized into: (i). subacute lethal endotoxin shock for 14 h (n=7 at baseline); (ii). endotoxin shock with dopexamine infusion (aiming to exceed baseline cardiac output, n=7); or (iii). controls (n=7). Regional blood flow and metabolism were monitored. Results: Endotoxin produced a hypodynamic phase followed by a normo/hyperdynamic, hypotensive phase. Despite increasing systemic blood flow in response to dopexamine, proportional splanchnic blood flow decreased during the hypodynamic phase. Dopexamine gradually decreased fractional coeliac trunk flow, while fractional superior mesenteric arterial flow increased. Dopexamine induced early arterial hyperlactataemia and augmented the gastric PdCO(2) gradient while colonic luminal lactate release and colonic PdCO(2) gradient were reversed. Dopexamine did not modify the inflammatory response as evaluated by arterial IL-1beta and IL-6 concentrations. 
Conclusions: Dopexamine protects colonic, but not gastric mucosal epithelium in experimental endotoxin shock. This may be related to redistribution of blood flow within the splanchnic circulation. abstract_id: PUBMED:8025917 Regional blood flow effects of dopexamine versus enalaprilat during propofol anaesthesia in rabbits with experimental chronic heart failure. Objective: Chronic congestive heart failure (CHF) was induced in rabbits with doxorubicin in order to evaluate: (1) haemodynamic and regional blood flow responses to propofol anaesthesia; (2) modification of these cardiovascular responses with background intravenous infusions of enalaprilat or dopexamine. Methods: Rabbits received either doxorubicin, 2 mg.kg-1 weekly intravenously for seven weeks (CHF, n = 6), or saline (controls, n = 6). Doppler flow probes were implanted on the ascending aorta, left renal artery, and lower abdominal aorta. In three separate studies propofol was infused for 40 min periods at 0.6 and then 1.2 mg.kg-1.min-1 after background infusions of either saline, enalaprilat (0.2 mg.kg-1 + 0.003 mg.kg-1.min-1), or dopexamine (0.008 mg.kg-1.min-1). Results: In normal rabbits propofol (1.2 mg.kg-1.min-1) reduced mean arterial pressure from awake control by 33(SEM 3)%, cardiac output by 24(4)%, and hindlimb blood flow (HBF) by 10(2)%, but did not change renal blood flow. In rabbits with CHF, although resting mean blood pressure was lower, propofol did not alter blood pressure or hindlimb blood flow, but renal blood flow was reduced by 37(6)%. Conclusions: Both enalaprilat and dopexamine increased renal blood flow in the control and CHF groups. Enalaprilat caused marked hypotension during anaesthesia in the CHF group. Dopexamine increased mean arterial pressure, heart rate, and hindlimb blood flow during anaesthesia in controls, but not in CHF. 
abstract_id: PUBMED:1284046 Systemic and regional oxygen uptake and lactate flux in endotoxic dogs resuscitated with dextran and dopexamine or dextran alone. We examined how volume resuscitation with and without dopexamine, a potent beta 2-adrenergic and dopaminergic agonist, would affect O2 uptake (VO2), O2 delivery (DO2), and lactate metabolism in whole body, skeletal muscle, and gut of 18 anesthetized dogs after they were infused for 1 hr with 2 mg/kg endotoxin. Six dogs were then infused with dextran for 2 hr at 0.5 ml/kg/min (DEX); another six were given the same volume with 12 micrograms/kg/min dopexamine (DPX); and a third group received no treatment (NT). After endotoxin, all animals were hypotensive; cardiac output decreased 30%; and DO2 to whole body, skeletal muscle, and gut were at or below critical for VO2. DO2 was raised above critical in whole body and gut of DEX and DPX but not in muscle of DPX, which produced lactate. Whole body DO2 approached preendotoxin level by 2 hr in DEX and DPX but stayed near critical in NT. At that time, gut VO2 was higher in DPX than in DEX or NT, and lactate efflux became less than in DEX. Vascular reactivity was present in the gut at the end of the experiment in six of six DPX studies, in four of six DEX studies, and in none of the NT studies. These results suggest that resuscitation with dopexamine + dextran after induction of endotoxic shock may be more beneficial than dextran alone, particularly for the gut mucosa. abstract_id: PUBMED:3407595 Changes in regional blood flow distribution induced by infusions of dopexamine hydrochloride or dobutamine in anesthetized dogs. Systemic blood flow distribution was determined using radionuclide-labeled microspheres in anesthetized dogs during infusions of dopexamine hydrochloride or dobutamine. 
Three doses of the drugs were administered intravenously, in the form of 22-minute infusions (3 X 10(-9), 10(-8), 3 X 10(-8) mol/kg/min [1.3 to 12.9 micrograms/kg/min] dopexamine hydrochloride and 10(-8), 3 X 10(-8), 10(-7) mol/kg/min [3.4 to 34.0 micrograms/kg/min] dobutamine). Both drugs induced dose-dependent acceleration of the heart rate. Blood pressure was maintained during infusions of dobutamine, while dopexamine hydrochloride induced arterial hypotension. As a result, the increments in heart rate-blood pressure product were smaller with dopexamine hydrochloride than with dobutamine. Dopexamine hydrochloride induced blood flow increments in the heart, several sections of the gastrointestinal tract, kidney and skeletal muscle. Quantitatively, these changes were different from those induced by dobutamine in only 3 organs. The myocardial blood flow increments during infusions of dopexamine hydrochloride were smaller than those of dobutamine, whereas in the stomach and in skeletal muscle, the flow increments induced by dopexamine hydrochloride were significantly greater than those induced by dobutamine. The findings suggest that dopexamine hydrochloride, by virtue of its agonist activity at beta 2-adrenergic and DA1- and DA2-dopaminergic receptors, is a powerful vasodilator. abstract_id: PUBMED:8097568 Evaluation of the effects of dopexamine, a novel DA1 receptor and beta 2-adrenoceptor agonist, on cardiac function and splanchnic circulation in a canine model of hemorrhagic shock. In the present studies, the efficacy of dopexamine hydrochloride, a novel DA1-receptor and beta 2-adrenoceptor agonist in preventing deterioration of cardiovascular function in a canine model of hemorrhagic shock was investigated. Pentobarbital-anesthetized dogs were allowed to bleed into a height-regulated reservoir and the hypotensive state (about 40 mmHg) was maintained for a period of 150 min. 
Subsequently, blood was reinfused and recoveries in various hemodynamic variables were monitored for an additional period of 120 min. Either aqueous solvent or dopexamine HCl was randomly selected for i.v. infusion beginning 30 min before reinfusion of the blood and until the termination of the experiment. In the solvent-treated control group, various cardiovascular variables such as cardiac output, stroke volume, celiac and superior mesenteric arterial blood flows progressively declined to 50% or less of the basal values; these changes were associated with sustained increases in the regional as well as systemic vascular resistances. Dopexamine infusion lowered vascular resistances and facilitated recoveries in various hemodynamic variables to 80% to 100% of the basal values after reinfusion of the shed blood. With the exception of a transient inotropic effect during reinfusion in the dopexamine treated group, there were no essential alterations in the myocardial contractility, during the hypotensive state and/or after reinfusion of the blood. Hence, the results indicate that the efficacy of dopexamine to reduce vascular resistance by actions at DA1-receptors and beta 2-adrenoceptors would account for its ability to improve myocardial performance (secondary to reductions in afterload) and restoration of mesenteric and celiac hemodynamics.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:22929242 Epinephrine, norepinephrine, dobutamine, and dopexamine effects on free flap skin blood flow. Background: The optimal sympathomimetic drug to support blood pressure without adverse vasoconstriction of free flap circulation remains unknown. This study examined the effects of four agents (epinephrine, norepinephrine, dobutamine, and dopexamine) on free flaps following resection of head and neck cancer. Methods: Twenty-four patients (25 data sets) were recruited into the study. 
Each patient received an infusion of the four drugs in a random order, with an intervening washout period between drugs, at four infusion rates. Continuous free flap skin blood flow monitoring was performed using laser Doppler velocimetry, with a second sensor on normal skin acting as a control. Global cardiovascular variables were monitored using the LiDCO Rapid Pulse Contour Analysis System (LiDCO Ltd., Cambridge, United Kingdom). Results: Dose-dependent, increased free flap skin blood flow was observed with norepinephrine and dobutamine. Both dopexamine and epinephrine infusions decreased blood flow. Flap skin blood conductance decreased (vasoconstriction) with norepinephrine, but markedly less than in control tissue, so overall the flap skin blood flow increased with increasing arterial blood pressure. Dobutamine increased flap skin conductance, without significantly increasing blood pressure, and modestly increased flap blood flow. Conclusions: Both dobutamine and norepinephrine had beneficial effects on flap skin blood flow. The maximal improvement in flow occurred with norepinephrine, making it the optimal pressor to use in patients with hypotension after free flap surgery. abstract_id: PUBMED:10909887 Gastric-arterial PCO2 gradient does not reflect systemic and splanchnic hemodynamics or oxygen transport after cardiac surgery. Gastric mucosal-arterial PCO2 gradient (P(g-a)CO2) is used to assess splanchnic perfusion and oxygenation. We evaluated whether P(g-a)CO2 reflects whole body (Q) and splanchnic (Qsp) blood flow, oxygen delivery (DO2) and consumption (VO2) after coronary artery by pass graft (CABG) operation. Thirty patients received dobutamine or dopexamine to increase cardiac index, 15 patients enalapril or sodium nitroprusside to lower blood pressure, and 30 patients were controls. We measured Q, Qsp (hepatic vein catheter and indocyanine green), and gastric mucosal PCO2 (nasogastric tonometer) before and after interventions. 
A multiple linear regression model showed that none of the changes in Q, Qsp, and splanchnic or systemic DO2 and VO2 significantly explained changes in P(g-a)CO2 (deltaP(g-a)CO2). All independent variables together explained only 7% of deltaP(g-a)CO2. Increased splanchnic blood flow (0.65 +/- .19 vs. 0.94 +/- .31 L/min/m2, P < 0.001) and increased splanchnic DO2 (101 +/- 28 vs. 143 +/- 42 mL/min/m2, P < 0.001) during catecholamine infusions were associated with increased P(g-a)CO2 (8 +/- 8 vs. 11 +/- 7 mmHg, P = 0.003). P(g-a)CO2 does not reflect whole body or splanchnic blood flow, DO2 or VO2 after CABG operations. The physiology of P(g-a)CO2 is complex and therefore it is difficult for clinicians to interpret changes in gastric mucosal-arterial PCO2 gradient in individual patients after cardiac surgery. Answer: Yes, dopexamine does influence regional vascular tone and oxygenation during intestinal hypotension. Studies have shown that dopexamine, a synthetic catecholamine that activates dopaminergic (DA-1) and beta2-adrenergic receptors, can have varying effects on intestinal vascular tone and oxygenation under conditions of intestinal hypotension. In an experimental model with pigs experiencing controlled reductions in superior mesenteric arterial pressure (PSMA), dopexamine infusions at low PSMA levels (30 mmHg) were associated with decreases in mesenteric vascular resistance (RMES), suggesting that dopexamine produces intestinal vasodilation in excess of what is produced by intrinsic autoregulation. This indicates a vasodilatory reserve in the intestine under conditions of severe regional hypotension, and that dopexamine may improve intestinal circulation during such events (PUBMED:12421194). However, another study found that while dopexamine reduced mesenteric vascular resistance and increased mesenteric oxygen delivery at a PSMA of 30 mmHg, it did not influence mesenteric oxygen uptake or extraction.
In this situation, dopexamine had no beneficial effect on jejunal mucosal blood flow and was associated with negative metabolic effects, such as increased mesenteric net lactate production and a decrease in tissue oxygen tension (PUBMED:14770037). In a broader context, dopexamine has been shown to produce a dose-related fall in blood pressure due to peripheral vasodilatation in dogs, dilating the renal and mesenteric vascular beds with a potency similar to that of dopamine (PUBMED:4027482). Additionally, dopexamine has been found to reverse colonic but not gastric mucosal perfusion defects in lethal endotoxin shock in pigs, suggesting that it can protect colonic mucosal epithelium in such conditions (PUBMED:14633761). Overall, dopexamine appears to have the potential to improve intestinal circulation and oxygenation during regional severe hypotension by causing vasodilation, but its effects may vary depending on the specific conditions and the region of the intestine being considered.
Instruction: Does the target dose of neurohormonal blockade matter for outcome in Systolic heart failure in octogenarians? Abstracts: abstract_id: PUBMED:25880407 Does the target dose of neurohormonal blockade matter for outcome in Systolic heart failure in octogenarians? Background: In elderly patients with chronic heart failure (CHF), a gap exists between widespread use of lower doses of angiotensin-converting enzyme inhibitors (ACEIs)/angiotensin-receptor blockers (ARBs) and β-blockers (BBs) and guideline recommendations. Therefore, the aim of the present study was to investigate whether patients receiving ≥ 50% target dose outperform those receiving <50% target dose, despite maximum up-titration, and whether the target dose outperforms all other doses. Methods And Results: Patients (n=185) aged ≥ 80 years with CHF and left ventricular ejection fraction ≤ 40% referred (between January 2000 and January 2008) to two CHF outpatient clinics at two university hospitals, were included and retrospectively studied. Of the study population, 53% received the target dose of ACEIs/ARBs, whereas 26% received <50% of the target dose. Half received <50% of the target dose of BBs and 21% received the target dose. After ≥ 5 years of follow-up, all-cause mortality was 76.8%. Patients who received the target dose of ACEIs/ARBs had higher survival rates from all-cause mortality than those receiving <50% of target dose (HR=0.6, 95%CI 0.4-0.9, P=0.033), but those receiving ≥ 50% of target dose did not statistically differ from those who achieved target dose. This dose-survival relationship was not the case for BBs. Conclusions: Target dose of ACEIs/ARBs is associated with reduced all-cause five-year mortality in very old patients with systolic heart failure, despite that this was achievable in only about half of the patients. However, the clinical outcome of BB therapy is independent of BB dose when the target heart rate is achieved.
abstract_id: PUBMED:28785471 Neurohormonal Blockade in Heart Failure. A key feature of chronic heart failure (HF) is the sustained activation of endogenous neurohormonal systems in response to impaired cardiac pumping and/or filling properties. The clinical use of neurohormonal blockers has revolutionised the care of HF patients over the past three decades. Drug therapy that is active against imbalance in both the autonomic and renin-angiotensin-aldosterone systems consistently reduces morbidity and mortality in chronic HF with reduced left ventricular ejection fraction and in sinus rhythm. This article provides an assessment of the major neurohormonal systems and their therapeutic blockade in patients with chronic HF. abstract_id: PUBMED:36065771 Hyperkalemia as a limiting factor of neurohormonal blockade/modulation in everyday clinical practice. Introduction And Objectives: Neurohormonal blockade (NB)/modulation is the combination of two renin-angiotensin-aldosterone system inhibitors (RAASi) with a beta blocker. It is the core therapy for heart failure with reduced ejection fraction (HFrEF). While improving long term prognosis, it also induces hyperkalemia (serum K+ >5.0 mEq/L) due to RAASi effects. This may cause lethal arrhythmias and increase mortality in the short term. Thus, hyperkalemia frequently leads to withholding or reducing the intensity of neurohormonal blockade/modulation, which is associated with worsening long term prognosis. We assessed the relevance of hyperkalemia as a limiting factor of neurohormonal blockade/modulation in real life clinical conditions. Methods: We reviewed the medical records of HFrEF patients attending a HF clinic at a tertiary Portuguese hospital during 2018 (n=240). The number of patients not tolerating maximal neurohormonal blockade/modulation due to hyperkalemia was determined. The incidence and characteristics of hyperkalemia episodes were also assessed.
Results: Only six patients (3%) achieved maximal doses of neurohormonal blockade/modulation. Hyperkalemia was the limiting factor in 48 (20%) patients. A total of 185 hyperkalemia episodes occurred in 100 (42%) patients. Forty-five (24%) episodes were moderate or severe (serum K+ >5.5 mEq/L). In these HFrEF patients, the co-existence of hypertension, diabetes or renal failure was associated with the occurrence of hyperkalemia. Conclusions: In daily clinical practice, hyperkalemia is frequent and limits neurohormonal blockade/modulation by leading to the withholding or reducing of the intensity of RAAS inhibition. Considering the negative prognostic impact associated with sub-optimal neurohormonal blockade/modulation, addressing hyperkalemia is an important issue when treating HFrEF patients. abstract_id: PUBMED:36676199 The Neurohormonal Overactivity Syndrome in Heart Failure. Heart failure (HF) is categorized arbitrarily based on the left ventricular ejection fraction (LVEF) in HF with reduced (HFrEF; LVEF < 40%), mildly reduced (HFmrEF; LVEF 40−49%), or preserved ejection fraction (HFpEF; LVEF ≥ 50%). In this opinion paper, based on (patho)physiological considerations, we contend that the neurohormonal overactivity syndrome (NOHS), which is present in all symptomatic HF patients irrespective of their LVEF, not only contributes to the development of signs and symptoms but it is also a major determinant of patients’ outcomes. In this regard, NOHS is the only currently available treatment target in HF and should be combatted in most patients with the combined use of diuretics and neurohormonal inhibitors (β-blockers, angiotensin receptor-neprilysin inhibitor/angiotensin-converting enzyme inhibitors/angiotensin receptor blockers, mineralocorticoid antagonists, and sodium-glucose co-transporter 2 inhibitors). Unfortunately, despite the advances in therapeutics, HF mortality remains high.
Probably machine learning approaches could better assess the multiple and higher-dimension interactions leading to the HF syndrome and define clusters of HF treatment efficacy. abstract_id: PUBMED:32838804 Effects of neurohormonal antagonists on blood pressure in patients with heart failure with reduced ejection fraction (HFrEF): a systematic review protocol. Background: Several cardiovascular pathologies cause heart failure. Heart failure with reduced ejection fraction (HFrEF) is deteriorated by neurohormonal activation, so neurohormonal antagonists are recommended in HFrEF patients. They improve morbidity, mortality, and quality of life and reduce hospital admissions. Heart failure treatment guidelines recommend achieving target doses of those drugs. However, many clinicians prescribe suboptimal doses for the fear of inducing hypotension. The aim of this systematic review and meta-analysis is to understand whether it is still beneficial to uptitrate the doses of those drugs even if the patient is at the risk of developing hypotension. Methods: The primary outcome is symptomatic or asymptomatic hypotension in patients on neurohormonal antagonist drugs for HFrEF. Secondary outcomes are blood pressure reduction, New York Heart Association functional class deterioration, non-fatal cardiovascular events, cardiovascular mortality, all-cause mortality, heart failure hospitalizations, and adverse events. Randomized controlled trials involving adults with HFrEF will be included. Comprehensive literature search will be done in MEDLINE, Scopus, Web of Science, WHO Global Index Medicus, and the Cochrane Central Register of Controlled Trials. MEDLINE will be searched first using controlled vocabulary and free text terms and then adapted to other databases. Linear and nonlinear dose-response meta-analyses will be conducted. Publication bias and statistical heterogeneity will be tested by Egger's regression and Cochran's Q tests, respectively.
Sensitivity, subgroup, and meta-regression analyses will be performed. Grading of Recommendations Assessment, Development and Evaluation approach will be used to judge the quality of evidence. Discussion: This systematic review and meta-analysis will provide information about the risk of hypotension in patients on neurohormonal antagonist drugs for HFrEF. The results will be published in a peer-reviewed journal. The implications for further research will be discussed. Systematic Review Registration: PROSPERO CRD42019140307. abstract_id: PUBMED:31029493 Neurohormonal modulation: The new paradigm of pharmacological treatment of heart failure. The current paradigm of medical therapy for heart failure with reduced ejection fraction (HFrEF) is triple neurohormonal blockade with an angiotensin-converting enzyme inhibitor (ACEI), a beta-blocker (BB) and a mineralocorticoid receptor antagonist (MRA). However, three-year mortality remains over 30%. Stimulation of counter-regulatory systems in addition to neurohormonal blockade constitutes a new paradigm, termed neurohormonal modulation. Sacubitril/valsartan is the first element of this new strategy. PARADIGM-HF was the largest randomized clinical trial conducted in HFrEF. It included 8442 patients and compared the efficacy and safety of sacubitril/valsartan versus enalapril. The primary endpoint was the composite of cardiovascular mortality and hospitalization due to HF, which occurred in 914 (21.8%) patients receiving sacubitril/valsartan and in 1117 (26.5%) patients receiving enalapril (HR 0.8, 95% CI 0.73-0.87, p=0.0000002; NNT 21). Sacubitril/valsartan reduced both primary endpoint components, as well as sudden cardiac death, death due to worsening HF, and death from all causes. Patients on sacubitril/valsartan reported less frequent deterioration of HF and of quality of life, and discontinued study medication less frequently because of an adverse event. 
PARADIGM-HF demonstrated the superiority of sacubitril/valsartan over enalapril, with a 20% greater impact on cardiovascular mortality compared to ACEIs. Accordingly, in 2016, the European (ESC) and American (ACC/AHA/HFSA) cardiology societies simultaneously issued a class I recommendation for the replacement of ACEIs by sacubitril/valsartan in patients resembling PARADIGM-HF trial participants. abstract_id: PUBMED:32063405 The historical evolution of knowledge of the involvement of neurohormonal systems in the pathophysiology and treatment of heart failure. Our knowledge of the pathophysiology of heart failure (HF) underwent profound changes during the 1980s. Once thought to be of exclusively structural origin, HF began to be seen as the consequence of hormonal imbalance. A number of seminal studies were published in that decade focusing on the impact of neurohormonal activation in HF. Presently, eight neurohormonal systems are known to have a key role in HF development: four stimulate vasoconstriction and sodium/water retention (the sympathetic nervous system, the renin-angiotensin-aldosterone system [RAAS], endothelin, and the vasopressin-arginine system), while the other four stimulate vasodilation and natriuresis (the prostaglandin system, nitric oxide, the dopaminergic system, and the natriuretic peptide system [NPS]). These systems are strongly interconnected and are subject to intricate regulation, functioning together in a delicate homeostasis. Disruption of this homeostasis is characteristic of HF. This review explores the historical development of knowledge on the impact of the neurohormonal systems on HF pathophysiology, from the first studies to current understanding. In addition, the therapeutic potential of each of these systems is discussed, and currently used neurohormonal antagonists are characterized. Special emphasis is given to the latest drug approved for use in HF with reduced ejection fraction, sacubitril/valsartan. 
This drug combines two different molecules, acting on two different systems (RAAS and NPS) simultaneously. abstract_id: PUBMED:37427009 Pathophysiological Rationale and Clinical Evidence for Neurohormonal Modulation in Heart Failure with Preserved Ejection Fraction. Heart failure with preserved ejection fraction (HFpEF) is a heterogeneous syndrome resulting from the interaction between cardiac diseases, comorbidities and ageing. HFpEF is characterised by the activation of neurohormonal axes, namely of the renin-angiotensin-aldosterone system and the sympathetic nervous system, although to a lesser extent compared with heart failure with reduced ejection fraction. This provides a rationale for neurohormonal modulation as a therapeutic approach for HFpEF. Nonetheless, randomised clinical trials have failed to demonstrate a prognostic benefit from neurohormonal modulation therapies in HFpEF, with the sole exception of patients with left ventricular ejection fraction in the lower range of normality, for whom the American guidelines suggest that such therapies may be considered. In this review, the pathophysiological rationale for neurohormonal modulation in HFpEF is summarised and the clinical evidence on pharmacological and nonpharmacological approaches backing current recommendations discussed. abstract_id: PUBMED:28980368 Similar clinical benefits from below-target and target dose enalapril in patients with heart failure in the SOLVD Treatment trial. Aims: To examine associations of below-target and target dose of enalapril, an angiotensin-converting enzyme (ACE) inhibitor, with outcomes in patients with heart failure and reduced ejection fraction (HFrEF) in the Studies of Left Ventricular Dysfunction (SOLVD) Treatment trial. Methods And Results: Two thousand five hundred and sixty-nine patients with HFrEF (ejection fraction ≤35%) were randomized to below-target (5-10 mg/day) dose placebo (n = 1284) or enalapril (n = 1285). 
One month post-randomization, blind up-titration to target (20 mg/day) dose was attempted for both study drugs in 2458 patients. Among the 1444 patients who achieved dose up-titration (placebo, n = 748; enalapril, n = 696; mean dose for both groups, 20.0 mg/day), target dose enalapril (vs. target dose placebo) was associated with a 9% absolute lower risk of the combined endpoint of heart failure hospitalization or all-cause mortality [adjusted hazard ratio (HR) 0.70; 95% confidence interval (CI) 0.60-0.81; P < 0.001] during 4 years of follow-up. Among the 1014 patients who could not achieve target dose (placebo, n = 486; enalapril, n = 528; mean dose for both groups, 8.8 mg/day), below-target dose enalapril (vs. below-target dose placebo) was associated with a 12% absolute lower risk of the combined endpoint of heart failure hospitalization or all-cause mortality (adjusted HR 0.68; 95% CI 0.57-0.81; P < 0.001). Among the 1224 patients receiving enalapril, target (vs. below-target) dose had no association with the combined endpoint of heart failure hospitalization or all-cause mortality (adjusted HR 1.04; 95% CI 0.87-1.23; P = 0.695). Conclusion: In patients with HFrEF, the clinical benefits of ACE inhibitors appear to be similar at both below-target and target doses. abstract_id: PUBMED:27568454 Treatment with Optimal Dose Angiotensin-Converting Enzyme Inhibitors/Angiotensin Receptor Blockers Has a Positive Effect on Long-Term Survival in Older Individuals (Aged >70 Years) and Octogenarians with Systolic Heart Failure. Background: Heart failure with reduced ejection fraction (HFrEF) is a disease of older people, but the target doses of angiotensin-converting enzyme inhibitors/angiotensin receptor blockers (ACEIs/ARBs) are unknown. Objective: To evaluate the association of ACEI/ARB dose level with long-term survival in stable older patients (aged >70 years) and octogenarian outpatients with HFrEF.
Population And Methods: A total of 138 outpatients aged >70 years (35.5 % > 80 years), with an LVEF <40 % and who were clinically stable on optimal therapy were followed up for 3 years. The ACEI/ARB doses were categorized as: none (0), low (1-50 % target dose), and high (50-100 % target dose). The Cox regression survival model was adjusted for age, ischemic etiology, and renal function. Results: ACEIs/ARBs were prescribed to 91.3 % of patients, and 52.9 % received the high dose. Survival improved with increasing ACEI/ARB dose level in the total population (Hazard Ratio [HR] = 0.67; 95 % confidence interval [CI] 0.55-0.82; p < 0.001), older patients aged >70 years (HR = 0.65; 95 % CI 0.51-0.83; p < 0.001), and octogenarians (HR = 0.71; 95 % CI 0.51-0.99; p = 0.045). The low (HR = 0.35; 95 % CI 0.16-0.76; p = 0.008) and high doses (HR = 0.13; 95 % CI 0.06-0.32; p < 0.001) improved survival compared with not receiving ACEIs/ARBs. The high dose was associated with a better survival than the low dose in the total population (HR = 0.35; 95 % CI 0.19-0.67; p = 0.001) and in a propensity score-matched cohort (HR = 0.41; 95 % CI 0.16-1.02; p = 0.056). In octogenarians, all dose levels were associated with improved survival compared with not receiving ACEIs/ARBs, but there was no difference between ACEI/ARB doses. Conclusion: The achieved optimal dose of ACEIs/ARBs in ambulatory older people with HFrEF is associated with long-term survival. Answer: The target dose of neurohormonal blockade does appear to matter for outcomes in systolic heart failure in octogenarians. A study found that very old patients with systolic heart failure who received the target dose of ACEIs/ARBs had higher survival rates from all-cause mortality than those receiving less than 50% of the target dose (HR=0.6, 95%CI 0.4-0.9, P=0.033).
However, there was no statistical difference in survival between patients who received at least 50% of the target dose and those who achieved the target dose. This dose-survival relationship was not observed for beta-blockers (BBs), as the clinical outcome of BB therapy was independent of the dose when the target heart rate was achieved (PUBMED:25880407). Additionally, another study indicated that the achieved optimal dose of ACEIs/ARBs in ambulatory older people with heart failure with reduced ejection fraction (HFrEF) is associated with long-term survival. This was true for the total population studied, older patients aged over 70 years, and octogenarians. The study showed that both low (1-50% target dose) and high (50-100% target dose) doses improved survival compared with not receiving ACEIs/ARBs, with the high dose associated with better survival than the low dose in the total population. In octogenarians, all dose levels were associated with improved survival compared with not receiving ACEIs/ARBs, but there was no difference between ACEI/ARB doses (PUBMED:27568454). These findings suggest that while achieving the target dose of ACEIs/ARBs is associated with reduced mortality in very old patients with systolic heart failure, even receiving doses below the target but above 50% can be beneficial. However, for octogenarians, any dose level of ACEIs/ARBs was associated with improved survival compared to not receiving these medications, indicating that some degree of neurohormonal blockade is better than none, even if the target dose cannot be achieved.
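Under a proportional-hazards assumption, a reported hazard ratio can be translated into an approximate absolute survival difference via S_treated(t) = S_control(t)^HR. The minimal sketch below plugs in the 76.8% five-year all-cause mortality and the HR of 0.6 reported in the abstracts above; note that using the whole-cohort mortality as the comparator baseline is a simplifying assumption, so the numbers are purely illustrative:

```python
import math


def survival_under_hr(control_survival: float, hazard_ratio: float) -> float:
    """Under proportional hazards, S_treated(t) = S_control(t) ** HR."""
    return control_survival ** hazard_ratio


# Illustrative inputs: overall 5-year all-cause mortality of 76.8%
# (PUBMED:25880407) gives a baseline survival of ~0.232; HR = 0.6
# for target-dose ACEIs/ARBs versus <50% of target dose.
s_control = 1 - 0.768
s_target = survival_under_hr(s_control, 0.6)  # ~0.416

print(f"target-dose 5-year survival ~ {s_target:.3f}")
print(f"absolute survival gain ~ {s_target - s_control:.3f}")
```

The exponent relationship holds at every time point under proportional hazards, which is why a constant HR of 0.6 translates into a larger absolute benefit when baseline survival is low, as in this very elderly cohort.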
Instruction: Does drainage hole size influence adhesion on ventricular catheters? Abstracts: abstract_id: PUBMED:21476036 Does drainage hole size influence adhesion on ventricular catheters? Purpose: Ventricular catheter drainage holes of shunt systems used to treat hydrocephalus obstruct with tissue commonly comprising monocytes/macrophages, astrocytes, and giant cells. Despite high rates of obstruction, very few studies have manipulated drainage hole orientation, number, position, or diameter. By altering the hole diameter but maintaining a constant hole surface area, we manipulated shear stress through the holes, which we hypothesized would change the degree of macrophage and astrocyte attachment. Methods: First, a hole fabrication method was chosen from two fabrication techniques including punched holes in catheter tubing and constructed holes using nanofabrication techniques. Results: Punched holes were chosen to vary hole size from 282 to 975 μm because (1) samples were geometrically similar to commercially available ventricular catheters without significant microscopic differences in roughness values and (2) total macrophage and astrocyte adhesion on the punched holes was not significantly different from adhesion on the commercially available catheters. Overall adhesion from least to most adherent appeared to follow 975 < 754 ≈ 500 < 282-μm hole diameter for macrophages and 975 < 500 < 754 < 282 for astrocytes with an obvious dependency on catheter orientation with respect to the horizontal; a dependency on the proximity of the hole to the catheter tip was not observed. Conclusion: This study suggests that macrophage and astrocyte adhesion generally decreases with increasing hole diameter under flow conditions and underscores the necessity for future work to examine how hole diameter impacts inflammatory-based shunt obstruction.
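The study's premise — that varying hole diameter at constant total hole area varies the shear stress — can be illustrated with a simple Poiseuille estimate: with holes of diameter d sharing a fixed combined cross-section A and total flow Q, the wall shear reduces to τ = 8μQ/(A·d), so shear rises as diameter shrinks. The sketch below is not the study's flow model, and the viscosity, flow rate, and area values are assumed placeholders:

```python
import math


def wall_shear_per_hole(mu: float, total_flow: float, total_area: float, d: float) -> float:
    """Poiseuille wall shear in one of n holes of diameter d whose combined
    cross-section is fixed at total_area: tau = 32*mu*q_hole/(pi*d**3).
    With n = 4*total_area/(pi*d**2) holes sharing the flow equally,
    this reduces algebraically to tau = 8*mu*total_flow/(total_area*d)."""
    n_holes = 4 * total_area / (math.pi * d ** 2)
    q_hole = total_flow / n_holes
    return 32 * mu * q_hole / (math.pi * d ** 3)


mu = 0.7e-3      # Pa*s, assumed CSF-like viscosity
q = 0.3e-6 / 60  # m^3/s, assumed ~0.3 mL/min CSF production rate
area = 1.0e-6    # m^2, assumed fixed combined hole area

# Shear grows as the hole diameters used in the study shrink.
for d_um in (975, 754, 500, 282):
    tau = wall_shear_per_hole(mu, q, area, d_um * 1e-6)
    print(f"{d_um} um hole: wall shear ~ {tau:.4f} Pa")
```

Under this estimate, halving the hole diameter doubles the wall shear, which is consistent with the idea that the 282-μm holes exposed cells to the highest shear in the study's constant-area design.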
abstract_id: PUBMED:27455059 Influence of the hole geometry on the flow distribution in ventricular catheters for hydrocephalus. Background: Hydrocephalus is a medical condition consisting of an abnormal accumulation of cerebrospinal fluid within the brain. A catheter is inserted in one of the brain ventricles and then connected to an external valve to drain the excess of cerebrospinal fluid. The main drawback of this technique is that, over time, the ventricular catheter ends up getting blocked by the cells and macromolecules present in the cerebrospinal fluid. A crucial factor influencing this obstruction is a non-uniform flow pattern through the catheter, since it facilitates adhesion of suspended particles to the walls. In this paper we focus on the effects that tilted holes as well as conical holes have on the flow distribution and shear stress. Methods: We have carried out 3D computational simulations to study the effect of the hole geometry on the cerebrospinal fluid flow through ventricular catheters. All the simulations were done with the OpenFOAM® toolbox. In particular, three different groups of models were investigated by varying (i) the tilt angles of the holes, (ii) the inner and outer diameters of the holes, and (iii) the distances between the so-called hole segments. Results: The replacement of cylindrical holes by conical holes was found to have a strong influence on the flow distribution and to lower slightly the shear stress. Tilted holes did not involve flow distribution changes when the hole segments are sufficiently separated, but the mean shear stress was certainly reduced. Conclusions: The authors present new results about the behavior of the fluid flow through ventricular catheters. These results complete earlier work on this topic by adding the influence of the hole geometry. The overall objective pursued by this research is to provide guidelines to improve existing commercially available ventricular catheters. 
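One way to see why conical holes change the flow split is to compare lumped hydraulic resistances. The sketch below uses a quasi-one-dimensional Poiseuille integral, R = (8μ/π)∫dz/r(z)^4, which is a strong simplification of the paper's 3D OpenFOAM simulations; the wall thickness, radii, and viscosity are assumed values for illustration only:

```python
import math


def poiseuille_resistance(radius_profile, length, mu, n=10_000):
    """Lumped hydraulic resistance R = (8*mu/pi) * integral dz / r(z)**4,
    valid only for slowly varying, low-Reynolds-number channels (a crude
    stand-in for a full 3D CFD solution)."""
    dz = length / n
    total = sum(dz / radius_profile((i + 0.5) * dz) ** 4 for i in range(n))
    return 8 * mu * total / math.pi


mu = 0.7e-3    # Pa*s, assumed CSF-like viscosity
wall = 0.5e-3  # m, assumed catheter wall thickness (hole length)

r_cyl = 250e-6                 # cylindrical hole radius
r_in, r_out = 200e-6, 300e-6   # conical hole with the same mean radius

R_cyl = poiseuille_resistance(lambda z: r_cyl, wall, mu)
R_cone = poiseuille_resistance(lambda z: r_in + (r_out - r_in) * z / wall, wall, mu)

# Because 1/r^4 is convex, the tapered hole has the higher resistance
# even though its mean radius matches the cylindrical hole.
print(f"cylindrical: {R_cyl:.3e} Pa*s/m^3, conical: {R_cone:.3e} Pa*s/m^3")
```

Changing the per-hole resistance this way redistributes flow among the hole segments, which is the lumped-parameter analogue of the flow-distribution effect the simulations report.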
abstract_id: PUBMED:10413162 Presence of vitronectin and activated complement factor C9 on ventriculoperitoneal shunts and temporary ventricular drainage catheters. Object: The pathogenesis of cerebrospinal fluid (CSF) shunt infection is characterized by staphylococcal adhesion to the polymeric surface of the shunt catheter. Proteins from the CSF--fibronectin, vitronectin, and fibrinogen--are adsorbed to the surface of the catheter immediately after insertion. These proteins can interfere with the biological systems of the host and mediate staphylococcal adhesion to the surface of the catheter. In the present study, the presence of fibronectin, vitronectin, and fibrinogen on CSF shunts and temporary ventricular drainage catheters is shown. The presence of fragments of fibrinogen is also examined. Methods: The authors used the following methods: binding radiolabeled antibodies to the catheter surface, immunoblotting of catheter eluates, and scanning force microscopy of immunogold bound to the catheter surface. The immunoblot showed that vitronectin was adsorbed in its native form and that fibronectin was degraded into small fragments. Furthermore, the study demonstrated that the level of vitronectin in CSF increased in patients with an impaired CSF-blood barrier. To study complement activation, an antibody that recognizes the neoepitope of activated complement factor C9 was used. The presence of activated complement factor C9 was shown on both temporary catheters and shunts. Conclusions: Activation of complement close to the surface of an inserted catheter could contribute to the pathogenesis of CSF shunt infection. abstract_id: PUBMED:24908221 Infection rate of emergency bolt-kit vs. non-emergency conventional implanted silver bearing external ventricular drainage catheters. Background: Bolt-kit systems are increasingly used as an alternative to conventional external cerebrospinal fluid (CSF) drainage systems. 
Since 2009 we regularly utilize bolt-kit external ventricular drainage (EVD) systems with silver-bearing catheters inserted manually with a hand drill and skull screws for emergency ventriculostomy. For non-emergency situations, we use conventional ventriculostomy with subcutaneous tunneled silver-bearing catheters, performed in the operating room with a pneumatic drill. This retrospective analysis compared the two techniques in terms of infection rates. Methods: 152 patients (aged 17-85 years, mean=55.4 years) were included in the final analysis; 95 received bolt-kit silver-bearing catheters and 57 received conventionally implanted silver-bearing catheters. The primary endpoint combined infection parameters: occurrence of positive CSF culture, colonization of catheter tips, or elevated CSF white blood cell counts (>4/μl). Secondary outcome parameters were presence of microorganisms in CSF or on catheter tips. Incidence of increased CSF cell counts and number of patients with catheter malposition were also compared. Results: The primary outcome, defined as analysis of combined infection parameters (occurrence of either positive CSF culture, colonization of the catheter tips or raised CSF white blood cell counts >4/μl) was not significantly different between the groups (58.9% bolt-kit group vs. 63.2% conventionally implanted group, p=0.61, chi-square-test). The bolt-kit group was non-inferior and not superior to the conventional group (relative risk reduction of 6.7%; 90% confidence interval: -19.9% to 25.6%). Secondary outcomes showed no statistically significant difference in the incidence of microorganisms in CSF (2.1% bolt-kit vs. 5.3% conventionally implanted; p=0.30; chi-square-test). Conclusions: This analysis indicates that silver-bearing EVD catheters implanted with a bolt-kit system outside the operating room do not significantly elevate the risk of CSF infection as compared to conventional implant methods.
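The non-inferiority claim in this abstract rests on an interval around the relative risk reduction. Below is a generic sketch of a log-method (Katz) confidence interval for a risk ratio; the event counts 56/95 and 36/57 are back-calculated from the reported percentages and are therefore an assumption, and the paper's exact interval method may differ, so the computed interval only approximates the published one:

```python
import math


def risk_ratio_ci(events1: int, n1: int, events2: int, n2: int, z: float = 1.645):
    """Risk ratio with a log-method (Katz) confidence interval.
    z = 1.645 gives a 90% interval, matching the abstract's reporting."""
    p1, p2 = events1 / n1, events2 / n2
    rr = p1 / p2
    se = math.sqrt((1 - p1) / events1 + (1 - p2) / events2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi


# Assumed counts: 58.9% of 95 bolt-kit vs. 63.2% of 57 conventional patients.
rr, lo, hi = risk_ratio_ci(56, 95, 36, 57)
print(f"RR = {rr:.3f}, relative risk reduction = {1 - rr:.1%}")
# The interval is close to, but not identical with, the published
# -19.9% to 25.6%, since the original method is not stated.
print(f"90% CI for RRR: {1 - hi:.1%} to {1 - lo:.1%}")
```

Because the interval spans zero, the data are compatible with no difference between the two implantation techniques, which is what the abstract summarizes as "non-inferior and not superior".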
abstract_id: PUBMED:26084555 Number and location of drainage catheter side holes: in vitro evaluation. Aim: To evaluate the influence of the number and location of catheter shaft side holes on drainage efficiency in an in vitro model. Materials And Methods: Three different drainage catheter models were constructed: open-ended model with no side holes (one catheter), unilateral side hole model (six catheters with one to six unilateral side holes), and bilateral side hole model (six catheters with one to six bilateral side holes). Catheters were inserted into a drainage output-measuring device with a constant-pressure reservoir of water. The volume of water evacuated by each of the catheters at 10-second intervals was measured. A total of five trials were performed for each catheter. Data were analysed using one-way analysis of variance. Results: The open-ended catheter had a mean drainage volume comparable to the unilateral model catheters with three, four, and five side holes. Unilateral model catheters had significant drainage volume increases up to three side holes; unilateral model catheters with more than three side holes had no significant improvement in drainage volume. All bilateral model catheters had significantly higher mean drainage volumes than their unilateral counterparts. There was no significant difference between the mean drainage volume with one, two, or three pairs of bilateral side holes. Further, there was no drainage improvement by adding additional bilateral side holes. Conclusion: The present in vitro study suggests that beyond a critical side hole number threshold, adding more distal side holes does not improve catheter drainage efficiency. These results may be used to enhance catheter design towards improving their drainage efficiency. abstract_id: PUBMED:26067618 Does release of antimicrobial agents from impregnated external ventricular drainage catheters affect the diagnosis of ventriculitis?
Objective: Recently concern has arisen over the effect of released antimicrobial agents from antibiotic-impregnated external ventricular drainage (EVD) catheters on the reliability of CSF culture for diagnosis of ventriculitis. The authors designed a laboratory study to investigate this possibility, and to determine whether there was also a risk of loss of bacterial viability when CSF samples were delayed in transport to the laboratory. Methods: Three types of commercially available antibiotic-impregnated EVD catheters were perfused with a suspension of bacteria (Staphylococcus epidermidis) over 21 days. Samples were analyzed for bacterial viability and for concentrations of antibiotics released from the catheters. The authors also investigated the effect on bacterial viability in samples stored at 18°C and 4°C to simulate delay in CSF samples reaching the laboratory for analysis. Results: Bacterial viability was significantly reduced in all 3 catheter types when sampled on Day 1, but this effect was not observed in later samples. The results were reflected in stored samples, with significant loss of viability in Day 1 samples but with little further loss of viable bacteria in samples obtained after this time point. All samples stored for 18 hours showed significant loss of viable bacteria. Conclusions: While there were differences between the catheters, only samples taken on Day 1 showed a significant reduction in the numbers of viable bacteria after passing through the catheters. This reduction coincided with higher concentrations of antimicrobial agents in the first few hours after perfusion began. Similarly, bacterial viability declined significantly after storage of Day 1 samples, but only slightly in samples obtained thereafter. The results indicate that drugs released from these antimicrobial catheters are unlikely to affect the diagnosis of ventriculitis, as sampling for this purpose is not usually conducted in the first 24 hours of EVD. 
abstract_id: PUBMED:1632367 Influence of the stopcock on the efficiency of percutaneous drainage catheters: laboratory evaluation. The effects of stopcocks on percutaneous fluid drainage were tested in a laboratory model by using a standard stopcock (6-French inner diameter) and a prototype stopcock (9-French inner diameter) connected to 8-, 10-, 12-, 14-, and 16-French catheters. Catheters were immersed in water alone or in viscous fluid with particulate matter, and the system was connected to low wall suction or gravity drainage. The average volume of fluid aspirated in a given period with and without a stopcock was compared for each catheter. The standard stopcock decreased drainage efficiency for these catheters by 13-42%. This decreased drainage efficiency was worse with the larger catheters. Particulate fluid blocked the stopcock connection for all catheters. With the prototype stopcock, drainage of water alone was reduced by 0-9% for the catheters of different sizes. Particulate fluid did not obstruct the prototype stopcock with any size catheter. With gravity drainage, the volume of water aspirated was reduced by 12-42% with the standard stopcock and by 3-6% with the prototype stopcock. These data suggest that stopcock connections greatly influence the efficiency of the percutaneous drainage systems. Stopcocks with larger inner diameters may improve drainage over that achievable with the stopcocks that are currently available. abstract_id: PUBMED:26530709 Parametric study of ventricular catheters for hydrocephalus. Background: To drain the excess of cerebrospinal fluid in a hydrocephalus patient, a catheter is inserted into one of the brain ventricles and then connected to a valve. This so-called ventricular catheter is a standard-size, flexible tubing with a number of holes placed symmetrically around several transversal sections or "drainage segments". 
Three-dimensional computational dynamics shows that most of the fluid volume flows through the drainage segment closest to the valve. This fact raises the likelihood that those holes and then the lumen get clogged by the cells and macromolecules present in the cerebrospinal fluid, provoking malfunction of the whole system. In order to better understand the flow pattern, we have carried out a parametric study via numerical models of ventricular catheters. Methods: The parameters chosen are the number of drainage segments, the distances between them, the number and diameter of the holes on each segment, as well as their relative angular position. Results: These parameters were found to have a direct consequence on the flow distribution and shear stress of the catheter. As a consequence, we formulate general principles for ventricular catheter design. Conclusions: These principles can help develop new catheters with homogeneous flow patterns, thus possibly extending their lifetime. abstract_id: PUBMED:34626273 Factors which influence the complications of external ventricular cerebrospinal fluid drainage. Background: External ventricular drainage (EVD) is one of the most common neurosurgical procedures in emergencies. This study aims to find out which factors influence the occurrence of EVD-related complications in a comparative investigation of metal needles and polyurethane catheters. This is the first clinical study comparing these two systems. Methods: Adult patients undergoing pre-coronal EVD placement via freehand burr hole trepanation were included in this prospective study. The exclusion criteria were the open EVD insertion and/or a pre-existing infectious disease of the central nervous system. Results: Two hundred consecutive patients were enrolled. Of these, 100 patients were treated by using metal EVD (group 1) and 100 patients with polyurethane catheters (group 2). 
The overall complication rate was 26% (misplacement 13.5%, hemorrhage 12.5%, infection 2.5%, and dislocation 1%) without statistically significant differences between the two groups. Generalized brain edema and midline shift had a significant influence on misplacements (generalized brain edema, all patients: p = 0.0002, Cramer's V = 0.307, OR = 7.364, 95% CI 2.691-20.148; group 1: p = 0.001, Cramer's V = 0.48, OR = 43.5, 95% CI 4.327-437.295; group 2: p = 0.047, Cramer's V = 0.216, OR = 3.75, 95% CI 1.064-13.221; midline shift, all patients: p = 0.038, Cramer's V = 0.195, OR = 3.626, 95% CI 1.389-9.464; group 1: p = 0.053, Cramer's V = 0.231, OR = 5.533, 95% CI 1.131-27.081; group 2: p = 0.138, Cramer's V = 0.168, OR = 2.769, 95% CI 0.813-9.429). Hemorrhages were associated with the use of oral anticoagulants or antiplatelet therapy (p = 0.002; Cramer's V = 0.220, OR = 3.798, 95% CI 1.572-9.175), with a statistically similar influence in both groups. Conclusion: Generalized brain edema has a significant influence on misplacements in both groups. Midline shift lost its significance when considering only the patients in group 2. Patients under oral anticoagulation and antiplatelet therapy have increased odds of EVD-associated hemorrhage. Metal needles and polyurethane catheters are equivalent in terms of patient safety when there is no midline shift or generalized brain edema. abstract_id: PUBMED:29729458 Burr-Hole Craniostomy with T-Tube Drainage as Surgical Treatment for Chronic Subdural Hematoma. Objective: We sought to investigate the effect of burr-hole craniostomy with T-tube drainage to treat chronic subdural hematoma (CSDH). Methods: Eighty-seven patients with CSDH who were recruited from January 2012 to June 2017 at the Department of Neurosurgery, The First Affiliated Hospital of Xi'an Medical University, were divided into 2 groups according to the method of drainage: T-tube drainage system (n = 45) and conventional subdural drainage system (n = 42).
Retrospective analysis of clinical data and efficacy was performed between the 2 groups. Results: There were no significant differences in age, preoperative Markwalder grade scale, preoperative hematoma volume, hospitalization days, and discharge Markwalder grade scale between the 2 groups (P > 0.05). The incidence of postoperative complications and hematoma recurrence in the group of patients with T-tube drainage was significantly reduced when compared with conventional subdural drainage systems (P < 0.05). Conclusions: Both methods were effective in the treatment of CSDH; however, we found a lower overall surgical complication rate following treatment with burr-hole craniostomy and T-tube drainage. This indicates that it may be a better therapeutic option for management of CSDH. Answer: Yes, drainage hole size does influence adhesion on ventricular catheters. A study suggested that macrophage and astrocyte adhesion generally decreases with increasing hole diameter under flow conditions (PUBMED:21476036). This underscores the importance of considering hole diameter in the design of ventricular catheters to potentially reduce inflammatory-based shunt obstruction.
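The EVD-complications abstract above reports its associations as odds ratios with 95% confidence intervals (e.g. OR = 3.798, 95% CI 1.572-9.175 for anticoagulant/antiplatelet use and hemorrhage). As a minimal sketch of where such figures come from, the following computes an odds ratio and its Wald-type CI from a 2x2 table; the counts used here are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = exposed with/without the outcome,
    c/d = unexposed with/without the outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 12 of 50 anticoagulated patients bled vs. 13 of 150 others.
or_, lo, hi = odds_ratio_ci(12, 38, 13, 137)
```

An interval that excludes 1, as in the abstract's hemorrhage result, indicates a statistically significant association at the 5% level.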
Instruction: Breast MRI as an adjunct to mammography: Does it really suffer from low specificity? Abstracts: abstract_id: PUBMED:20707656 Breast MRI as an adjunct to mammography: Does it really suffer from low specificity? A retrospective analysis stratified by mammographic BI-RADS classes. Background: Reports on the specificity of breast MRI are heterogeneous, depending on the respective setting of the performed study. Purpose: To retrospectively estimate the sensitivity and especially the specificity of breast MRI in the non-screening setting as an adjunct to mammography, stratified by breast density, and to estimate the accuracy of breast MRI in cases rated BI-RADS 0 and 3 mammographically. Material And Methods: A total of 216 consecutive patients with referral to breast MRI and previously acquired mammography were enrolled in this analysis. Negative findings were followed up for a mean of 26.7 months. The loss to follow-up was 10.8%. The single breast was regarded as the study subject (n=399; 364 cases were eligible for calculation of diagnostic accuracy). BI-RADS 1 and 2 were rated as benign, 4 and 5 as malignant. BI-RADS 0 and 3 were analyzed separately. The 95% confidence intervals (CIs) were calculated from the normally approximated binomial distribution and taken to represent significant differences between the two imaging modalities if they did not overlap. Results: Among the study population, 62 malignant neoplasms were detected. For cases rated BI-RADS 1, 2, 4, and 5 (n=251), the sensitivity was 95.7% (95% CI 89.9-100.0%) for breast MRI and 74.5% (95% CI 62.0-87.0%) for mammography; the specificity was 96.1% (95% CI 93.4-98.8%) for breast MRI and 92.2% (95% CI 88.5-95.9%) for mammography. The diagnostic accuracy of breast MRI did not depend on breast density. In cases rated BI-RADS 0 (n=57) and BI-RADS 3 (n=56), breast MRI achieved sensitivities of 100% and 90.9% and specificities of 98.1% and 88.9%, respectively.
There was a significant (P < 0.01) accumulation of dense breast tissue (ACR IV) in breasts rated BI-RADS 0 in mammography. Breast MRI missed three malignant lesions, two of them smaller than 3 mm. Conclusion: There is no rationale to criticize the low specificity of breast MRI when used as an adjunct to mammography. The independence of the diagnostic accuracy of breast MRI from breast density makes it a worthwhile choice in mammographic BI-RADS 0 cases. abstract_id: PUBMED:25794083 Breast MRI as an adjunct to mammography for breast cancer screening in high-risk patients: retrospective review. Objective: In July 2011, the provincial government of Ontario, Canada, approved funding for the addition of annual breast MRI to mammography screening for all women 30-69 years old considered to be at high risk for breast cancer. The purpose of this study was to evaluate the diagnostic performance of screening breast MRI as compared with mammography in a population-based high-risk screening program. Materials And Methods: A retrospective review identified 650 eligible high-risk women who underwent screening breast MRI and mammography between July 2011 and January 2013 at one institution. Results of 806 screening rounds (comprising both MRI and mammography) were reviewed. Results: Malignancy was diagnosed in 13 patients (invasive cancer in nine, ductal carcinoma in situ in three [one with microinvasion], and chest wall metastasis in one). Of the 13 cancers, 12 (92.3%) were detected by MRI and four (30.8%) by mammography. In nine of these patients, the cancer was diagnosed by MRI only, resulting in an incremental cancer detection rate of 10 cancers per 1000 women screened. MRI screening had significantly higher sensitivity than mammography (92.3% vs 30.8%) but lower specificity (85.9% vs 96.8%).
MRI also resulted in a higher callback rate for a 6-month follow-up study (BI-RADS category 3 assessment) than mammography (119 [14.8%] vs 13 [1.6%]) and more image-guided biopsies than mammography (95 [11.8%] vs 19 [2.4%]). Conclusion: MRI is a useful adjunct to mammography for screening in high-risk women, resulting in a significantly higher rate of cancer detection. However, this was found to be at the cost of more imaging and biopsies for lesions that ultimately proved to be benign. abstract_id: PUBMED:38424895 Preferences and Attitudes Regarding Adjunct Breast Cancer Screening Among Patients with Dense Breasts. Objective: New breast screening modalities are being investigated to address the need for more sensitive breast cancer screening in women with dense breasts. We investigated the preferences and attitudes of these patients regarding adjunct screening modalities to help evaluate the acceptability of these exams. Methods: In this institutional review board-approved prospective study, patients with dense breasts on their prior mammogram were invited to complete a survey. Patients were asked to estimate their personal breast cancer risk compared with peers, indicate their level of concern related to screening callbacks, radiation exposure, and intravenous (IV) contrast allergies, and identify which factors might deter them from getting adjunct screening exams. Results: Five hundred eight patients with dense breasts presenting for screening mammography completed surveys. While most patients (304/508, 59.9%) felt it was likely or very likely that cancer could be missed on their mammogram, only 8.9% (45/508) had undergone adjunct screening exams in the past 3 years. The most commonly cited deterrents to adjunct screening were cost (340/508, 66.9%), pain (173/508, 34.1%), and concern that adjunct screening could lead to additional procedures (158/508, 31.1%). 
When asked to select among three hypothetical breast cancer screening modalities, patients strongly preferred the more sensitive examination, even if this involved greater cost (162/508, 31.9%) or IV-contrast administration (315/508, 62.0%). Conclusion: Our data suggest that patients with dense breasts prefer adjunct screening exams that are both sensitive and inexpensive, although an increase in sensitivity could outweigh additional cost or even IV-line placement. abstract_id: PUBMED:36126953 Digital Breast Tomosynthesis versus MRI as an Adjunct to Full-Field Digital Mammography for Preoperative Evaluation of Breast Cancer according to Mammographic Density. Objective: To compare digital breast tomosynthesis (DBT) and MRI as an adjunct to full-field digital mammography (FFDM) for the preoperative evaluation of women with breast cancer based on mammographic density. Materials And Methods: This retrospective study enrolled 280 patients with breast cancer who had undergone FFDM, DBT, and MRI for preoperative local tumor staging. Three radiologists independently sought the index cancer and additional ipsilateral and contralateral breast cancers using either FFDM alone, DBT plus FFDM, or MRI plus FFDM. Diagnostic performances across the three radiologists were compared among the reading modes in all patients and subgroups with dense (n = 186) and non-dense breasts (n = 94) according to mammographic density. Results: Of 280 patients, 46 (16.4%) had 48 additional (39 ipsilateral and nine contralateral) cancers in addition to the index cancer. For index cancers, both DBT plus FFDM and MRI plus FFDM showed sensitivities of 100% in the non-dense group. In the dense group, DBT plus FFDM showed lower sensitivity than MRI plus FFDM (94.6% vs. 99.6%, p < 0.001).
For additional ipsilateral cancers, DBT plus FFDM showed specificity and positive predictive value (PPV) of 100% in the non-dense group, but sensitivity and negative predictive value (NPV) were not statistically different from those of MRI plus FFDM (p > 0.05). In the dense group, DBT plus FFDM showed higher specificity (98.2% vs. 94.1%, p = 0.005) and PPV (83.1% vs. 65.4%; p = 0.036) than those of MRI plus FFDM, but lower sensitivity (59.9% vs. 75.3%; p = 0.049). For contralateral cancers, DBT plus FFDM showed higher specificity than that of MRI plus FFDM (99.0% vs. 96.7%, p = 0.014); however, the other values did not differ (all p > 0.05) in the dense group. Conclusion: DBT plus FFDM showed an overall higher specificity than that of MRI plus FFDM regardless of breast density, perhaps without substantial loss in sensitivity and NPV in the diagnosis of additional cancers. Thus, DBT may have the potential to be used as a preoperative breast cancer staging tool. abstract_id: PUBMED:36611428 Assessment of Enhancement Kinetics Improves the Specificity of Abbreviated Breast MRI: Performance in an Enriched Cohort. Objective: To investigate the added value of kinetic information for breast lesion evaluation on abbreviated breast MRI (AB-MRI). Methods: This retrospective study analyzed 207 breast lesions with Breast Imaging Reporting and Data System categories 3, 4, or 5 on AB-MRI in 198 consecutive patients who had breast MRI for screening after breast cancer surgery between January 2017 and December 2019. All lesions were pathologically confirmed or stable on follow-up images for 2 years or more. Kinetic information of the lesions regarding the degree and rate of enhancement on the first post-contrast-enhanced image and the enhancement curve type from two post-contrast-enhanced images were analyzed on a commercially available computer-assisted diagnosis system.
The diagnostic performances of AB-MRI with morphological analysis alone and with the addition of kinetic information were compared using the McNemar test. Results: Of 207 lesions, 59 (28.5%) were malignant and 148 (71.5%) were benign. The addition of an enhancement degree of ≥90% to the morphological analysis significantly increased the specificity of AB-MRI (29.7% vs. 52.7%, p < 0.001) without significantly reducing the sensitivity (94.9% vs. 89.8%, p = 0.083) compared to morphological analysis alone. Unnecessary biopsy could have been avoided in 34 benign lesions, although three malignant lesions could have been missed. For detecting invasive cancer, adding an enhancement degree ≥107% to the morphological analysis significantly increased the specificity (26.5% vs. 57.6%, p < 0.001) without significantly decreasing the sensitivity (94.6% vs. 86.5%, p = 0.083). Conclusion: Adding the degree of enhancement on the first post-contrast-enhanced image to the morphological analysis resulted in higher AB-MRI specificity without compromising its sensitivity. abstract_id: PUBMED:37949007 Breast intervention device for low-field MRI with a customized unilateral coil. With the incidence of breast cancer rising to the top among female malignant tumors, magnetic resonance image-guided breast biopsy intervention and minimally invasive treatment have developed as a clinically practical research issue. High-field studies have shown the diagnostic value of breast MRI, but the examination costs greatly exceed those of competing conventional mammography. In this case, low-field MRI can not only provide typical MRI contrast but also significantly reduce the cost of diagnosis and treatment for breast cancer patients. This work describes a unilateral breast coil and prototype intervention device, which provides a customized solution for low-field MRI-guided breast intervention.
Results demonstrate that the low-field MRI breast intervention device facilitates medical intervention procedures, and the designed positioning device can locate the target lesion to within 2-3 mm accuracy. Phantom tests with the customized unilateral coil indicate that the open loops perform as well as the 4-channel commercial closed breast coil, presenting relatively good SNR (signal-to-noise ratio) and uniformity characteristics. MR scanning images of the volunteer breast using the breast intervention coil also show high SNR, which lays a foundation for further implementation of image-guided breast interventional minimally invasive surgery with the low-field MRI system. abstract_id: PUBMED:38330534 Initial experience with a next-generation low-field MRI scanner: Potential for breast imaging? Purpose: Broader clinical adoption of breast magnetic resonance imaging (MRI) faces challenges such as limited availability and high procedural costs. Low-field technology has shown promise in addressing these challenges. We report our initial experience using a next-generation scanner for low-field breast MRI at 0.55T. Methods: This initial case series was part of an institutional review board-approved prospective study using a 0.55T scanner (MAGNETOM Free.Max, Siemens Healthcare, Erlangen/Germany: height < 2 m, weight < 3.2 tons, no quench pipe) equipped with a seven-channel breast coil (Noras, Höchberg/Germany). A multiparametric breast MRI protocol consisting of dynamic T1-weighted, T2-weighted, and diffusion-weighted sequences was optimized for 0.55T. Two radiologists with 12 and 20 years of experience in breast MRI evaluated the examinations. Results: Twelve participants (mean age: 55.3 years, range: 36-78 years) were examined. The image quality was diagnostic in all examinations and not impaired by relevant artifacts. Typical imaging phenotypes were visualized. The scan time for a complete, non-abbreviated breast MRI protocol ranged from 10:30 to 18:40 min.
Conclusion: This initial case series suggests that low-field breast MRI is feasible at diagnostic image quality within an acceptable examination time. abstract_id: PUBMED:22006539 Usefulness of breast-specific gamma imaging as an adjunct modality in breast cancer patients with dense breast: a comparative study with MRI. Objective: The aim of this study was to evaluate the adjunctive benefits of breast-specific gamma imaging (BSGI) versus magnetic resonance imaging (MRI) in breast cancer patients with dense breasts. Methods: This study included a total of 66 patients (44.1 ± 8.2 years) with dense breasts (breast density >50%) and already biopsy-confirmed breast cancer. All of the patients underwent BSGI and MRI as part of an adjunct modality before the initial therapy. In the 66 patients, 97 undetermined breast lesions were newly detected and correlated with the biopsy results. Results: Twenty-six of the 97 breast lesions proved to be malignant tumors (invasive ductal cancer, n = 16; ductal carcinoma in situ, n = 6; mixed or other malignancies, n = 4); the remaining 71 lesions were diagnosed as benign tumors. The sensitivity and specificity of BSGI were 88.8% (confidence interval (CI), 69.8-97.6%) and 90.1% (CI, 80.7-95.9%), respectively, while the sensitivity and specificity of MRI were 92.3% (CI, 74.9-99.1%) and 39.4% (CI, 28.0-51.7%), respectively (p < 0.0001). MRI detected 43 false-positive breast lesions, 37 (86.0%) of which were correctly diagnosed as benign lesions using BSGI. In 12 malignant lesions <1 cm, the sensitivities of BSGI and MR imaging were 83.3% (CI, 51.6-97.9%) and 91.7% (CI, 61.5-99.8%), respectively. Conclusion: BSGI showed an equivalent sensitivity and a higher specificity compared to MRI in the diagnosis of breast lesions. In addition, BSGI had a good sensitivity in discriminating breast cancers ≤1 cm.
The results of this study suggest that BSGI could play a crucial role as an adjunctive imaging modality which can be used to evaluate breast cancer patients with dense breasts. abstract_id: PUBMED:10663717 MRI of breast tumors. Breast MRI is increasingly used as an adjunct to conventional imaging modalities, particularly in diagnostic problem cases, but also for pre-operative staging. It is an extremely sensitive technique, with relatively limited specificity. To avoid both false-positive and, particularly, false-negative diagnoses, it is imperative to be familiar with the varying MRI appearance of benign and malignant breast tumors. This review serves to give an overview of the different criteria that can be used for differential diagnostic purposes. Moreover, practical guidelines are given to help with the adequate management of enhancing lesions. abstract_id: PUBMED:28470744 MRI-guided breast interventions. Magnetic resonance imaging (MRI)-guided interventions, including biopsies and wire localizations, are fundamental to any breast imaging practice due to the high sensitivity but limited specificity of breast MRI. The basic steps of MRI-guided biopsies are similar regardless of the vendor or platform, and technical considerations include approach planning, patient preparation and positioning, lesion targeting, and directional sampling using a vacuum-assisted biopsy technique. Unique challenges related to MRI-guided biopsies include vanishing lesions due to contrast washout, obscuration of the biopsy site due to susceptibility artifacts, and limited access to posteromedial lesions. A careful approach to planning, patient positioning, and lesion targeting will maximize the chances for a successful biopsy. Due to overlapping imaging features between benign and malignant lesions, radiologic-pathologic concordance is difficult and essential for further patient management.
Answer: The specificity of breast MRI as an adjunct to mammography has been a topic of debate, with some reports suggesting that it suffers from low specificity. However, a retrospective analysis stratified by mammographic BI-RADS classes found that the specificity of breast MRI was actually quite high. In cases rated BI-RADS 1, 2, 4, and 5, the specificity of breast MRI was 96.1%, which was comparable to the 92.2% specificity for mammography. Additionally, the diagnostic accuracy of breast MRI did not depend on breast density, making it a valuable choice in mammographic BI-RADS 0 cases (PUBMED:20707656). In a population-based high-risk screening program, MRI was found to be a useful adjunct to mammography for screening in high-risk women, resulting in a significantly higher rate of cancer detection. However, this increased detection rate came with more imaging and biopsies for lesions that ultimately proved to be benign, indicating a trade-off between higher sensitivity and lower specificity (PUBMED:25794083). Another study comparing digital breast tomosynthesis (DBT) and MRI as adjuncts to full-field digital mammography (FFDM) for preoperative evaluation of breast cancer found that DBT plus FFDM showed an overall higher specificity than MRI plus FFDM regardless of breast density, suggesting that DBT may be a potential alternative for preoperative breast cancer staging with a high specificity (PUBMED:36126953). Furthermore, the addition of kinetic information to abbreviated breast MRI (AB-MRI) significantly increased the specificity of AB-MRI without significantly reducing sensitivity, indicating that AB-MRI can achieve high specificity when additional parameters are considered (PUBMED:36611428). In summary, the evidence suggests that breast MRI, when used as an adjunct to mammography, does not inherently suffer from low specificity. Its specificity can be comparable to or even higher than mammography, depending on the context and additional techniques used.
Instruction: Direct urine polymerase chain reaction for chlamydia and gonorrhoea: a simple means of bringing high-throughput rapid testing to remote settings? Abstracts: abstract_id: PUBMED:23639791 Direct urine polymerase chain reaction for chlamydia and gonorrhoea: a simple means of bringing high-throughput rapid testing to remote settings? Background: Rapid point-of-care tests (POCTs) for chlamydia (Chlamydia trachomatis) and gonorrhoea (Neisseria gonorrhoeae) have the potential to confer health benefits in certain populations even at moderate sensitivities; however, suitable POCTs for these organisms are currently lacking. Methods: In this study, we investigated the use of direct urine polymerase chain reaction (PCR), with the view of implementing a simplified PCR strategy for high-throughput chlamydia and gonorrhoea screening in remote settings. Briefly, a simple dilution of the urine was performed before adding it directly to a real-time PCR reaction. The method was evaluated using 134 stored urine specimens that had been submitted for chlamydia and gonorrhoea testing and had been tested using a commercial C. trachomatis and N. gonorrhoeae PCR method. These included samples that were PCR-positive for chlamydia (n=87), gonorrhoea (n=16) or both (n=2). Direct urine testing was conducted using previously described in-house real-time PCR methods for C. trachomatis and N. gonorrhoeae as well as for recognised N. gonorrhoeae antimicrobial resistance mechanisms. Results: The overall sensitivities and specificities of the direct urine PCR were 78% and 100% for chlamydia, and 83% and 100% for gonorrhoea. N. gonorrhoeae penicillin and quinolone resistance mechanisms were characterised in 14 of the 18 N. gonorrhoeae-positive samples. Conclusions: The results of this study show that the simplified PCR strategy may be a feasible approach for rapid screening and improving chlamydia and gonorrhoea treatment in remote settings.
abstract_id: PUBMED:9858349 Detection of Neisseria gonorrhoeae infection by ligase chain reaction testing of urine among adolescent women with and without Chlamydia trachomatis infection. Background And Objectives: Culture, the conventional method for detection of Neisseria gonorrhoeae, requires invasive sampling and stringent specimen transport conditions. The recently developed ligase chain reaction test (LCR; Abbott Laboratories; North Chicago, IL) allows noninvasive sampling and stable transport conditions, but has not been evaluated with specimens from adolescent populations. Goal Of This Study: To perform a comparative evaluation of a commercial LCR test and culture for the diagnosis of N. gonorrhoeae in adolescent women. Study Design: Urine and endocervical swab specimens from 330 teenage women seen in two public health adolescent clinics were tested by LCR and culture. For resolution of discordant results, a polymerase chain reaction (PCR) test was developed that directly amplifies N. gonorrhoeae DNA from urine samples processed for LCR. Results: Thirty-one of 330 (9.4%) cervical specimens were culture-positive for N. gonorrhoeae, and 30 of 330 (9.1%) urine specimens were positive by LCR. After resolution of 13 discordant results, the sensitivity, specificity, and positive and negative predictive values of LCR for urine were 88.2%, 100%, 100%, 98.7%, respectively, and for culture of endocervical specimens were 82.3%, 98.9%, 90.3% and 98%, respectively. Conclusions: Although more expensive than culture, LCR offers a sensitive means for the detection of N. gonorrhoeae in urine samples and may be useful for this purpose in settings where pelvic examinations are difficult to perform and simultaneous detection of N. gonorrhoeae and Chlamydia trachomatis is advantageous. abstract_id: PUBMED:9662768 Reappraising the value of urine leukocyte esterase testing in the age of nucleic acid amplification. 
Background: The leukocyte esterase (LE) test has a limited role in determination of empiric therapy for male patients screened for urethritis because of its poor positive predictive value in low (< 5%) prevalence settings. The recent advent of nucleic acid amplification testing of first-void urine (FVU) has dramatically increased the ease with which widespread screening for Chlamydia trachomatis and Neisseria gonorrhoeae can be performed, but the costs of such testing may be prohibitive. The LE test may therefore have a role in management of urethritis because of its high negative predictive value. Objectives: To determine the sensitivity, specificity, and positive and negative predictive value of LE testing for the diagnosis of N. gonorrhoeae and C. trachomatis in male FVU specimens in a low-prevalence urban setting using a commercial polymerase chain reaction (PCR) as the gold standard. Methods: Data were obtained on men presenting to an urban sexually transmitted disease clinic over a 16-month period. Patients were included if an FVU had been tested for the presence of LE using a rapid dipstick, read by an automated urine analyzer, and the sample (either an FVU or urethral swab) had then been processed for the detection of N. gonorrhoeae and C. trachomatis by PCR. Results: Of 301 assessable patients, there were 14 cases of gonorrhoea, 21 cases of chlamydia, and 1 case of dual infection detected by PCR. Most men (245/301; 81.4%) were asymptomatic, of whom 12 of 245 (4.9%) had an infection detected compared with 24 of 56 (42.9%) in the symptomatic men (P < 0.001). Using a "≤ trace" cutoff, the overall value for the sensitivity of the LE test was 77.8% (95% confidence interval, 60.4-89.3), specificity 80.8% (75.4-85.2), positive predictive value 35.4% (25.2-47.1), and negative predictive value 96.4% (92.8-98.3).
Conclusions: The negative predictive value of the LE test may be of use in determining which patients should proceed to specific diagnosis by nucleic amplification methods (e.g., PCR or ligase chain reaction). By limiting testing to patients with positive LE results, cost savings may be made, enabling the technology to be used in a wider community setting. The value of the LE test in higher prevalence populations with access to nucleic amplification testing remains to be established. abstract_id: PUBMED:10092961 Diagnosis of Chlamydia trachomatis and Neisseria gonorrhoeae. Genitourinary infections in males by the Amplicor PCR assay of urine. The Amplicor CT/NG polymerase chain reaction (PCR) test on urine specimens from males was prospectively evaluated against established specimens and laboratory methods for diagnosing Chlamydia trachomatis and Neisseria gonorrhoeae genitourinary infections, in patients from a remote region of Western Australia. Seventy-three males who were tested for both C. trachomatis and N. gonorrhoeae by both conventional methodology and Amplicor PCR on urine were enrolled in the study. Established testing comprised enzyme immunoassay/immunofluorescence antigen testing (EIA/IF) for C. trachomatis and microscopy and/or culture for N. gonorrhoeae on urethral swabs. Positive test results were confirmed using a set of criteria that included supplemental PCR testing and clinical history. Overall, 13.7% of patients were resolved as positive for C. trachomatis and 52.1% as positive for N. gonorrhoeae. The sensitivity and specificity of the Amplicor CT/NG PCR on male urine specimens for C. trachomatis were 80.0% (8/10) and 95.2% (60/63), compared with 60.0% (6/10) and 100.0% (63/63) for EIA/IF on urethral swabs. For N. gonorrhoeae, the sensitivity and specificity of the Amplicor CT/NG PCR on male urine specimens were both 100% (38/38 and 35/35, respectively) compared with 86.8% (33/38) and 100% (35/35) for microscopy and/or culture on urethral swabs. 
The results of this study indicate that the Amplicor CT/NG multiplex PCR test for C. trachomatis and N. gonorrhoeae performed on urine in males provides a highly sensitive, specific, and robust method for the diagnosis of both C. trachomatis and N. gonorrhoeae, for the early detection of both symptomatic and asymptomatic infected individuals. abstract_id: PUBMED:9152340 Urinary diagnosis of gonorrhoea and Chlamydia in men in remote aboriginal communities. Aims: (1) To evaluate the acceptability and validity of an intervention based on urine tests for diagnosis and treatment of gonorrhoea and chlamydia in men in remote Aboriginal communities. (2) To provide a prevalence estimate of these infections in the male population in the surveyed communities. Methods: First-void urine samples from 460 men in remote communities and 33 men in the Alice Springs Gaol were tested for gonorrhoea and chlamydia with at least one of polymerase chain reaction (PCR), enzyme immunoassay (EIA) and culture (gonorrhoea only). Results: One hundred and three men (20.9%) were infected with gonorrhoea or chlamydia. The prevalence of infection for gonorrhoea only was 11.7%, for chlamydia only 4.1% and for dual infection 5.1%. Eighty-eight infected men and 45 of their sexual partners were recorded as having been treated within two months of testing. PCR tests detected the largest number of infections and were the easiest to use. Conclusions: The prevalence of these infections was higher than anticipated. Urine PCR tests were acceptable to men and are well suited to the remote-community setting. As an effective alternative to urethral swabs, they permit a range of community-based strategies to address high rates of infection with gonorrhoea and chlamydia. 
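The leukocyte esterase abstract above turns on how predictive values shift with prevalence: a test with fixed sensitivity and specificity has a poor PPV in low-prevalence settings yet can retain a high NPV. This follows directly from Bayes' rule, sketched below using the sensitivity (77.8%) and specificity (80.8%) reported in that abstract; the prevalences are illustrative:

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV from sensitivity, specificity and prevalence
    (Bayes' rule applied to a notional screened population)."""
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    fn = (1 - sens) * prev
    tn = spec * (1 - prev)
    return tp / (tp + fp), tn / (tn + fn)

# LE-test figures from the abstract; prevalences are illustrative.
for prev in (0.02, 0.12, 0.40):
    ppv, npv = predictive_values(0.778, 0.808, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

At roughly the 12% infection prevalence observed in that clinic (36 of 301 patients), this reproduces the abstract's reported PPV of about 35% and NPV of about 96%; at a 2% prevalence the PPV falls below 10%, which is the stated rationale for using the LE result mainly to rule out infection before confirmatory nucleic acid amplification testing.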
abstract_id: PUBMED:1431378 Detection of Neisseria gonorrhoeae in first-voided urine sediments from male urethritis patients by polymerase chain reaction Neisseria gonorrhoeae was detected from first-voided urine sediments of male patients with urethritis by polymerase chain reaction (PCR). Urine and urinary sediment were treated with proteinase K, and DNA was further purified by phenol extraction. Two oligonucleotides based on sequences within a ribosomal RNA gene from N. gonorrhoeae were used as primers for the PCR. A DNA fragment of 206 bp specific for N. gonorrhoeae was amplified by PCR and detected by agarose gel electrophoresis. In 19 of the 21 patients in whom N. gonorrhoeae was isolated from urethral swabs by culture, the 206 bp DNA fragment was amplified by PCR from urine sediments. In all specimens of urine sediments from 24 patients in whom cultures for N. gonorrhoeae were negative, no DNA was amplified by the PCR. The overall concordance between PCR detection of N. gonorrhoeae in first-voided urine sediments and urethral swab culture was 95.6% (43/45). PCR detection of pathogens in first-voided urine sediments is noninvasive and could be applied to the diagnosis of gonococcal and chlamydial urethritis. abstract_id: PUBMED:15931332 Decision analysis: point-of-care Chlamydia testing vs. laboratory-based methods. Objective: To evaluate and compare the performance of several different methods available for detection of Chlamydia trachomatis (Ct) infection, and to explore possible testing and treatment strategies incorporating point-of-care testing versus laboratory-based tests. Design: Prospective trial and decision analysis. Setting: Large, urban, publicly funded sexually transmitted disease clinic. Participants: 1,384 female patients.
Methods: Each subject was tested for Ct infection by direct fluorescent antibody (DFA, Sanofi/Kallestad, Chaska, MN), optical immunoassay (OIA, Thermo Electron, Point of Care and Rapid Diagnostics, Louisville, CO), McCoy cell culture (in-house method), and polymerase chain reaction (microwell PCR assay, Roche, Branchburg, NJ). Results: Performing a rapid in-clinic test on women who did not meet empiric treatment criteria would have increased the overall proportion of infected persons receiving same-day treatment from 48.6% to 79.1% using DFA or 78.4% using OIA. Conclusions: Use of empiric treatment criteria and same-day point-of-care testing for patients not meeting the empiric treatment threshold appears to be an appropriate, useful, and cost-effective strategy for increasing same-day treatment of Ct infections in this population. abstract_id: PUBMED:12438900 Evaluation of self-collected samples in contrast to practitioner-collected samples for detection of Chlamydia trachomatis, Neisseria gonorrhoeae, and Trichomonas vaginalis by polymerase chain reaction among women living in remote areas. Background: Self-collected samples have been shown to be an acceptable and sensitive method for the detection by polymerase chain reaction (PCR) of sexually transmitted infections (STIs) among women. Goal: The goal of the study was to compare self-collected sampling methods with conventional practitioner endocervical sampling for the PCR detection of Chlamydia trachomatis and Neisseria gonorrhoeae, and to compare two self-collected sampling methods for the detection of T vaginalis by PCR. Study Design: Women (n = 318) from urban and remote areas of central Australia participated in the study when attending their health clinic for a check-up. They each provided an FVU sample, self-collected vaginal swab specimen, and tampon specimen.
This was followed by a clinical examination by a practitioner, with collection of endocervical and high vaginal swabs for testing by conventional microscopy and culture for N gonorrhoeae and T vaginalis, respectively. The FVU, self-collected vaginal swab, tampon, and endocervical swab specimens were tested by Roche Cobas Amplicor for C trachomatis and N gonorrhoeae. The self-collected vaginal swab and tampon specimens were also tested by an in-house PCR method for the detection of T vaginalis. Results: In total, C trachomatis was detected by PCR in 11.5%, N gonorrhoeae in 11.8%, and T vaginalis in 24.6%. Molecular diagnostics for N gonorrhoeae and T vaginalis were significantly more sensitive than traditional assays with microscopy and culture. For the detection of C trachomatis by PCR, tampons were the most sensitive (100.0%) and urine the least sensitive (72.7%) specimens (P = 0.01). For the detection of N gonorrhoeae by PCR, the self-collected tampon was the most sensitive specimen, followed by the endocervical swab, self-collected swab, and urine specimen, with sensitivities of 97.2%, 92.6%, 71.9%, and 31.2%, respectively. For detection of N gonorrhoeae, statistically significant differences were detected for urine versus tampon (P < 0.0001), endocervical swab (P < 0.001), and self-collected swab (P = 0.01) and for self-collected swab versus tampon (P = 0.01). Subsequent data collection showed that sensitivity of urine PCR for detection of N gonorrhoeae improved with freezing of urine specimens and shorter transport time. Tampons were also more sensitive than self-collected swabs for detection of T vaginalis (sensitivity of 100% versus 87.7%). Conclusion: Self-collected specimens offer women in remote communities an acceptable and sensitive alternative method of testing for STIs.
The low sensitivity of N gonorrhoeae PCR of urine specimens may reflect poor transport and storage conditions, which we have shown can be improved by freezing urine specimens and reducing transport delays. abstract_id: PUBMED:10341775 Impact of improved diagnosis and treatment on prevalence of gonorrhoea and chlamydial infection in remote aboriginal communities on Anangu Pitjantjatjara Lands. Objective: To evaluate the impact of a program to improve access to, and delivery of, diagnosis and treatment on prevalence of gonorrhoea and chlamydial infection in remote Aboriginal communities. Design: Analysis of cross-sectional data from annual age-based screening. Setting: Six remote Aboriginal communities and three homelands on the Anangu Pitjantjatjara Lands in the far north-west of South Australia, 1996-1998. Participants: All Aboriginal people aged 12-40 years listed on the Nganampa Health Council population register as resident on the Anangu Pitjantjatjara Lands. Main Outcome Measures: Prevalence of gonorrhoea and chlamydial infection, determined by urine polymerase chain reaction tests. Results: The prevalence of gonorrhoea in people aged 12-40 years almost halved, from 14.3% in 1996 to 7.7% in 1998 (test for trend: P < 0.001). The fall in prevalence of gonorrhoea was comparable and statistically significant in both men and women. Prevalence of chlamydial infection also fell, from 8.8% in men and 9.1% in women in 1996 to 7.2% in both men and women in 1998, but this decline was not statistically significant (test for trend: P = 0.174). Conclusion: This study documents a rapid reduction in prevalence of gonorrhoea, probably reflecting reduced duration of infectiousness due to advances in diagnosis, increased testing activity and reduced interval to treatment rather than behaviour change.
These results demonstrate that, in remote communities, even with a highly mobile population, it is possible to implement effective control activities for sexually transmitted diseases. abstract_id: PUBMED:10497676 Use of ligase chain reaction and polymerase chain reaction on urine specimens to detect Chlamydia trachomatis infections in a sexually transmitted diseases clinic in Singapore. This study was done to assess the specificity and sensitivity of the DNA amplification assays of ligase chain reaction (LCR) and polymerase chain reaction (PCR) on urine specimens to detect Chlamydia trachomatis infections in both male and female patients seen at a sexually transmitted diseases (STD) clinic in Singapore, compared with other diagnostic methods currently in use. A total of 100 patients were selected; 50 male patients diagnosed with non-gonococcal urethritis based on symptoms and a positive Gram-stained urethral smear and 50 female asymptomatic sex workers were assessed. Automated assays using LCR and PCR were used, and compared to enzyme immunoassays, chlamydial cell cultures and PCR of urethral and endocervical swab specimens. In male patients, LCR and PCR of urine specimens had sensitivities of 100%, compared to 87.0% for PCR of urethral swab specimen, 82.6% for enzyme immunoassay (EIA) and 91.3% for cell cultures. In female patients, LCR and PCR of urine samples achieved sensitivities of 77.8% and 88.9% respectively, compared with 55.6% for PCR of endocervical swab specimens, 22.2% for EIA and 66.7% for cell cultures. LCR and PCR of urine samples provided higher sensitivity compared to cell cultures, EIA and PCR of urethral and endocervical swab specimens. The use of LCR and PCR on urine as a non-invasive means of detecting chlamydial infections is viable, and may have a role to play in population-based screening programmes. 
Answer: The direct urine polymerase chain reaction (PCR) for chlamydia and gonorrhoea has been investigated as a potential method for high-throughput rapid testing, particularly in remote settings where access to healthcare and laboratory facilities may be limited. The study described in PUBMED:23639791 evaluated a simplified PCR strategy using direct urine testing for Chlamydia trachomatis and Neisseria gonorrhoeae. The method involved a simple dilution of urine before adding it directly to a real-time PCR reaction. The study used 134 stored urine specimens and found overall sensitivities and specificities of 78% and 100% for chlamydia, and 83% and 100% for gonorrhoea, respectively. The study concluded that this simplified PCR strategy could be a feasible approach for rapid screening and improving treatment for these infections in remote settings. Other studies have also explored the use of urine samples for the detection of these sexually transmitted infections (STIs). For instance, the ligase chain reaction (LCR) test on urine specimens was found to be a sensitive means for the detection of N. gonorrhoeae among adolescent women, offering a noninvasive sampling method with stable transport conditions (PUBMED:9858349). Similarly, the leukocyte esterase (LE) test on first-void urine (FVU) specimens was considered for its high negative predictive value in diagnosing N. gonorrhoeae and C. trachomatis, suggesting that it could be used to determine which patients should proceed to specific diagnosis by nucleic acid amplification methods (PUBMED:9662768). The Amplicor PCR assay on urine specimens from males was also evaluated and found to provide a highly sensitive, specific, and robust method for diagnosing both C. trachomatis and N. gonorrhoeae (PUBMED:10092961). Additionally, urine PCR tests were found to be acceptable to men in remote Aboriginal communities and detected a higher number of infections compared to other methods (PUBMED:9152340). 
Overall, these studies support the potential of direct urine PCR as a simple and effective means for high-throughput rapid testing for chlamydia and gonorrhoea, particularly in remote and resource-limited settings.
Instruction: Right ventricular dimensions and function in isolated left bundle branch block: is there evidence of biventricular involvement? Abstracts: abstract_id: PUBMED:18279400 Right ventricular dimensions and function in isolated left bundle branch block: is there evidence of biventricular involvement? Background: Isolated left bundle branch block (LBBB) may be an expression of idiopathic cardiomyopathy affecting both ventricles. The present study was conducted to evaluate right ventricular (RV) dimensions and function in asymptomatic LBBB patients with mildly depressed left ventricular (LV) function. Methods: Fifteen patients with asymptomatic LBBB in whom coronary artery disease, hypertension, and valvular pathology were excluded were studied. Fifteen healthy volunteers and 15 idiopathic dilated cardiomyopathy LBBB patients served as controls. RV long axis and tricuspid annulus diameter were obtained, as were tricuspid annular plane systolic excursion (TAPSE) and peak systolic velocity (Sm) of the RV free wall annulus. Tricuspid regurgitation (TR) jets (peak TR jets) were used for RV pressure assessment. Results: RV dimensions were comparable between the asymptomatic LBBB patients and controls. RV functions of healthy volunteers and asymptomatic LBBB patients were similar (TAPSE: 24 +/- 3 and 24 +/- 4 mm, Sm: 13 +/- 2 and 13 +/- 3 cm/s, respectively), whereas functional parameters in idiopathic dilated cardiomyopathy patients were significantly reduced (TAPSE: 19 +/- 5 mm, Sm: 9 +/- 2 cm/s, both P < 0.01 by analysis of variance [ANOVA]). For the three groups combined, a significant inverse correlation between RV pressure (peak TR jets) and RV function (Sm) was observed (r = -0.52, P = 0.017). Conclusions: In patients with an asymptomatic LBBB, RV dimensions and function are within normal range.
The present study suggests that screening of RV functional parameters in asymptomatic LBBB patients is not useful for identification of an early-stage cardiomyopathy, and RV dysfunction is merely a consequence of increased RV loading conditions caused by left-sided heart failure and does not indicate a generalized cardiomyopathy affecting both ventricles. abstract_id: PUBMED:20185107 Biventricular stimulation improves right and left ventricular function after tetralogy of Fallot repair: acute animal and clinical studies. Background: Optimal treatment of right ventricular (RV) dysfunction observed in patients after tetralogy of Fallot (TOF) repair is unclear. Studies of biventricular (BiV) stimulation in patients with congenital heart disease have been retrospective or have included patients with heterogeneous disorders. Objective: The purpose of this study was to determine the effects on cardiac function of stimulating at various cardiac sites in an animal model of RV dysfunction and dyssynchrony and in eight symptomatic adults with repaired TOF. Methods: Pulmonary stenosis and regurgitation as well as RV scars were induced in 15 piglets to mimic repaired TOF. The hemodynamic effects of various configurations of RV and BiV stimulation were compared with sinus rhythm (SR) 4 months after surgery. In eight adults with repaired TOF, RV and left ventricular (LV) dP/dt(max) were measured invasively during SR, apical RV stimulation, and BiV stimulation. Results: At 4 months, RV dilation, dysfunction, and dyssynchrony were present in all piglets. RV stimulation caused a decrease in LV function but no change in RV function. In contrast, BiV stimulation significantly improved LV and RV function (P < .05). Echocardiography and epicardial electrical mapping showed activation consistent with right bundle branch block during SR and marked resynchronization during BiV stimulation.
In patients with repaired TOF, BiV stimulation significantly increased RV and LV dP/dt(max) (P < .05). Conclusion: In this swine model of RV dysfunction and in adults with repaired TOF, BiV stimulation significantly improved RV and LV function by alleviating electromechanical dyssynchrony. abstract_id: PUBMED:24062912 QRS fragmentation in patients with arrhythmogenic right ventricular cardiomyopathy and complete right bundle branch block: a risk stratification. Background: Patients with arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) and complete right bundle branch block (RBBB) very often have recurrent ventricular tachycardia and develop biventricular heart failure in the follow up, requiring heart transplantation and/or diuretics. In other patients with ARVC/D excluding RBBB, QRS fragmentation in the S wave of right precordial leads identifies patients with recurrent ventricular tachycardia, primary ventricular fibrillation, and recurrent implantable cardioverter defibrillator discharges; QRS fragmentation ≥3 leads characterized patients who died from sudden cardiac death. Method: In a cohort of 374 patients with ARVC/D (208 males; mean±SD age 46.5±14.8 years), there were 22 patients with complete RBBB: 17 patients with ARVD/C developed complete RBBB and had biventricular heart failure in a follow up of 4-6 years. In five patients with ARVC/D, complete RBBB was initially evident. In all patients with ARVC/D and RBBB, QRS fragmentation ≥3 of all 12 ECG leads and QRS fragmentation in the S wave of right precordial leads were analysed. Results: QRS fragmentation ≥3 of all 12 ECG leads and in the S wave of right precordial leads were present in 16/17 patients who developed RBBB and none of the five patients with initial RBBB. In one patient with initial RBBB, QRS fragmentation ≥3 leads was present (r=17.45; p<0.0001).
Conclusion: Patients with recurrent ventricular tachycardia who develop biventricular heart failure requiring heart transplantation and/or diuretics are characterized by QRS fragmentation in the S wave of right precordial leads and ≥3 of all 12 ECG leads. These results are statistically significant. Patients with initial RBBB have an overall benign prognosis. abstract_id: PUBMED:9363504 Right ventricular cardiomyopathy: diffuse dilatation, focal dysplasia or biventricular disease. Non-coronary ventricular tachyarrhythmias originating from the right ventricle are frequent events associated in many cases with structural and functional abnormalities of the right ventricle. Primary right ventricular affections such as arrhythmogenic right ventricular dysplasia and secondary right ventricular involvements such as in dilated cardiomyopathy must be distinguished. The value of conventional diagnostic procedures is undetermined. A total of 73 patients (41 males, mean age 40.6 +/- 11.4 years) with left bundle branch block ventricular arrhythmias and angiographic aspects of right ventricular outpouchings or aneurysms were divided into three groups: Group 1: diffuse right ventricular dilatation without left ventricular affection, Group 2: focal right ventricular abnormalities (dysplasia), Group 3: biventricular disease. The results of standard ECG, angiography and programmed ventricular stimulation were analysed retrospectively. Clinical monomorphic ventricular tachycardia occurred more often in diffuse dilatation (82%) and focal dysplasia (57%). In these two groups programmed ventricular stimulation was able to induce clinical tachycardias at a high rate (57-82%). In cases of biventricular disease, cardiac arrest as the primary event, without inducibility of monomorphic ventricular tachycardia, was the predominant feature (44%).
Standard ECG disclosed localised right precordial QRS prolongation in 'normal' QRS morphology, incomplete and complete right bundle branch block in 66 patients in all three subgroups. Other ECG findings, such as left ventricular hypertrophy in four patients with heart failure, and single premature beats and left bundle branch block in a patient with rapid ventricular tachycardia and ventricular fibrillation, were found only in group 3, presumed to be the most heterogeneous group. In summary, angiographic classification used in this study demonstrates different morphological aspects of right ventricular cardiomyopathy with ventricular tachyarrhythmias as the major clinical aspect. High risk patients with diffuse dilatation or biventricular disease can be identified. Only patients with the angiographic aspect of focal dysplasia seem to be possible candidates for catheter ablation techniques. abstract_id: PUBMED:25660657 Arrhythmogenic right ventricular cardiomyopathy with left ventricular involvement: a novel splice site mutation in the DSG2 gene. We report the case of a 37-year-old male patient admitted to the cardiac intensive care unit for acute pulmonary edema. He had a history of excessive alcohol consumption and had had a viral syndrome in the preceding 10 days. A transthoracic echocardiogram revealed severe biventricular dysfunction, mild dilatation of the left heart chambers, and severe dilatation of the right chambers. Nonsustained ventricular tachycardia with a left bundle branch block morphology was detected during electrocardiographic monitoring. In the follow-up, he underwent a contrast-enhanced transthoracic echocardiogram and a cardiac resonance which were compatible with the diagnosis of arrhythmogenic right ventricular cardiomyopathy with biventricular involvement. Molecular analysis detected the mutation c.1423+2T>G (IVS10 ds +2T>G) in intron 10 of the gene DSG2 (desmoglein-2) in heterozygosity.
To our knowledge, this mutation has not been previously described in arrhythmogenic right ventricular cardiomyopathy. abstract_id: PUBMED:24013057 Comparative electromechanical and hemodynamic effects of left ventricular and biventricular pacing in dyssynchronous heart failure: electrical resynchronization versus left-right ventricular interaction. Objectives: The purpose of this study was to enhance understanding of the working mechanism of cardiac resynchronization therapy by comparing animal experimental, clinical, and computational data on the hemodynamic and electromechanical consequences of left ventricular pacing (LVP) and biventricular pacing (BiVP). Background: It is unclear why LVP and BiVP have comparative positive effects on hemodynamic function of patients with dyssynchronous heart failure. Methods: Hemodynamic response to LVP and BiVP (% change in maximal rate of left ventricular pressure rise [LVdP/dtmax]) was measured in 6 dogs and 24 patients with heart failure and left bundle branch block followed by computer simulations of local myofiber mechanics during LVP and BiVP in the failing heart with left bundle branch block. Pacing-induced changes of electrical activation were measured in dogs using contact mapping and in patients using a noninvasive multielectrode electrocardiographic mapping technique. Results: LVP and BiVP similarly increased LVdP/dtmax in dogs and in patients, but only BiVP significantly decreased electrical dyssynchrony. In the simulations, LVP and BiVP increased total ventricular myofiber work to the same extent. While the LVP-induced increase was entirely due to enhanced right ventricular (RV) myofiber work, the BiVP-induced increase was due to enhanced myofiber work of both the left ventricle (LV) and RV. Overall, LVdP/dtmax correlated better with total ventricular myofiber work than with LV or RV myofiber work alone. 
Conclusions: Animal experimental, clinical, and computational data support the similarity of hemodynamic response to LVP and BiVP, despite differences in electrical dyssynchrony. The simulations provide the novel insight that, through ventricular interaction, the RV myocardium importantly contributes to the improvement in LV pump function induced by cardiac resynchronization therapy. abstract_id: PUBMED:11499820 Left ventricular and biventricular pacing in congestive heart failure. Dual-chamber pacing improved hemodynamics acutely in a subset of patients with left ventricular (LV) dysfunction but conveyed no long-term symptomatic benefit in most. More recently, LV pacing and biventricular (multisite) pacing have been used to improve systolic contractility by altering the electrical and mechanical ventricular activation sequence in patients with severe congestive heart failure (CHF) and intraventricular conduction delay or left bundle branch block (LBBB). Intraventricular conduction delay and LBBB cause dyssynchronous right ventricular and LV contraction and worsen LV dysfunction in cardiomyopathies. Both LV and biventricular cardiac pacing are thought to improve cardiac function in this situation by effecting a more coordinated and efficient ventricular contraction. Short-term hemodynamic studies have shown improvement in LV systolic function, which seems more pronounced with monoventricular LV pacing than with biventricular pacing. Recent clinical studies in limited numbers of patients suggest long-term clinical benefit of biventricular pacing in patients with severe CHF symptoms. Continuing and future studies will demonstrate whether and in which patients LV and biventricular pacing are permanently effective and equivalent and which pacing site within the LV produces the most beneficial hemodynamic results. abstract_id: PUBMED:30968669 A case of arrhythmogenic right ventricular cardiomyopathy with biventricular involvement. 
We report the case of an 18-year-old male admitted to our institution for syncope during a basketball match. No previous symptoms were reported. Electrocardiogram (ECG) showed T-wave inversion in the anterior leads and an incomplete right bundle branch block. Surprisingly, a complete echocardiographic evaluation demonstrated the presence of severe right ventricular enlargement with significant wall motion abnormalities, apical aneurysm and reduced systolic function. Cardiac Magnetic Resonance was pathognomonic for a fibro-fatty replacement of both ventricles. We opted for subcutaneous defibrillator implantation and, after inducing ventricular fibrillation to test the device, an epsilon wave appeared on the ECG. This clinical scenario depicted an advanced arrhythmogenic right ventricular cardiomyopathy at its first clinical manifestation. abstract_id: PUBMED:7776713 Right ventricular dysplasia: the Mayo Clinic experience. Objective: To determine the initial clinical manifestations and echocardiographic features of right ventricular dysplasia as encountered in a major cardiovascular referral center in the United States. Design: We conducted a retrospective study of cases of right ventricular dysplasia diagnosed at the Mayo Clinic between January 1978 and January 1993. Material And Methods: In an institutional data-base search, we identified 20 patients with right ventricular dysplasia. Echocardiographic, electrophysiologic, Holter monitoring, cardiac catheterization, and endomyocardial biopsy results were analyzed. The mean duration of follow-up was 7 years. Results: In the 12 female and 8 male patients (mean age, 30 years; range, 3 to 60), the initial manifestations of right ventricular dysplasia included ventricular arrhythmia (45%), congestive heart failure (25%), heart murmur (10%), asymptomatic (10%), complete heart block (5%), and sudden death (5%). First-order relatives were affected in 30% of the patients.
Ventricular tachycardia with morphologic features of left bundle branch block was inducible in seven of nine patients. On Holter monitoring, all but 2 of 15 patients studied had frequent ventricular ectopic activity (Lown grade 2 or more). Characteristic fatty infiltration of the myocardium was present in 7 of 13 right ventricular biopsy specimens. Inordinate right ventricular enlargement was present in 60% of the patients at first echocardiographic assessment and in two other patients on follow-up assessment. Variable left ventricular involvement was noted in 50% of the cases. During the follow-up period, four patients died: two died suddenly, one died of congestive heart failure, and one died of respiratory failure after a coronary artery bypass operation. Of the 16 living patients, 8 are doing well, 3 have an implanted cardiac defibrillator, 3 are receiving antiarrhythmic agents, and 2 have undergone cardiac transplantation because of progressive biventricular failure. Conclusion: Patients with right ventricular dysplasia have varied initial manifestations and a high frequency of serious cardiovascular symptoms and complications. abstract_id: PUBMED:29974198 Stroke volume ratio derived from magnetic resonance imaging as an indicator of interventricular dyssynchrony predicts future cardiac event in patients with biventricular Fontan circulation. The prognostic factors in patients with biventricular heart who underwent Fontan surgery remain unclear. This study aimed to assess the hypothesis that interventricular dyssynchrony evaluated by cardiac magnetic resonance imaging (MRI) can predict future cardiac events in patients with biventricular heart who have undergone Fontan surgery. We prospectively enrolled consecutive patients with biventricular Fontan circulation from 2003 to 2016, and performed protocolized cardiac MRI.
We determined the stroke volume ratio (SVr) using the following formula to assess interventricular dyssynchrony: (stroke volume (SV) of the two whole ventricles)/(SV of the right ventricle + SV of the left ventricle), by tracing cine MRI data. If interventricular dyssynchrony existed, blood flowed and returned to each ventricle through the ventricular septal defect; therefore, the SVr in this instance should be less than 1.0. We enrolled 40 patients. SVr ranged from 0.81 to 1.0 (median 0.95). Low SVr (< 0.95) was associated with worse New York Heart Association functional class, longer QRS duration, right bundle branch block, low biventricular indexed stroke volume, and low biventricular ejection fraction. During the follow-up period (median 53.5 months), 10 cardiac events occurred (six cases of acute exacerbation of heart failure, three cases of supraventricular tachycardia, and one case of exacerbation of protein-losing enteropathy). Univariate analysis showed four clinical predictors: SVr < 0.95 [hazard ratio (HR) 9.3, 95% confidence interval (CI) 1.7-171.5]; biventricular ejection fraction < 0.45 (HR 9.4, 95% CI 2.2-65.3); left ventricular indexed end-diastolic volume > 73 mL/m2 (HR 4.5, 95% CI 1.1-15.7); and the presence of the aorta directly arising from the right ventricular conus (HR 5.8, 95% CI 1.1-106). SVr derived from MRI can predict future cardiac events in Fontan patients with biventricular hearts. Answer: The study described in abstract PUBMED:18279400 investigated right ventricular (RV) dimensions and function in patients with isolated left bundle branch block (LBBB) who were asymptomatic and had mildly depressed left ventricular (LV) function. The study aimed to determine if there was evidence of biventricular involvement in these patients. The results showed that RV dimensions were comparable between the asymptomatic LBBB patients and controls, and RV functions of healthy volunteers and asymptomatic LBBB patients were similar.
In contrast, functional parameters in idiopathic dilated cardiomyopathy patients were significantly reduced. The study concluded that in patients with asymptomatic LBBB, RV dimensions and function are within the normal range, suggesting that RV dysfunction is a consequence of increased RV loading conditions caused by left-sided heart failure and does not indicate a generalized cardiomyopathy affecting both ventricles. Therefore, there is no evidence of biventricular involvement in isolated LBBB based on RV dimensions and function.
Instruction: Do additional shaved margins at the time of lumpectomy eliminate the need for re-excision? Abstracts: abstract_id: PUBMED:18809063 Do additional shaved margins at the time of lumpectomy eliminate the need for re-excision? Background: Most women diagnosed with breast cancer undergo breast-conservation surgery. Re-excision rates for positive margins have been reported to be greater than 50%. The purpose of our study was to determine if removing additional shaved margins from the lumpectomy cavity at the time of lumpectomy reduces re-excisions. Methods: A retrospective study was performed on 125 women who had undergone lumpectomy with additional shaved margins taken from the lumpectomy cavity. Pathology reports were reviewed for tumor size and histology, lumpectomy and additional margin status, and specimen and margin volume. Results: Had additional margins not been taken, 66% would have required re-excision. Taking additional shaved margins eliminated the need for re-excision in 48%. Conclusion: Excising additional shaved margins at the original surgery reduced reoperations by 48%. There is a balance between removing additional margins and desirable cosmesis after breast-conservation surgery. The decision to take extra margins should be based on the surgeon's judgment. abstract_id: PUBMED:25613784 Lumpectomy specimen margins are not reliable in predicting residual disease in breast conserving surgery. Background: In breast conserving surgery, the concordance between lumpectomy margin (LM) status and the status of the corresponding lumpectomy cavity remains uncertain. Methods: We analyzed pathology reports of lumpectomies from 2004 to 2006. We included those that contained both ink-directed LM and complete (≥4) separate corresponding shaved cavity margins (SCMs). SCM pathology was used as a surrogate for lumpectomy cavity status, to determine the predictive value of LM for residual disease.
Results: Pathology from 1,201 pairs of LM and SCM from 242 patients was compared. LM status predicted corresponding lumpectomy cavity status with 50.9% sensitivity, 69.5% specificity, 35% positive predictive value, and 81.4% negative predictive value, giving an overall accuracy of 64.9%. Conclusions: Oriented LMs are not reliable for predicting lumpectomy cavity status, and therefore not reliable for directing re-excision. Taking complete, oriented SCMs at the time of lumpectomy may improve accuracy compared with traditional LM assessment. abstract_id: PUBMED:23773680 A pilot study evaluating shaved cavity margins with micro-computed tomography: a novel method for predicting lumpectomy margin status intraoperatively. Microscopically clear lumpectomy margins are essential in breast conservation, as involved margins increase local recurrence. Currently, 18-50% of lumpectomies have close or positive margins that require re-excision. We assessed the ability of micro-computed tomography (micro-CT) to evaluate lumpectomy shaved cavity margins (SCM) intraoperatively to determine if this technology could rapidly identify margin involvement by tumor and reduce re-excision rates. Twenty-five SCM from six lumpectomies were evaluated with a Skyscan 1173 table top micro-CT scanner (Skyscan, Belgium). Micro-CT results were compared to histopathological results. We scanned three SCM at once with a 7-minute scanning protocol, and studied a total of 25 SCM from six lumpectomies. Images of the SCM were evaluated for radiographic signs of breast cancer including clustered microcalcifications and spiculated masses. SCM were negative by micro-CT in 19/25 (76%) and negative (≥2 mm) by histopathology in 19/25 (76%). Margin status by micro-CT was concordant with histopathology in 23/25 (92%). Micro-CT overestimated margin involvement in 1/25 and underestimated margin involvement in 1/25. 
Micro-CT had an 83.3% positive predictive value, a 94.7% negative predictive value, 83.3% sensitivity, and 94.7% specificity for evaluation of SCM. Evaluation of SCM by micro-CT is an accurate and promising method of intraoperative margin assessment in breast cancer patients. The scanning time required is short enough to permit real-time feedback to the operating surgeon, allowing immediate directed re-excision. abstract_id: PUBMED:35526391 Shaves off the Cavity or Specimen in Lumpectomy for Breast Cancer. Introduction: Re-excision for positive margins (margins where tumor is positive) after breast conserving surgery (BCS) is common and burdensome for breast cancer patients. Routine shave margins can reduce positive margins and re-excision rates. Cavity shaving margin (CSM) removes margins from the lumpectomy cavity edges, whereas specimen shave margin (SSM) requires ex vivo removal of margins from the resected specimen. Methods: We assessed breast cancer patients undergoing BCS who received CSM or SSM procedures from 2017 to 2019. CSM and SSM techniques were compared by analyzing positive rates of primary and final shaved margins, re-excision rates, and tissue volumes removed. Results: Of 116 patients included in this study, 57 underwent CSM and 59 underwent SSM. Primary margins were positive or close in 19 CSM patients and 21 SSM patients (33% versus 36%; P = 0.798). Seventeen CSM patients had a tumor in shaved margin specimens, compared to four patients for SSM (30% versus 7%; P < 0.001); however, final shave margins were similar (5% versus 5%; P = 0.983). Volumes of shave specimens were higher with SSM (40.7 versus 13.4 cm3; P < 0.001), but there was no significant difference in the total volume removed (146.8 versus 134.4 cm3; P = 0.540). For tumors 2 cm or larger, the total volume removed (140 versus 206 cm3; P = 0.432) and rates of final margin positivity (7.5% versus 0%; P = 0.684) were similar for both techniques.
Conclusions: CSM and SSM are effective techniques for achieving low re-excision rates. Our findings suggest that surgeons performing either CSM or SSM may maintain operative preferences and achieve similar results. abstract_id: PUBMED:38248120 Cavity Shave Margins in Breast Conservative Surgery: a Strategy to Reduce Positive Margins and Surgical Time. Background: Resection of additional tissue circumferentially around the cavity left by lumpectomy (cavity shave) was suggested to reduce rates of positive margins and re-excision. Methods: A single-center retrospective study analyzed margin status, re-excision, and surgical time in patients who underwent breast conserving surgery and cavity shave or intraoperative evaluation of resection margins. Results: Between 2021 and 2023, 594 patients were enrolled in the study. In patients subjected to cavity shave, a significant reduction in positive, focally positive, or closer margins was reported: 8.9% vs. 18.5% (p = 0.003). No difference was reported in terms of surgical re-excision (p < 0.846) (5% vs. 5.5%). Surgical time was lower in patients subjected to cavity shave (p < 0.001). On multivariate analysis, intraoperative evaluation of the sentinel lymph node (OR 1.816) and cavity shave (OR 2.909) were predictive factors for a shorter surgical time. Excluding patients subjected to intraoperative evaluation of sentinel lymph node and patients with ductal carcinoma in situ, patients that underwent the cavity shave presented a reduced surgical time (67.9 ± 3.8 min vs. 81.6 ± 2.8 min) (p = 0.006). Conclusions: Cavity shaving after lumpectomy reduced the rate of positive margins and it was associated with a significant reduction in surgical time compared to intraoperative evaluation of resection margins. abstract_id: PUBMED:37265920 Risk Factors for Positive Margins in Breast-Conserving Surgery.
Introduction: Breast-conserving surgery (BCS) followed by adjuvant radiotherapy has similar overall survival compared to mastectomy but is associated with higher rates of local recurrence. Positive surgical margins in BCS are the most important predictor of local recurrence. The aim of our study was to assess the risk factors associated with positive margins in women undergoing BCS for breast cancer in order to inform our clinical practice and minimize re-operation rates. Methods: Patients with a diagnosis of breast cancer who underwent BCS from January 2013 to January 2021 were identified from our pathology database and included in the study. All patients underwent a lumpectomy with the removal of additional shaved cavity margins. Statistical analysis was used to assess the effect of patient clinical and pathological risk factors on the rate of positive margins. Results: One hundred and twenty patients underwent BCS for breast cancer. Twenty-four percent of patients had positive margins. Of the 29 patients that underwent subsequent re-excisions, only 13 (45%) had residual disease in the re-excision specimen. In younger patients, tumors localized in lower quadrants and the presence of extensive intraductal component within invasive breast cancer increased the risk of positive margins. In addition, positive margins were encountered more significantly in patients with ductal carcinoma in situ (DCIS) compared to invasive tumors. Multivariate analysis showed that DCIS and young age were the only factors independently associated with positive margins. Conclusion: DCIS and younger patients have a higher rate of positive margins during BCS than invasive breast cancer. For such patients at higher risk of positive margins, excision of cavity shave margins and intraoperative inking may be done to lower positive margin rates. Preoperative review of breast imaging, core biopsies, and counseling of patients about the likelihood of positive margins is important.
abstract_id: PUBMED:27207096 Should New "No Ink On Tumor" Lumpectomy Margin Guidelines be Applied to Ductal Carcinoma In Situ (DCIS)? A Retrospective Review Using Shaved Cavity Margins. Background: No consensus exists for clear margins for breast-conserving surgery for pure ductal carcinoma in situ (DCIS). We examined the implications of applying a "no ink on tumor" standard for pure DCIS by correlating clear margin width with rates of residual disease. Methods: Lumpectomies with complete shaved cavity margins (SCMs) for pure DCIS at our institution from 2004 to 2007 were reviewed and patients with microinvasive cancer or multifocal disease requiring multiple wires excluded. Rates of residual disease in shaved margins were determined based on margin status of the main lumpectomy specimen using margin widths of "ink on tumor," ≤1, >1 to <2, and ≥2 mm. Results: Overall, 182 women undergoing lumpectomy for pure DCIS met eligibility criteria. In patients with "ink on tumor" in the main lumpectomy specimen, 88% had residual disease in the SCMs. Rates of residual disease in SCMs for lumpectomies with margins of <2 mm (but not on ink) were 52% compared with 13% for lumpectomies with margins ≥2 mm (p < 0.0005). Multivariate analyses confirmed the association of lumpectomy margin width and residual tumor in shaved cavity margins. Odds of residual disease in the SCM for postmenopausal patients were 74% less than for pre/perimenopausal women (odds ratio 0.26; confidence interval 0.08-0.82). Conclusions: Application of a "no ink on tumor" lumpectomy margin standard to patients with DCIS results in a significant increase in the rates of residual disease in cavity margins compared with use of a ≥2-mm margin standard. Use of narrower margins may have important implications for use of adjuvant therapy. abstract_id: PUBMED:21947583 Lumpectomy cavity shaved margins do not impact re-excision rates in breast cancer patients.
Background: The benefits of taking shaved cavity margins (SCM) at the time of lumpectomy are unclear. We sought to determine if taking SCM decreases re-excision rates by increasing the total breast tissue volume excised. Methods: We undertook a retrospective review of breast cancer patients who underwent lumpectomy from 2004 to 2006. Patients were divided into three groups. Group 1 had lumpectomy alone, group 2 had lumpectomy plus select (1-3) SCM, and group 3 had lumpectomy plus complete (≥4) SCM. Pathologic findings and surgical outcomes were compared between groups. Results: 773 cancers treated by lumpectomy were included in this study; 197 were in group 1, 130 were in group 2, and 446 were in group 3. The mean total volume of breast tissue excised in group 1 (106.6 cm(3)) was significantly larger than the volume excised in groups 2 (79.3 cm(3)) and 3 (76.3 cm(3)). Rates of re-excision and successful breast-conservation therapy (BCT) were not significantly different between groups. Despite a lower total volume of breast tissue excised in groups 2 and 3, there was no significant increase in locoregional recurrence rates (LRR) at median follow-up of 54 months. Conclusions: Taking additional SCM during lumpectomy resulted in a significantly lower overall volume of breast tissue excised, with no increase in LRR. Contrary to prior studies, we found that SCM did not decrease re-excision rates or impact the success of BCT. Therefore, the main advantage of taking SCM appears to be that less breast tissue is excised, which could potentially improve cosmetic outcomes. abstract_id: PUBMED:38405107 Does Nipple-Ward Positive Margin Contribute to a Higher Rate of Re-Excision Procedures After a Lumpectomy with Pathology-Confirmed Positive Margins? A Retrospective Study. Background: Positive margins on lumpectomy specimens are associated with a twofold increased risk of local breast tumor recurrence. 
Prior literature has demonstrated various techniques and modalities for assessing margin status to reduce re-excision rates. However, there is a paucity of literature analyzing which margin contributes to the highest re-excision rates. Therefore, the primary aim of the study was to investigate whether the nipple-ward margins resulted in a higher rate of re-excision in our patient population. Methods: A retrospective chart review was performed on patients who had re-excision surgery. Nipple-ward margin was identified by correlating radiological and pathological reports. A cut-off of more than 25% was used to demonstrate correlation between nipple-ward margin and re-excision rate. Results: A total of 98 patients' data were analyzed, with 41 (41.8%), 14 (14.3%), 5 (5.1%), and 38 (38.8%) diagnosed with DCIS, IDC, ILC, and mixed pathology on their margins, respectively. Overall, 48% (n=47) of the positive margins were nipple-ward, with 44.7% (n=21) reporting DCIS. Upon stratification, 45 (45.9%) cases were single-margin positive, with 26 (57.8%) being nipple-ward. Furthermore, the remaining 53 (54.1%) patients had multiple positive margins, with 21 (39.6%) nipple-ward cases. Conclusion: Positive nipple-ward margins significantly contribute to a higher re-excision rate (p < 0.001); 48% of re-excision surgeries had positive nipple-ward margins, and 57.8% of positive single-margin cases were nipple-ward. Taking an additional shave during initial lumpectomy decreases re-excision rates. However, re-excision rates may be further reduced by planning the lumpectomy as a more elliptical rather than spherical resection, taking an additional cavity shave (ie, a larger volume) in the nipple-ward direction while minimizing the remaining cavity shaves, so that the total volume resected remains unchanged. Nevertheless, future studies with larger sample sizes are required to bolster our findings.
abstract_id: PUBMED:27857263 The MarginProbe® System: An Innovative Approach to Reduce the Incidence of Positive Margins Found After Lumpectomy. The goal of lumpectomy surgery for breast cancer is to completely remove the tumor and have clear margins, reducing the rates of local recurrence. The MarginProbe® System is a new device that can detect microscopic tumor cells at or close to the margin of the surgical resection intraoperatively, providing the surgeon with the ability to re-excise tissue at the time of surgery, reducing the need for a second surgery to obtain clear margins. Answer: Additional shaved margins at the time of lumpectomy do not completely eliminate the need for re-excision, but they can significantly reduce the likelihood of requiring a second surgery. A retrospective study showed that if additional margins were not taken, 66% of patients would have required re-excision, but by taking additional shaved margins, re-excision was eliminated in 48% of cases (PUBMED:18809063). However, lumpectomy specimen margins are not entirely reliable in predicting residual disease, and therefore not reliable for directing re-excision (PUBMED:25613784). The use of micro-computed tomography (micro-CT) to evaluate shaved cavity margins intraoperatively has shown promise in accurately identifying margin involvement by tumor, potentially reducing re-excision rates (PUBMED:23773680). Comparing cavity shaving margin (CSM) and specimen shave margin (SSM) techniques, both were found to be effective in achieving low re-excision rates, suggesting that surgeons can maintain their operative preferences and achieve similar results with either technique (PUBMED:35526391). Cavity shaving after lumpectomy has been associated with a significant reduction in the rate of positive margins and a reduction in surgical time compared to intraoperative evaluation of resection margins (PUBMED:38248120).
However, another study found that lumpectomy cavity shaved margins did not impact re-excision rates, suggesting that the main advantage of taking shaved margins may be a lower overall volume of breast tissue excised, potentially improving cosmetic outcomes (PUBMED:21947583). In conclusion, while additional shaved margins at the time of lumpectomy can reduce the need for re-excision, they do not guarantee the elimination of this need. The decision to take extra margins should be based on the surgeon's judgment, balancing the goal of complete tumor removal with the desire for good cosmetic outcomes.
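The margin-prediction abstracts above summarize test performance as sensitivity, specificity, positive/negative predictive value, and overall accuracy. As a generic aside (a minimal sketch, not drawn from any cited study's data), all five metrics follow from the four cells of a 2x2 confusion matrix:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Diagnostic-accuracy metrics from a 2x2 confusion matrix.

    tp/fn: diseased cases the test calls positive/negative;
    fp/tn: disease-free cases the test calls positive/negative.
    """
    return {
        "sensitivity": tp / (tp + fn),                # true-positive rate
        "specificity": tn / (tn + fp),                # true-negative rate
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),  # overall agreement
    }

# Illustrative counts only, not the cell counts from any abstract above:
print(diagnostic_metrics(tp=50, fp=10, fn=50, tn=90))
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence in the sample, which is one reason the same test can report quite different predictive values across cohorts.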
Instruction: Is diameter-axial-polar scoring predictive of renal functional damage in patients undergoing partial nephrectomy? Abstracts: abstract_id: PUBMED:24304131 Prediction of partial nephrectomy outcomes by using the diameter-axis-polar nephrometry score. Objectives: To evaluate the utility of diameter-axis-polar nephrometry score to evaluate partial nephrectomy outcomes. Methods: Renal tumors of 127 patients with a functional contralateral kidney who underwent partial nephrectomy were scored using the diameter-axis-polar and R.E.N.A.L nephrometry scores. The mean tumor diameter was 2.9 cm (range 1.0-8.0 cm) and warm ischemic time was 27.3 min (range 12-46 min). All patients underwent (99m)Tc-mercaptoacetyltriglycine renal scintigraphy preoperatively and 6 months postoperatively to assess effective renal plasma flow. We compared nephrometry scores with estimated glomerular filtration rate, effective renal plasma flow and ischemic time. Results: The mean serum creatinine level was 0.84 mg/dL preoperatively and 0.93 mg/dL at 6 months postoperatively. The mean estimated glomerular filtration rate decreased from 74.2 to 66.9 mL/min/1.73 m(2). Effective renal plasma flow of the operated kidney decreased to 73.5% from baseline (from 157.0 to 114.3 mL/min/1.73 m(2)). Univariate and multivariate analyses showed that diameter-axis-polar score had a stronger association with the percent change in estimated glomerular filtration rate, effective renal plasma flow in the treated kidney and ischemia time compared with the R.E.N.A.L score. Conclusions: Diameter-axis-polar nephrometry score is a useful tool for the assessment of small renal tumors amenable to partial nephrectomy, and it better predicts postoperative functional changes and ischemic time compared with the R.E.N.A.L nephrometry score. abstract_id: PUBMED:26222856 Diameter-Axial-Polar Nephrometry is Predictive of Surgical Outcomes Following Partial Nephrectomy.
This study sought to evaluate the predictive value of the DAP (diameter-axial-polar) nephrometry system on surgical outcomes following partial nephrectomy (PN). This was a retrospective study of 237 patients who underwent open or minimally invasive PN for renal tumors at a single tertiary care center between 2009 and 2013. The primary outcomes included ischemia time >20 minutes and percentage of estimated glomerular filtration rate (eGFR) decline >10%. Statistical analysis was performed to study associations and predictions. The DAP sum score exhibited a statistically significant correlation with ischemia time, operative time (OT), estimated blood loss (EBL), length of hospital stay (LOS), and percent change in eGFR. The DAP sum score (odds ratio [OR]: 1.749; 95% confidence interval [CI] 1.379-2.220; P < 0.001) and conventional laparoscopy and laparo-endoscopic single-site (CL&LESS) surgery versus the open surgical approach (OR: 5.736; 95% CI: 2.529-13.011; P < 0.001) independently predicted an ischemia time >20 minutes. Similarly, the DAP sum score (OR: 1.297; 95% CI 1.051-1.602; P = 0.016), age-weighted Charlson comorbidity index (CCI) (OR: 4.730; 95% CI 1.463-15.291; P = 0.009), EBL (OR 2.433; 95% CI 1.095-5.407; P = 0.029), and ischemia time (OR 3.332; 95% CI 1.777-6.249; P < 0.001) were identified as independent predictors of eGFR decline >10%. Furthermore, the DAP score × ischemia time interactions were statistically significant (P < 0.001). We confirmed the predictive value of the DAP nephrometry score with respect to ischemia time and renal functional decline in an independent external cohort of patients undergoing PN. The effect of the DAP score on renal functional decline partially depends on that of ischemia time, and the individual component DAP scores may have different effects on clinical outcomes.
abstract_id: PUBMED:26585343 Comparison of diameter-axial-polar nephrometry score and RENAL nephrometry score for surgical outcomes following laparoscopic partial nephrectomy. Objectives: To compare diameter-axial-polar nephrometry score with RENAL nephrometry score for surgical outcomes after laparoscopic partial nephrectomy. Methods: We retrospectively reviewed data from 134 patients who underwent laparoscopic partial nephrectomy, using diameter-axial-polar and RENAL scores. We analyzed data for warm ischemic time and estimated blood loss intraoperatively, and percentage change in estimated glomerular filtration rate 6 months and 1 year postoperatively. Both scores were classified as low-, middle- and high-risk, and were used to compare the three analyzed parameters. Results: The median tumor size was 2.3 cm (range 1.0-5.4 cm); warm ischemic time was 25.4 min (range 6.5-57 min); and at 6 months and 1 year, percentage change in estimated glomerular filtration rate was 93% (range 51.7-133.3%) and 91% (range 49.4-137.6%), respectively. There were no significant differences in warm ischemic time and estimated blood loss for RENAL between risk groups (P = 0.38 and 0.09, respectively), but significant differences between groups for diameter-axial-polar score (P = 0.02 and 0.01, respectively). There were no significant differences in either score between groups for percentage change in estimated glomerular filtration rate at 6 months and 1 year. A total of 27 high-risk cases with a diameter-axial-polar score of seven points underwent laparoscopic partial nephrectomy safely; all three cases with a diameter-axial-polar score of eight points were converted to open partial nephrectomy. Conclusions: Diameter-axial-polar score seems to estimate the complexity of tumor characteristics in patients undergoing laparoscopic partial nephrectomy better than RENAL score. It has a better correlation with warm ischemic time and estimated blood loss. 
abstract_id: PUBMED:23651467 Is diameter-axial-polar scoring predictive of renal functional damage in patients undergoing partial nephrectomy? An evaluation using technetium Tc 99m (⁹⁹Tcm) diethylene-triamine-penta-acetic acid (DTPA) glomerular filtration rate. Objective: To identify the effects of diameter-axial-polar (DAP) scoring and other clinical variables on renal functional outcomes after partial nephrectomy (PN). Material And Methods: A total of 59 patients with complete radiographic, clinical and follow-up information were included for analysis. Technetium Tc 99m ((99)Tc(m))-diethylene-triamine-penta-acetic acid (DTPA) renal scintigraphy was used to determine the glomerular filtration rate (GFR) of both kidneys and each kidney individually. All cross-sectional images were reviewed by a single radiologist and a DAP score was assigned. Results: The median decline in total GFR after PN was 13% at a median follow-up of 12 months (from 86.8 to 76.2 mL/min per 1.73 m(2), P < 0.001). The median GFR of the operated kidney showed a significant decrease peri-operatively (42.4 to 27.1 mL/min per 1.73 m(2), P < 0.001). The function of the contralateral kidney showed a significant increase (43.5 to 48.8 mL/min per 1.73 m(2), P < 0.001). On multivariate analysis, preoperative total GFR, ischaemia time and DAP sum score were independent predictors of absolute functional decline of the affected kidney (all P < 0.001), while only preoperative total GFR and DAP sum score were significantly associated with the total absolute GFR reduction (all P < 0.001). Conclusion: Preoperative renal function status and DAP score of renal tumours are the primary determinants of long-term functional outcomes after PN, but renal ischaemia damage to the operated kidney after PN is possibly masked by functional compensation of the contralateral healthy kidney if only overall renal function is assessed.
abstract_id: PUBMED:36456124 Association of ABC and RENAL Scoring Systems With Morbidity in Partial Nephrectomy. Background/aim: Different nephrometry scoring systems (NSSs) are used to evaluate the surgical complexity and outcomes of partial nephrectomy (PN) in patients with small renal tumors. This study aimed to assess the validity of nephrometry scoring systems towards aiding the preoperative planning of laparoscopic partial nephrectomy (PN). Patients And Methods: Data of 77 patients who underwent partial nephrectomy at the Puerto Real University Hospital between January 2011 and December 2017 were retrospectively analyzed. Statistical analyses were carried out to determine whether there was an association between the complexity of the surgical procedure and the assigned nephrometry scores. Results: Operative complications (bleeding volume, conversion to open surgery, perioperative bleeding, and postoperative fistula) were significantly associated with independent variables (age, sex, body mass index, radiological tumor size, and operative ischemia time) and with the classification of patients using arterial-based complexity (ABC) and radius endophytic/exophytic nearness anterior-posterior location (RENAL) scores. There was also a strong correlation between the RENAL and ABC scores [Cramer's V coefficient (0.682) and Fisher's test (p < 0.0001)]. Conclusion: The RENAL and ABC scores are associated with the risk of the complexity of partial nephrectomy for T1 renal tumors, even for ≥T1b tumors and/or with complex anatomical features. abstract_id: PUBMED:22698624 Diameter-axial-polar nephrometry: integration and optimization of R.E.N.A.L. and centrality index scoring systems. Purpose: The R.E.N.A.L. (radius, exophytic/endophytic properties, nearness of tumor to collecting system or sinus, anterior/posterior) and centrality index nephrometry scores enable systematic, objective assessment of anatomical tumor features.
We systematically compared these systems using item analysis test theory to optimize scoring methodology. Materials And Methods: Analysis was based on 299 patients who underwent partial nephrectomy from 2007 to 2011 and met study inclusion criteria. Percent functional volume preservation, and R.E.N.A.L. and centrality index scores were measured. Late percent glomerular filtration rate preservation was calculated as the ratio of the late to the preoperative rate. Interobserver variability analysis was done to assess measurement error. All data were statistically analyzed. Results: A novel scoring method termed DAP (diameter-axial-polar) nephrometry was devised using a data based approach. Mean R.E.N.A.L., centrality index and DAP scores for the cohort were 7.3, 2.5 and 6 with 84%, 90% and 95% interobserver agreement, respectively. The DAP sum score and all individual DAP scoring components were associated with the clinical outcome, including percent functional volume preservation, warm ischemia time and operative blood loss. DAP scoring criteria allowed for the normalization of score distributions and increased discriminatory power. DAP scores showed strong linear associations with percent functional volume preservation (r(2) = 0.97) and late percent glomerular filtration rate preservation (r(2) = 0.81). Each 1 unit change in DAP score equated to an average 4% change in kidney volume. Conclusions: DAP nephrometry integrates the optimized attributes of the R.E.N.A.L. and centrality index scoring systems. DAP scoring was associated with simplified methodology, decreased measurement error, improved performance characteristics, improved interpretability and a clear association with volume loss and late function after partial nephrectomy. abstract_id: PUBMED:25138578 Usefulness of the diameter-axial-polar nephrometry score for predicting perioperative parameters in robotic partial nephrectomy. 
Purpose: The present study aimed to verify the association between diameter-axial-polar (DAP) nephrometry and surgical outcomes, postoperative renal function, and perioperative complications in patients undergoing robotic partial nephrectomy (RPN). Methods: Diameter-axial-polar nephrometry was assessed using computed tomography or magnetic resonance imaging on 158 patients who received RPN between July 2007 and February 2013. Demographic data, surgical data, and perioperative complications were recorded, and percent change between the preoperative and last estimated glomerular filtration rate (eGFR) was determined. Linear regression analysis was conducted to assess the relationship between the DAP sum score and warm ischemia time (WIT), estimated blood loss (EBL), and percent decrease in eGFR. Multivariable linear regression analysis was conducted to determine the relationship between each DAP scoring parameter and surgical outcomes. Results: The median patient age was 50.5 years and median DAP sum score was 6. On linear regression, the DAP sum score was associated with WIT and EBL. On multivariable regression, all DAP parameters were associated with WIT, but the polar distance was not associated with EBL. Patients with a higher DAP sum score showed greater decrease in eGFR after RPN. Patients with a DAP sum score of 6 or higher had a higher risk of major complications than those with a DAP sum score below 6. Conclusions: Diameter-axial-polar nephrometry predicted WIT and EBL in patients who underwent RPN. It was also associated with the decrease in eGFR and rate of major perioperative complications, and thus can be useful for surgical planning or patient counseling before RPN. abstract_id: PUBMED:24767522 Correlations of tumor size, RENAL, centrality index, preoperative aspects and dimensions used for anatomical, and diameter-axial-polar scoring with warm ischemia time in a single surgeon's series of robotic partial nephrectomy. 
Objective: To evaluate the association of tumor size, renal nephrometry systems (RNSs), including RENAL, centrality index (C-index), preoperative aspects and dimensions used for anatomical, and diameter-axial-polar scoring, and individual categories of these RNSs with warm ischemia time (WIT). Methods: A single surgeon's series of robotic partial nephrectomy patients for whom RNS was available were identified from 2012 to 2013. Spearman's correlation was used to evaluate associations between the RNSs and individual categories and WIT. Results: Analysis was performed based on 69 patients with a median tumor size of 3.5 cm (interquartile range 2.6-4.2). Overall, each scoring system and tumor size were found to have a statistically significant association with the WIT, with the C-index system exhibiting the strongest correlation (coefficient: -0.609, P < .001), which was confirmed in the series beyond the surgeon's learning curve. In the subgroup of relatively small renal masses (<3 cm), there was a statistically significant association between the C-index, preoperative aspects and dimensions used for anatomical, and diameter-axial-polar systems with WIT. However, for tumors >3 cm, none of the RNSs was found to have a statistically significant correlation with WIT, except for the tumor size (coefficient: 0.354, P = .027). For subcategories, axial scoring was found to have a consistent statistically significant correlation with WIT. Conclusion: Overall, each RNS and tumor size strongly correlated with WIT in patients undergoing robotic partial nephrectomy. However, the RNS outperformed tumor size in small (<3 cm) renal masses with an optimal discriminating power, whereas for relatively larger (>3 cm) tumors, WIT was significantly dependent on tumor size, and every centimeter may count.
abstract_id: PUBMED:34322455 Open and Laparoscopic Partial Nephrectomy: Comparison and Validation of Preoperative Scoring Systems, Including PADUA, RENAL, ABC Nephrometric Scores and Perinephric Fat Evaluation with Mayo Adhesive Probability Score. Purpose: To evaluate potential associations between 4 nephrometric scoring systems, namely the PADUA (preoperative aspects and dimensions used for anatomical classification), RENAL (radius endophytic/exophytic nearness anterior posterior location), ABC (arterial-based complexity), and MAP (Mayo adhesive probability) scores and their individual components, with surgical and oncological outcomes of patients undergoing a partial nephrectomy. Materials And Methods: A consecutive, monocentric cohort of partial nephrectomy patients was retrospectively analyzed. PADUA, RENAL, ABC and MAP nephrometry scores were determined from preoperative axial images. Unadjusted and adjusted associations between overall scores, individual components, surgical approach, complications and oncological outcomes were determined using univariate and multivariate logistic regressions. Results: A total of 189 partial nephrectomies were performed in 181 patients, via an open or a laparoscopic approach. Among scoring systems, only the MAP classification, which assesses adherent perinephric fat, was associated with severe surgical complications as well as with operative time (p < 0.05). Among all components of the PADUA and RENAL scores, only proximity of the tumor to the collecting system was associated with overall surgical complication rates, while the diameter of the tumor influenced the operative time (p < 0.05). The ABC score was not relevant. Male gender, antiplatelet therapy, and a laparoscopic approach were associated with higher overall surgical complication rates (p < 0.05). The number of oncologic recurrences during follow-up was too low to run statistical analyses.
Conclusion: Nephrometry scores could be simplified to predict surgical complications after partial nephrectomy. In this framework, adherent perinephric fat seemed to be strongly associated with an increased risk of surgical complications. abstract_id: PUBMED:23701054 MR renographic measurement of renal function in patients undergoing partial nephrectomy. Objective: The purpose of this review is to describe the role of functional renal MRI, or MR renography, in the care of patients with renal masses undergoing partial nephrectomy. Conclusion: MR renography can be used to monitor renal functional outcome for patients undergoing partial nephrectomy and may help guide patient selection in this population with elevated risk of chronic kidney disease. Answer: Yes, diameter-axial-polar (DAP) scoring is predictive of renal functional damage in patients undergoing partial nephrectomy. Studies have shown that the DAP score is associated with postoperative changes in renal function, such as a decrease in estimated glomerular filtration rate (eGFR) and effective renal plasma flow in the operated kidney (PUBMED:24304131). Additionally, the DAP score has been found to be predictive of surgical outcomes, including ischemia time and renal functional decline following partial nephrectomy (PUBMED:26222856). Furthermore, the DAP score has been shown to be an independent predictor of absolute functional decline of the affected kidney and total absolute GFR reduction after partial nephrectomy (PUBMED:23651467). These findings suggest that the DAP nephrometry score can be a useful tool for assessing the complexity of renal tumors and predicting renal functional outcomes after partial nephrectomy.
Instruction: Is the self-report of recent cocaine or methamphetamine use reliable in illicit stimulant drug users who present to the Emergency Department with chest pain? Abstracts: abstract_id: PUBMED:19081702 Is the self-report of recent cocaine or methamphetamine use reliable in illicit stimulant drug users who present to the Emergency Department with chest pain? Background: Use of illicit drugs results in an increased risk of morbidity and mortality, which is often seen in the Emergency Department (ED). Chest pain is frequently associated with cocaine and methamphetamine use. Objectives: To determine if the self-report of recent cocaine or methamphetamine use is reliable in illicit stimulant drug users who present to the ED with chest pain. Methods: A retrospective review of patients presenting to the ED from July 1, 2004 through June 30, 2006 was undertaken. Inclusion criteria were: age ≥ 18 years, chief complaint of chest pain, documented social history of drug abuse, positive urine toxicology screen and myoglobin and troponin levels measured, sent from the ED. Results: For the 318 patients who met the inclusion criteria, the self-report rate of cocaine or methamphetamine use was 51.8% (95% confidence interval [CI] 0.46-0.57). No difference was found in the self-report rate between users of methamphetamine vs. cocaine (odds ratio [OR] 1.12, 95% CI 0.7-1.7). There also was no difference in the self-report rate by patient age < 50 years compared to patient age ≥ 50 years (OR 0.67, 95% CI 0.42-1.08). The self-report rate for males compared to females was not significantly different (OR 0.87, 95% CI 0.54-1.4). Patients who had a positive troponin were not significantly more likely to self-report drug use than patients who did not have a positive troponin (OR 1.1, 95% CI 0.55-2.2). Conclusion: The self-report rate among cocaine- or methamphetamine-using patients presenting to the ED with chest pain was 51.8%.
There seems to be no significant difference in the self-report rate among those who use methamphetamine vs. those who use cocaine, nor by gender, nor stratified by age over 50 years. abstract_id: PUBMED:19152201 Prescription opiate misuse among rural stimulant users in a multistate community-based study. Objectives: The purpose of the current analysis was to examine the factors associated with prescription opiate misuse among stimulant users from rural counties in Arkansas, Kentucky, and Ohio (N = 714). Methods: Multiple logistic regression was utilized to determine the independent correlates of recent (prior 6 months) prescription opiate misuse. Results: More than half of participants (53.2%) reported prescription opiate misuse in the previous 6 months. Other drug use (heroin, cocaine, methamphetamine, and marijuana) and anxiety (Adjusted Odds Ratio: 2.04, 95% Confidence Interval: 1.60, 2.59) were independently associated with prescription opiate misuse. Chronic pain and other health indicators were not associated with prescription opiate misuse after adjustment for covariates. Conclusions: Results indicate that illicit drug involvement and psychiatric symptoms may be driving the high rates of prescription opiate misuse among rural stimulant users. These findings have implications for the provision of treatment in resource-deprived rural areas. abstract_id: PUBMED:22367607 Energy drink use and adverse effects among emergency department patients. Energy drink usage is common and contains caffeine or other stimulants. We evaluated demographics, prevalence, reasons and adverse effects with consuming energy beverages. Cross-sectional study of a convenience sample of patients recruited from two San Diego Emergency Departments from January to December 2009. One-thousand-two-hundred-ninety-eight subjects participated of which 52.6% were male. Ethnicity: Caucasian 48.3%, African American 17%, Hispanic 18%, Other 16.7%. 
Age ranges: 18-29 years (38.4%), 30-54 years (49.6%) and greater than 55 years (12%). Reasons for use: 57% to "increase energy", 9.5% for studying/work projects, 2.4% during prolonged driving, 2% to improve sports performance, 6.3% with ethanol, and 22.1% for "other" reasons. Adverse reactions were reported by 33.5% (429) of patients: 280 reported feeling "shaky/jittery", 136 insomnia, 150 palpitations, 82 gastrointestinal upset, 68 headache, 39 chest pain, and 6 seizures. Eighty-five patients reported co-ingestion with illicit "stimulants" including cocaine and methamphetamine. We identified that one-third of patients reported at least one adverse effect. Whilst most were not severe, a small number were serious, e.g., seizures. In addition, some reported purposely ingesting with illicit drugs.
Adverse reactions were reported in 40% of the subjects, including insomnia (19%), feeling "jittery" (19%), palpitations (16%), gastrointestinal upset (11%), headache (8%), chest pain (5%), shortness of breath (4%), and seizures (1%). Conclusions: Both brand name and packaging influenced the choice of energy drink in most subjects. Forty percent reported at least 1 adverse effect. While most adverse effects were not severe, a small number were serious. In addition, we showed intentional ingestion with ethanol and illicit drugs. Of additional concern is that both brand and packaging seem to directly affect choice of energy drink consumed. abstract_id: PUBMED:28913001 Methamphetamine Use and Emergency Department Utilization: 20 Years Later. Background: Methamphetamine (MAP) users present to the emergency department (ED) for myriad reasons, including trauma, chest pain, and psychosis. The purpose of this study is to determine how their prevalence, demographics, and resource utilization have changed. Methods: Retrospective review of MAP patients over 3 months in 2016. Demographics, mode of arrival, presenting complaints, disposition, and concomitant cocaine/ethanol use were compared to a 1996 study at the same ED. Results: 638 MAP-positive patients, 3,013 toxicology screens, and 20,203 ED visits represented an increase in prevalence compared to 1996: 461 MAP-positive patients, 3,102 screens, and 32,156 visits. MAP patients were older compared to the past. Mode of arrival was most frequently by ambulance but at a lower proportion than 1996, as was the proportion of MAP patients with positive cocaine toxicology screens and ethanol coingestion. Admission rate was lower compared to the past, as was discharge to jail. The proportion of MAP patients presenting with blunt trauma was lower compared to the past and higher for chest pain. Conclusion: A significant increase in the prevalence of MAP-positive patients was found.
Differences in presenting complaints and resource utilization may reflect the shifting demographics of MAP users, as highlighted by an older patient population relative to the past. abstract_id: PUBMED:22439144 Bath salts: they are not what you think. Psychoactive bath salts are a relatively new group of designer drugs sold as tablets, capsules, or powder and purchased in places such as tobacco and convenience stores, gas stations, head shops, and the Internet. Bath salts are stimulant agents that mimic cocaine, lysergic acid diethylamide, methamphetamine, or methylenedioxymethamphetamine (ecstasy). The most common bath salts are the cathinone derivatives 3,4-methylenedioxypyrovalerone (MDPV), 4-methylmethcathinone (mephedrone), and 3,4-methylenedioxy-N-methylcathinone (methylone). The drugs cause intense stimulation, euphoria, elevated mood, and a pleasurable "rush". Tachycardia, hypertension, peripheral constriction, chest pain, hallucinations, paranoia, erratic behavior, inattention, lack of memory of substance use, and psychosis have been observed in those who have used bath salts. The U.S. Drug Enforcement Administration recently exercised an emergency authority to name three key ingredients in bath salts as Schedule I, thereby making them illegal to possess or sell in the United States. Nursing implications related to both clinical and educational settings are discussed. abstract_id: PUBMED:21171849 Case series of individuals with analytically confirmed acute mephedrone toxicity. Context: Previous reports of acute toxicity/harm associated with mephedrone use have been based on self-reported mephedrone use; toxicological screening has not been undertaken in these cases to determine whether mephedrone has been used. Objective: To report the first case series of analytically confirmed mephedrone-related acute toxicity.
Materials And Methods: Serum samples were collected from individuals presenting to an emergency department (ED) with acute toxicity related to self-reported mephedrone use. Toxicological analysis, by gas chromatography coupled with mass spectrometry and liquid chromatography with tandem mass spectrometry, was performed to qualitatively confirm mephedrone use. Symptoms/signs of acute mephedrone toxicity and basic physiological parameters were extracted from the routine ED records. Results: Acute mephedrone-related toxicity was analytically confirmed in seven male patients; the mean ± SD age was 24.6 ± 6.5 years (range 16-36 years). Agitation (four patients) was the most common symptom/sign reported; other common symptoms/signs included: palpitations (two patients); chest pain (two patients); self-limiting pre-hospital seizures (one patient) and headaches (one patient). The mean heart rate was 109.1 ± 21.8 (range 80-140) beats per minute; one patient had a "severe" tachycardia (heart rate of ≥ 140 bpm). The mean systolic blood pressure was 153.0 ± 39.6 (range 110-210) mmHg; three patients had clinically significant hypertension (systolic blood pressure ≥ 160 mmHg). Discussion: These analytically confirmed acute mephedrone toxicity presentations had clinical features of toxicity consistent with an acute sympathomimetic toxidrome (e.g. hypertension, tachycardia and agitation). These findings are similar to the pattern of toxicity seen with other sympathomimetic recreational drugs such as 3,4-Methylenedioxymethamphetamine (MDMA) and cocaine. Conclusion: The process for determining whether a novel psychoactive substance should be controlled often relies on demonstrated/proven acute harm associated with its use. It is important that clinical toxicologists undertake appropriate biological sampling and toxicological analyses in suspected cases of "novel psychoactive drug" toxicity.
This will ensure that both clinicians and legislative authorities are informed of the confirmed pattern of toxicity associated with these drugs. abstract_id: PUBMED:24054881 Cardiac complications of adult methamphetamine exposures. Background: Methamphetamine use is an increasing problem in the United States. Despite the similarities to cocaine in clinical presentation, methamphetamine effects on the cardiovascular system are not as clearly understood. Objectives: The purpose of this retrospective review is to evaluate the cardiovascular outcomes of methamphetamine exposures reported to both the California Poison Control System (CPCS) and two Level 1 trauma centers. Methods: CPCS records from 2000-2009, as well as inpatient and emergency department charts from two Level 1 trauma centers during the time frame of 2000-2004 were reviewed. Charts of adults with coded methamphetamine exposures were included. Results: Records of 2356 adults with methamphetamine exposure were assigned levels of severity according to National Poison Data System Medical Outcomes Criteria. Out of the combined charts reviewed from the CPCS data and hospital registries, 584 cases were coded as minor effects, 450 as moderate effects, 208 as major effects, and 28 as deaths. Of the included 1270 cases, 627 electrocardiograms (ECGs) were analyzed. Together, troponins and ECGs showed evidence of myocardial ischemia in 82 total cases, which is 6.5% of the cases reviewed. Cocaine was a co-ingestant in 10 of the 82 cases. Conclusions: This study presents a large cohort of methamphetamine users. The study found that chest pain and myocardial ischemia were reported in a number of cases. Although no true incidence can be taken from this dataset, it provides a large-scale review of methamphetamine use and its possible association with acute coronary syndromes. abstract_id: PUBMED:37434776 An Uncommon Presentation of Acute Thoracic Aortic Dissection. 
We present a case of a 40-year-old Caucasian male with past medical history of polysubstance abuse (cocaine and methamphetamine), who presented to the emergency department (ED) complaining of intermittent cough with associated chest discomfort and shortness of breath for 2 weeks. Initial vital signs demonstrated borderline tachycardia (98 beats per minute), tachypnea (37 breaths per minute), and hypoxia (oxygen saturation 89% on room air), and his physical exam was grossly unremarkable. A preliminary workup including a computed tomography angiography (CTA) revealed a type A aortic dissection with both thoracic and abdominal involvement, for which the patient was admitted. This patient had resection of the ascending aorta with graft placement, cardiopulmonary bypass, aortic root replacement using a composite prosthesis, and left and right coronary reconstruction and reimplantation, and survived a complicated hospital course. This case demonstrates the classic association known to exist between recreational drug use, specifically stimulants such as cocaine and amphetamines, and acute aortic dissection (AAD). However, such a presentation of borderline subacute, painless dissection in the setting of polysubstance use raises further questions, since uncommon AAD is typically found in higher-risk populations such as those with connective tissue disorders (Marfan syndrome, Ehlers-Danlos syndrome, Loeys-Dietz syndrome), bicuspid aortic valve, chronic hypertension, or previous aortic pathology. We therefore suggest clinicians strongly consider uncommon AAD as part of their differential diagnosis in patients with known or highly suspected polysubstance abuse. abstract_id: PUBMED:18091405 Evaluation of patients with methamphetamine- and cocaine-related chest pain in a chest pain observation unit. Objective: Risk of acute coronary events in patients with methamphetamine and cocaine intoxication has been described.
Little is known about the need for additional evaluation in these patients who do not have evidence of myocardial infarction after the initial emergency department evaluation. We herein describe our experience with these patients in a chest pain unit (CPU) and the rate of cardiac-related chest pain in this group. Methods: Retrospective analysis of patients evaluated in our CPU from January 1, 2000 to December 16, 2004 with a history of chest pain. Patients who had a positive urine toxicologic screen for methamphetamine or cocaine were included. No patients had ECG or cardiac injury marker evidence of myocardial infarction or ischemia during the initial emergency department evaluation. A diagnosis of cardiac-related chest pain was based upon positive diagnostic testing (exercise stress testing, nuclear perfusion imaging, stress echocardiography, or coronary artery stenosis >70%). Results: During the study period, 4568 patients were evaluated in the CPU. A total of 1690 (37%) of patients admitted to the CPU underwent urine toxicologic testing. The urine toxicologic test result was positive for cocaine or methamphetamine in 224 (5%). In the 2871 patients who underwent diagnostic testing for coronary artery disease (CAD), 401 (14%) were found to have positive results. There was no difference in the prevalence of CAD between those with positive results for toxicology screens (26/156, 17%) and those without (375/2715, 13%, RR 1.2, 95% CI 0.8-1.7). Conclusion: These findings suggest a relatively high rate of CAD in patients with methamphetamine and cocaine use evaluated in a CPU. Answer: The self-report of recent cocaine or methamphetamine use in illicit stimulant drug users who present to the Emergency Department (ED) with chest pain has a reliability rate of 51.8% (PUBMED:19081702). This means that slightly more than half of the patients in the study who met the inclusion criteria reported their use of cocaine or methamphetamine.
The study found no significant difference in the self-report rate between users of methamphetamine versus cocaine, nor by gender, nor stratified by age over 50 years (PUBMED:19081702). This suggests that the reliability of self-reporting does not significantly vary across these different demographics within the context of the study. However, it is important to note that this still indicates that a substantial proportion of patients may not self-report their drug use, which could have implications for their clinical assessment and treatment in the ED.
Instruction: Long-term survival following Kasai portoenterostomy: is chronic liver disease inevitable? Abstracts: abstract_id: PUBMED:28940041 A review of long-term outcome and quality of life of patients after Kasai operation surviving with native livers. Biliary atresia (BA) is a rare neonatal cholestatic disease which leads to progressive obliterative cholangiopathy, resulting in biliary obstruction and jaundice. The standard surgical treatment is hepatoportoenterostomy (Kasai operation). Although approximately 50% of the affected infants would require liver transplantation within the first 2 years of life, the other 50% of the patients can live for years with their native liver, despite the progression of cirrhosis and chronic liver disease. Many of these patients will be affected by long-term complications such as repeated cholangitis, portal hypertension, variceal bleeding, growth problems, biochemical abnormalities, and hepatic osteodystrophy. These morbidities have a huge impact on the quality of life of the patients and their families. Herein, we performed a comprehensive review on the clinical status and quality of life of long-term survivors of biliary atresia with their native livers, to facilitate meticulous longitudinal follow-up of these patients and to alert caregivers to the probable complications to be aware of. abstract_id: PUBMED:29110825 Long-term outlook in biliary atresia. The oldest survivors from the Kasai portoenterostomy originate in Sendai, Japan, and are approaching their 60th birthday. These represent the tip of an expanding cohort of adults born with this previously fatal condition. Increasingly, transition to adult-biased hepatologists and physicians will be the expectation of many with this condition. However, unlike their usual patients with alcohol-, drug-, or virally mediated liver disease, these patients are different, with different expectations of health and quality of life.
Cure is not on the cards for most of these patients: they survive with impaired bile flow and increased liver fibrosis and cirrhosis, with the threat of cholangitis and portal hypertension still apparent. We review the reported statistics on long-term survival, essentially from Japan and Western Europe (such as the UK and France), and the range of complications that may still beset this group.
abstract_id: PUBMED:33532908 Imaging prediction with ultrasound and MRI of long-term medical outcome in native liver survivor patients with biliary atresia after Kasai portoenterostomy: a pilot study. Purpose: To comparatively assess the role of abdominal ultrasound (US) and magnetic resonance imaging (MRI) in predicting long-term medical outcome in native liver survivor patients with biliary atresia (BA) after Kasai portoenterostomy (KP). Methods: Twenty-four retrospectively enrolled patients were divided into two groups according to clinical and laboratory data at initial evaluation after KP (median follow-up = 9.7 years; range = 5-25 years) as with ideal (Group 1; n = 15) or non-ideal (Group 2; n = 9) medical outcome. All patients were re-evaluated for an additional period of 4 years using clinical and laboratory indices. US and MRI studies were qualitatively analyzed assessing imaging signs suggestive of chronic liver disease (CLD). Results: At re-evaluation, 6 patients (40%) of Group 1 changed their medical outcome to non-ideal (Group 1A); the other 9 patients (60%) remained stable (Group 1B); the mean time to change the medical outcome to non-ideal status at re-evaluation was 43.5 ± 2.3 months. The area under the ROC curve was 0.84 and 0.87 for US and MRI scores to predict long-term medical outcome, with the best cut-off value score > 4 for both modalities (p = 0.89). In Group 2, 6 (67%) patients showed a clinical progression (Group 2A) with a mean time of 39.8 ± 3.8 months; in the other 3 (33%) patients, no clinical progression was observed (Group 2B). Conclusion: In BA patients with ideal medical outcome after KP, US and MRI may both predict long-term outcome. US, a non-invasive and widely available technique, should be preferred.
Background: Biliary atresia (BA) is a serious liver disease with uncertain prognosis. The objective of this study was to investigate prognostic values of the >20 % decrease in serum total bilirubin (TB) at 7th day post-op regarding early outcome and 5-year survival with native liver in BA. Methods: Biliary atresia patients undergoing Kasai operation between 2000 and 2014 were reviewed. The ratio of serum TB at 7th day post-op to pre-op TB levels (TB7/TB0) was calculated for every patient. A TB7/TB0 ratio of <0.8 indicated the >20 % decrease in serum TB. At 6th month following Kasai operation, outcomes of BA patients were categorized into good outcome (TB < 2 mg % or clinically jaundice free) and poor outcome (TB > 2 mg % or clinically jaundiced). For outcome analysis, logistic regression was used. For survival analysis, Cox regression was applied. Results: There were 133 BA patients (M:F = 68:65) undergoing Kasai operation. Median age at surgery was 79 days. A TB7/TB0 ratio of <0.8 was found in 38 % of BA patients. Outcome at 6-month post-op could be evaluated in 126 patients (good: poor = 68:58). The 1-, 3- and 5-year survival rates with native livers were 85, 70 and 65 %, respectively. The median overall survival with native livers was 164 months. Median follow-up time was 87 months. Logistic regression showed that gender and age at operation were not significant factors impacting on early outcome (p > 0.05). However, a TB7/TB0 ratio of <0.8 was an independent factor for good outcome (Odds ratio = 3.0, p = 0.006). Cox regression analysis demonstrated that 5-year survival rate was significantly correlated with a TB7/TB0 ratio of <0.8 (HR = 0.46, 95 % CI 0.23-0.91, p = 0.025) and outcome at 6th month post-op (HR = 0.05, 95 % CI 0.01-0.15, p < 0.001). Conclusions: The >20 % decrease in serum TB at 7th day post-Kasai is a predictor for good outcome.
BA patients with TB7/TB0 of <0.8 had 5-year survival with native livers significantly higher than those with the ratio of >0.8. abstract_id: PUBMED:31104835 Outcomes of liver transplantation for Alagille syndrome after Kasai portoenterostomy: Alagille Syndrome with agenesis of extrahepatic bile ducts at porta hepatis. Background: Alagille syndrome (ALGS) is an autosomal dominant disorder, characterized by a paucity of intrahepatic bile ducts, resulting in significant cholestasis, and peculiar extrahepatic features. Some ALGS patients show a considerable overlap with biliary atresia (BA), and they can undergo Kasai procedure. The purpose of this study is to show the manifestations of BA overlapped ALGS cases in our institution, and to compare the outcomes of ALGS patients following liver transplantation (LT) between those who previously underwent Kasai surgery (ALGS-Kasai group) and those who did not (ALGS-non-Kasai group). Methods: Medical records of ALGS patients who underwent LT in Kyoto University Hospital, Japan from January 1992 to March 2018 were analyzed. ALGS diagnosis was determined according to physical, radiologic, and histopathological findings. Results: Thirty-one patients were ascertained (ALGS-Kasai: 4 males and 5 females vs. ALGS-non-Kasai: 14 males and 8 females, p = 0.43). Of 31 ALGS patients, 96.8% of children had pulmonary artery stenosis, 54.8% showed facial features, 29% revealed skeletal anomalies and 9.7% demonstrated ocular anomalies. The age at LT was significantly younger in ALGS-Kasai than ALGS-non-Kasai group (1.47 [interquartile range (IQR), 0.75-1.92] vs. 5.1 [IQR, 1.4-9.29] years; p = 0.038). Overall patient survival did not significantly differ between ALGS-Kasai (88.9%) and ALGS-non-Kasai patients (86.4%) (p = 0.84).
Furthermore, the 1-year, 5-year, and 10-year patient survival rates for ALGS-Kasai group were 100%, 88.9%, and 88.9%, respectively, whereas those for ALGS-non-Kasai group were 90.9%, 90.9%, and 86.4%, respectively, with p-values of 0.36, 0.90, and 0.84, respectively. Conclusions: BA overlapped ALGS cases had neonatal progressive cholestasis which prompted Kasai procedure, and early liver dysfunction after Kasai led to performing LT. The ALGS-Kasai patients undergo LT at earlier ages than the ALGS-non-Kasai patients; however, overall patients' survival rates are similar between groups. Overall ALGS patients' survival rate after LT is considered high. Levels Of Evidence: Level III; Case-control study or Retrospective comparative study. abstract_id: PUBMED:28357030 Factors associated with long-term survival after liver transplantation: A retrospective cohort study. Aim: To identify predictive factors associated with long-term patient and graft survival (> 15 years) in liver transplant recipients. Methods: Medical charts of all de novo adult liver transplant recipients (n = 140) who were transplanted in Hamburg between 1997 and 1999 were retrospectively reviewed. In total, 155 transplantations were identified in this time period (15 re-transplantations). Twenty-six orthotopic liver transplant (OLT) recipients were lost to follow-up early, due to moving to other places within 1 year after transplantation. All remaining 114 patients were included in the analysis. The following recipient factors were analysed: Age, sex, underlying liver disease, pre-OLT body mass index (BMI), and levels of alanine aminotransferase (ALT), bilirubin, creatinine and gamma-glutamyltransferase (gamma-GT), as well as warm and cold ischemia times. Furthermore, the following donor factors were assessed: Age, BMI, cold ischemia time and warm ischemia time. All surviving patients were followed until December 2014.
We divided patients into groups according to their underlying diagnosis: (1) hepatocellular carcinoma (n = 5, 4%); (2) alcohol toxic liver disease (n = 25, 22.0%); (3) primary sclerosing cholangitis (n = 6, 5%); (4) autoimmune liver diseases (n = 7, 6%); (5) hepatitis C virus cirrhosis (n = 15, 13%); (6) hepatitis B virus cirrhosis (n = 21, 19%); and (7) other (n = 35, 31%). The group "other" included rare diagnoses, such as acute liver failure, unknown liver failure, stenosis and thrombosis of the arteria hepatica, polycystic liver disease, Morbus Osler and Caroli disease. Results: The majority of patients were male (n = 70, 61%). Age and BMI at the time point of transplantation ranged from 16 years to 69 years (median: 53 years) and from 15 kg/m2 to 33 kg/m2 (median: 24), respectively. Sixty-six OLT recipients (58%) experienced a follow-up of 15 years after transplantation. Recipient's age (P = 0.009) and BMI (P = 0.029) were identified as risk factors for death by χ2-test. Kaplan-Meier analysis confirmed BMI or age above the median as predictors of decreased long-term survival (P = 0.008 and P = 0.020). Hepatitis B as underlying disease showed a trend for improved long-term survival (P = 0.049, χ2-test, P = 0.055; Kaplan-Meier analysis, Log rank). Pre-transplant bilirubin, creatinine, ALT and gamma-GT levels were not associated with survival in these patients of the pre-era of the model of end stage liver disease. Conclusion: The recipients' age and BMI were predictors of long-term survival after OLT, as well as hepatitis B as underlying disease. In contrast, donors' age and BMI were not associated with decreased survival. These findings indicate that recipient factors especially have a high impact on long-term outcome after liver transplantation. abstract_id: PUBMED:36291480 Long-Term Survival and Kidney Function in Pediatric Patients Following Liver Transplantation: A 15-Year Retrospective Cohort Study. 
Long-term preservation of kidney function after liver transplantation (LT) has not been well studied. We thus evaluated the rates of kidney function preservation and long-term survival after pediatric LT. We also investigated the risk factors associated with the progression of chronic kidney disease (CKD). We conducted a retrospective study of 184 pediatric patients who had undergone LT from 2003 to 2018 at a university hospital. We collected demographics, primary indications for LT, liver disease scores, renal function test results, immunosuppressive drug prescriptions, and diagnoses of post-LT complications. The 15-year survival rate was 90.8%. Furthermore, the rate of kidney function preservation at 14 years post-LT in patients at high risk of renal disease was 79.3%, and that in those with less risk of kidney diseases was 96.0%. Arterial hypertension was an independent risk factor associated with CKD progression. However, when arterial hypertension was excluded, the use of cyclosporine and liver disease with renal involvement were risk factors for CKD progression. We found that kidney function after pediatric LT was well preserved. We encourage the early detection of underlying kidney involvement, routine monitoring of renal function for high-risk patients, active control of hypertension, and appropriate immunosuppressive regimens for pediatric patients with LT. abstract_id: PUBMED:15213895 Predicting factor of quality of life in long-term jaundice-free survivors after the Kasai operation. Background/purpose: The aim of this study was to determine simple predictors for quality of life (QOL) in long-term jaundice-free survivors after the Kasai operation. Methods: Kasai's original portoenterostomy was performed on 55 patients with biliary atresia. Among them, the records of 35 long-term (at least 5 years) jaundice-free (bilirubin cleared to less than 1.0 mg/dL after the Kasai operation) survivors were reviewed retrospectively.
The patients were divided into 2 groups based on QOL, and the differences in clinical and laboratory data were analyzed statistically. Results: The ages at Kasai operation, the histologic fibrosis grade of the liver biopsy specimen at operation, the first onset and frequency of postoperative cholangitis, and the postoperative clearance speed of jaundice after the Kasai operation were not significantly different between the 2 groups. The aspartate aminotransferase (AST) level at 1 year was significantly correlated with the serum concentration of hyaluronic acid and was an independent predictor of QOL in long-term jaundice-free survivors of the Kasai operation. Conclusions: The serum AST level at 1 year was a simple, strong predictor of QOL and liver dysfunction in long-term jaundice-free survivors after the Kasai operation and may prove useful in planning liver transplantation. abstract_id: PUBMED:37766552 Small Intestinal Adenocarcinoma Arising at the Anastomotic Site after Kasai Operation for Biliary Atresia: A Case Report and Literature Review. Biliary atresia is an obliterative cholangiopathy of unknown etiology. Hepatic portoenterostomy, in which obliterated extrahepatic bile ducts are resected and bile flow is restored, known as the Kasai operation, is performed within 3 months after birth. While this operation enhances long-term survival of patients, the occurrence of primary malignant hepatic tumors has been increasing. We report a case of small intestinal adenocarcinoma arising at the anastomotic site after Kasai operation. A 49-year-old man, who underwent the Kasai operation for biliary atresia when he was 2 months old, experienced rapidly progressive jaundice and liver dysfunction. Deceased-donor liver transplantation was performed for liver failure. Macroscopically, there was a white-yellow tumor located at the anastomotic site of hepatic portoenterostomy of the resected liver.
Pathological examination revealed a well-differentiated adenocarcinoma with some Paneth cells in the neoplastic lesion. Immunohistochemically, the tumor cells were negative for cytokeratin 7 (CK7) but positive for cytokeratin 20 (CK20) and a homeobox domain-containing transcription factor (CDX2). Mucin expression in tumor cells was negative for mucin 1 (MUC1) and mucin 6 (MUC6) and positive for mucin 2 (MUC2) and mucin 5AC (MUC5AC). The pathological diagnosis was small intestinal adenocarcinoma originating from the jejunum. The patient was discharged 48 days after the operation. The patient had experienced no recurrence at 10 months after the operation. This is the first report of small intestinal adenocarcinoma arising at the anastomotic site after Kasai operation for biliary atresia. Special care should be taken with patients who develop acute progressive jaundice and liver dysfunction after the Kasai operation, because there is a possibility of malignancy in their native liver. Answer: Long-term survival following Kasai portoenterostomy for biliary atresia (BA) is possible, but chronic liver disease (CLD) is a common and often inevitable outcome for many patients. The Kasai operation, or hepatoportoenterostomy, is the standard surgical treatment for BA, a rare neonatal cholestatic disease that leads to progressive obliterative cholangiopathy and biliary obstruction. While the procedure can extend the survival of patients, it does not cure the underlying liver disease (PUBMED:28940041). Approximately 50% of infants with BA will require liver transplantation within the first two years of life, but the other 50% can live for years with their native liver.
However, these patients often experience the progression of cirrhosis and chronic liver disease, along with complications such as repeated cholangitis, portal hypertension, variceal bleeding, growth problems, biochemical abnormalities, and hepatic osteodystrophy, which significantly impact their quality of life (PUBMED:28940041). The oldest survivors from the Kasai operation are approaching their 60th birthday, and this cohort of adults born with BA is expanding. These individuals continue to live with impaired bile flow, increased liver fibrosis, and cirrhosis, and they face ongoing threats of cholangitis and portal hypertension (PUBMED:29110825). Long-term native liver survival (NLS) in BA patients after the Kasai procedure can be influenced by various factors. The presence of gastro-esophageal varices requiring endoscopic injection sclerotherapy has been identified as a significant factor affecting long-term NLS (PUBMED:27464487). In summary, while long-term survival following Kasai portoenterostomy is achievable, chronic liver disease remains a significant concern for these patients. The progression of liver disease and related complications can have a profound impact on the quality of life and require careful management and monitoring (PUBMED:28940041; PUBMED:29110825; PUBMED:27464487).
Instruction: Maternal bisphenol-A levels at delivery: a looming problem? Abstracts: abstract_id: PUBMED:18273031 Maternal bisphenol-A levels at delivery: a looming problem? Objective: The objective was to determine whether bisphenol-A (BPA) is found in the maternal circulation of pregnant women in the US population and is related to gestational length and birth weight. Method: Circulating levels of BPA were quantified by high performance liquid chromatography-tandem mass spectrometry at delivery in 40 southeastern Michigan mothers and correlated with gestational length and birth weight of offspring. Result: Maternal levels of unconjugated BPA ranged between 0.5 and 22.3 ng ml(-1) in southeastern Michigan mothers. There was no correlation between BPA concentrations and gestational length or birth weight of offspring. Conclusion: This is the first study to document measurable levels of BPA in maternal blood of the US population. Long-term follow-up studies of offspring are needed to validate or refute concerns over human fetal exposure to synthetic exogenous steroids. abstract_id: PUBMED:18577445 Maternal and fetal exposure to bisphenol A in Korea. Bisphenol A (BPA) is a well-known, widely used endocrine disrupter. Despite the potential risk of human exposure to BPA, little information exists concerning maternal and fetal exposure to BPA during pregnancy in Korea. This study aimed to evaluate the correlation between maternal and fetal exposure, and to determine exposure levels to BPA in Korean pregnant women and their fetuses. Maternal blood and umbilical cord blood were collected from 300 subjects, and total BPA levels were measured. Blood BPA concentrations ranged from non-detectable to 66.48 microg/L in pregnant women and from non-detectable to 8.86 microg/L in umbilical cords.
Serum BPA levels in most pregnant women were higher than in the corresponding fetal umbilical cords, and a positive correlation was found between maternal and fetal BPA concentrations (p < 0.05). abstract_id: PUBMED:11936220 Maternal-fetal disposition of bisphenol A in pregnant Sprague-Dawley rats. This study describes the maternal-fetal disposition of bisphenol A and its distribution into the placenta and amniotic fluid after iv injection (2 mg/kg) to pregnant Sprague-Dawley rats. Bisphenol A was distributed extensively to the placenta and fetus, with their respective AUC values 4.4- and 2.2-fold greater than the AUC for the maternal serum. In contrast, the distribution of bisphenol A into the amniotic fluid was low, with a mean amniotic fluid-to-maternal serum AUC ratio of 0.2. The decay curves of bisphenol A in the placenta, fetus, and amniotic fluid paralleled that of the maternal serum during the terminal elimination phase. A five-compartment open model consisting of the maternal central, maternal peripheral, placental, fetal, and amniotic fluid compartments was used to describe the disposition of bisphenol A in pregnant rats, with elimination occurring from the maternal central and fetal compartments. Based on this model, bisphenol A delivered to the placenta was transferred primarily to the fetus [kpf/(kpf + kpc + kpa) = 65.4%], with the remaining fraction transported to the maternal central (33.2%) and amniotic fluid (1.4%) compartments. Bisphenol A was eliminated from the amniotic fluid by the fetal (63.9%) and placental (36.1%) routes. On the other hand, bisphenol A was eliminated from the fetus primarily by the placental route back to the mother [kfp/(kfp + kfa + kfo) = 100%], with the amniotic route playing an insignificant role in fetal elimination. The percent contribution of fetal elimination to the total elimination in the maternal-fetal unit was 0.0% [CL_fo × AUC_fetus/(CL_co × AUC_maternal serum + CL_fo × AUC_fetus)].
The pharmacokinetic model used in this study provides insights into the routes of elimination of bisphenol A in the maternal-fetal rat upon maternal administration. abstract_id: PUBMED:37651794 The association between maternal urinary Bisphenol A levels and neurodevelopment at age 2 years in Chinese boys and girls: A prospective cohort study. The impact of maternal exposure to Bisphenol A on child cognitive development, as well as its sex dimorphism, remains uncertain. This study used data from 215 mothers and their children from a birth cohort in Shanghai. Urinary BPA was measured in spot urine samples of mothers in late pregnancy and of children at age 2 years. Cognitive development was evaluated by the Ages & Stages Questionnaires, Third Edition (ASQ-3) at age 2 years. Urinary BPA was detectable in 98.9% of mothers (geometric mean, GM: 2.6 μg/g creatinine) and 99.8% of children (GM: 3.4 μg/g creatinine). Relative to the low and medium BPA tertiles, the high tertile of maternal urinary BPA concentrations was associated with 4.8 points lower scores (95% CI: -8.3, -1.2) in the gross motor domain and 3.7 points lower scores (95% CI: -7.4, -0.1) in the problem-solving domain in girls only, with adjustment for maternal age, maternal education, pre-pregnancy BMI, passive smoking during pregnancy, parity, delivery mode, birth weight for gestational age, and child age at the ASQ-3 test. This negative association remained with additional adjustment for child urinary BPA concentrations at age 2 years. No association was observed in boys. These results suggested sex dimorphism in the associations of maternal BPA exposure with the gross motor and problem-solving domains in children at age 2 years. This study also indicated that optimal early child development should start with a healthy BPA-free "in utero" environment. abstract_id: PUBMED:31864031 Maternal bisphenol and phthalate urine concentrations and weight gain during pregnancy.
Background: Insufficient or excessive gestational weight gain is associated with increased risks of adverse birth and childhood outcomes. Increasing evidence suggests that exposure to bisphenols and phthalates may disrupt hormonal pathways and thereby influence gestational weight gain. Objective: To examine the associations of early and mid-pregnancy bisphenol and phthalate urine concentrations with gestational weight gain. Methods: In a population-based prospective cohort study among 1,213 pregnant women, we measured early and mid-pregnancy bisphenol and phthalate urine concentrations. Maternal anthropometrics before pregnancy were obtained by questionnaire and repeatedly measured at our research center during pregnancy. We used linear and logistic regressions to evaluate the associations of bisphenols and phthalates with total and period-specific gestational weight gain. Results: Higher maternal total bisphenol and bisphenol S concentrations were associated with a lower total gestational weight gain at the nominal level. Stratification by body mass index group showed that higher total bisphenols and bisphenol S were associated with lower total gestational weight gain specifically in normal weight women (respectively -509 g [95% CI -819, -198] and -398 g [95% CI -627, -169]). Each log unit increase in early pregnancy total bisphenol and bisphenol A urine concentrations was associated with lower mid- to late pregnancy gestational weight gain in the whole group (effect estimates -218 g/log unit increase [95% CI -334, -102] and -132 g/log unit increase [95% CI -231, -34], respectively). These associations were independent of mid-pregnancy compounds. Mid-pregnancy bisphenol and phthalate concentrations were not associated with gestational weight gain. Discussion: Higher maternal bisphenol urine concentrations in early pregnancy may lead to reduced gestational weight gain in the second half of pregnancy.
Further research is needed to assess the effects of maternal bisphenol and phthalate urine concentrations on placental and fetal growth and development. abstract_id: PUBMED:27573212 Quantifying bisphenol A in maternal and cord whole blood using isotope dilution liquid chromatography/tandem mass spectrometry and maternal characteristics associated with bisphenol A. Bisphenol A (BPA) is an endocrine-disrupting chemical that has been detected among the general population. Exposure levels among pregnant women and their fetuses are still largely unknown among the Japanese. We have developed a new method of measuring total BPA in whole blood samples by using isotope dilution liquid chromatography-tandem mass spectrometry (ID-LC/MS/MS). To eliminate possible contamination, we used a glass cartridge instead of a polypropylene cartridge and successfully reduced background levels. Additionally, a gap retention technique was applied to improve sensitivity. We also confirmed no external contamination by measuring free BPA in the samples. The limit of quantification (LOQ) was 0.040 ng/ml. With this method, we determined total BPA concentrations in 59 maternal blood samples at delivery and 285 cord blood samples in a prospective birth cohort study and investigated factors possibly related to total BPA levels. Total BPA levels ranged from below the LOQ to 0.419 ng/ml for maternal blood and from below the LOQ to 0.217 ng/ml for cord blood. The geometric mean was 0.051 ng/ml for maternal blood and 0.046 ng/ml for cord blood. Although no correlation was observed between maternal and fetal blood levels of total BPA, our results suggested fetal exposure to BPA. We found that younger maternal age and frequent beef and pork consumption during pregnancy were positively associated with maternal total BPA levels. We confirmed in utero exposure to BPA, which highlights the importance of further studies investigating the effects of fetal BPA exposure on health outcomes.
abstract_id: PUBMED:38077258 Placental transfer of bisphenol diglycidyl ethers (BDGEs) and its association with maternal health in a population in South of China. Despite high production and usage, little is known about exposure to bisphenol diglycidyl ethers (BDGEs) and their derivatives in pregnant women and fetuses. In this study, we determined nine BDGEs in 106 paired maternal and cord serum samples collected from e-waste dismantling sites in the south of China. Bisphenol A bis (2,3-dihydroxypropyl) glycidyl ether (BADGE·2H2O), bisphenol A (3-chloro-2-hydroxypropyl) (2,3-dihydroxypropyl) glycidyl ether (BADGE·HCl·H2O), and bisphenol F diglycidyl ether (BFDGE) were the major BDGEs, with median concentrations of 0.57, 4.07, and 1.60 ng/mL, respectively, in maternal serum, and of 3.58, 5.61, and 0.61 ng/mL, respectively, in cord serum. The transplacental transfer efficiencies (TTEs) were estimated for the BDGEs found in samples, and median values were in the range of 0.98 (BFDGE) to 5.91 (BADGE·2H2O). Our results suggested that passive diffusion plays a role in the placental transfer of BADGE·HCl·H2O and BFDGE, whereas several mechanisms contribute to the high accumulation of BADGE·2H2O in cord serum. Multiple linear regression analysis indicated significant associations between maternal serum concentrations of BDGEs and blood clinical biomarkers, especially those related to liver injuries, such as alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), and adenosine deaminase (ADA) (P < 0.05). To our knowledge, this is the first study to report the occurrence of BDGEs in paired maternal-fetal serum samples and provide new insights into prenatal and fetal exposures. The newly discovered TTEs in maternal-fetal pairs contribute to a fuller inventory of the transmission activity of pollutants in the human body, ultimately contributing to a more comprehensive risk evaluation.
abstract_id: PUBMED:35471947 Maternal Phthalate and Bisphenol Urine Concentrations during Pregnancy and Early Markers of Arterial Health in Children. Background: Fetal exposure to endocrine-disrupting chemicals such as phthalates and bisphenols might lead to fetal cardiovascular developmental adaptations and predispose individuals to cardiovascular disease in later life. Objectives: We examined the associations of maternal urinary bisphenol and phthalate concentrations in pregnancy with offspring carotid intima-media thickness and distensibility at the age of 10 y. Methods: In a population-based, prospective cohort study of 935 mother-child pairs, we measured maternal urinary phthalate and bisphenol concentrations at each trimester. Later, we measured carotid intima-media thickness and distensibility in the children at age 10 y using ultrasound. Results: Maternal urinary average or trimester-specific phthalate concentrations were not associated with child carotid intima-media thickness at age 10 y. Higher maternal average concentrations of total bisphenol, especially bisphenol A, were associated with a lower carotid intima-media thickness [differences -0.15 standard deviation score (95% CI: -0.24, -0.09) and -0.13 (95% CI: -0.22, -0.04) per interquartile range (IQR) increase in maternal urinary total bisphenol and bisphenol A concentration, respectively]. Trimester-specific analysis showed that higher maternal third-trimester total bisphenol and bisphenol A concentrations were associated with lower child carotid intima-media thickness [differences -0.13 (95% CI: -0.22, -0.04) and -0.13 (95% CI: -0.22, -0.05) per IQR increase in maternal urinary bisphenol concentration]. Maternal urinary bisphenol or phthalate concentrations were not associated with child carotid distensibility. Discussion: In this large prospective cohort, higher maternal urinary bisphenol concentrations were associated with smaller childhood carotid intima-media thickness.
Further studies are needed to replicate this association and to identify potential underlying mechanisms. https://doi.org/10.1289/EHP10293. abstract_id: PUBMED:37023811 Trimester-specific associations of maternal exposure to bisphenols with neonatal thyroid stimulating hormone levels: A birth cohort study. Background: Growing evidence suggests that exposure to bisphenol A (BPA) during pregnancy could interfere with neonatal thyroid function. Bisphenol F (BPF) and bisphenol S (BPS) are increasingly used as substitutes for BPA. However, little is known about the effects of maternal exposure to BPS and BPF on neonatal thyroid function. The current study aimed to investigate the trimester-specific associations of maternal exposure to BPA, BPS, and BPF with neonatal thyroid stimulating hormone (TSH) levels. Methods: Between November 2013 and March 2015, a total of 904 mother-newborn pairs were recruited from the Wuhan Healthy Baby Cohort Study, providing maternal urine samples in the first, second, and third trimesters for bisphenol exposure assessment, and neonatal heel prick blood samples for TSH measurement. Multiple informant models and quantile g-computation were used to evaluate the trimester-specific associations of bisphenols, individually and as a mixture, with TSH. Results: Each doubling of maternal urinary BPA concentration in the first trimester was significantly related to a 3.64% (95% CI: 0.84%, 6.51%) increment in neonatal TSH. Each doubling of BPS concentration in the first, second and third trimesters was associated with 5.81% (95% CI: 2.27%, 9.46%), 5.70% (95% CI: 1.99%, 9.55%), and 4.36% (95% CI: 0.75%, 8.11%) higher neonatal blood TSH, respectively. No significant association between trimester-specific BPF concentration and TSH was observed. The relationships between exposure to BPA/BPS and neonatal TSH were more evident in female infants.
Quantile g-computation indicated that maternal co-exposure to bisphenols in the first trimester was significantly associated with neonatal TSH levels in a non-linear fashion. Conclusion: Maternal exposure to BPA and BPS was positively associated with neonatal TSH levels. The results indicated the endocrine-disrupting effect of prenatal exposure to BPS and BPA, which should be of particular concern. abstract_id: PUBMED:31348763 Simultaneous analysis of bisphenol A fractions in maternal and fetal compartments in early second trimester of pregnancy. Background Bisphenol A (BPA) is an estrogenic, endocrine-disrupting compound widely used in industry. It is also a ubiquitous environmental pollutant. Its presence has been confirmed in human fetuses, which results from maternal exposure during pregnancy. The mechanisms behind maternal-fetal transfer, and the relationship between maternal and fetal exposures, remain unclear. The aim of this study was to assess the impact of maternal exposure to BPA on the exposure of the fetus. Methods Maternal plasma and amniotic fluid samples were collected from 52 pregnant women undergoing amniocentesis for prenatal diagnosis of chromosomal abnormalities. BPA was measured by gas chromatography-mass spectrometry (GC-MS). The permeability factor - a ratio of fetal-to-maternal BPA concentration - was used as a measure delineating the transplacental transfer of BPA. Results The median concentration of maternal plasma BPA was 8 times higher than the total BPA concentration in the amniotic fluid (8.69 ng/mL, range: 4.3 ng/mL-55.3 ng/mL vs. median 1.03 ng/mL, range: 0.3 ng/mL-10.1 ng/mL). There was no direct relationship between the levels of BPA in maternal plasma and amniotic fluid. The permeability factor, in turn, negatively correlated with fetal development (birth weight) (R = -0.54, P < 0.001).
Conclusion Our results suggest that the risk of fetal BPA exposure depends on placental BPA permeability rather than on the maternal plasma BPA concentration, and they support general recommendations to be aware of and avoid BPA-containing products. Answer: The presence of bisphenol A (BPA) in maternal blood at the time of delivery has been documented, raising concerns about potential health risks to both mothers and their offspring. Studies have detected measurable levels of BPA in maternal blood of the US population, with concentrations ranging from 0.5 to 22.3 ng ml(-1) in southeastern Michigan mothers (PUBMED:18273031). However, this particular study did not find a correlation between BPA concentrations and gestational length or birth weight of offspring. In Korea, maternal blood and umbilical cord blood were analyzed, revealing BPA concentrations ranging from non-detectable to 66.48 microg/L in pregnant women and from non-detectable to 8.86 microg/L in umbilical cords. A positive correlation was found between maternal and fetal BPA concentrations, indicating that maternal exposure to BPA is related to fetal exposure (PUBMED:18577445). Research on pregnant Sprague-Dawley rats has shown that BPA is distributed extensively to the placenta and fetus, with higher concentrations in these tissues compared to maternal serum (PUBMED:11936220). This suggests that BPA can cross the placental barrier and reach the fetus. In a study of Chinese boys and girls, high tertiles of maternal urinary BPA concentrations were associated with lower scores in gross motor and problem-solving domains in girls at age 2 years, suggesting sex-specific neurodevelopmental effects of BPA exposure (PUBMED:37651794). Another study found that higher maternal bisphenol urine concentrations in early pregnancy may lead to reduced gestational weight gain in the second half of pregnancy (PUBMED:31864031).
In Japan, total BPA levels in maternal and cord blood were quantified, and younger maternal age and frequent beef and pork consumption during pregnancy were positively associated with maternal total BPA levels (PUBMED:27573212). Additionally, maternal exposure to bisphenols has been linked to smaller childhood carotid intima-media thickness, a marker of arterial health (PUBMED:35471947). Furthermore, maternal exposure to BPA and bisphenol S (BPS) during pregnancy has been positively associated with neonatal thyroid stimulating hormone (TSH) levels, indicating potential endocrine-disrupting effects (PUBMED:37023811).
Instruction: Do women with tuberculosis have a lower likelihood of getting diagnosed? Abstracts: abstract_id: PUBMED:15135842 Do women with tuberculosis have a lower likelihood of getting diagnosed? Prevalence and case detection of sputum smear positive pulmonary TB, a population-based study from Vietnam. Objective: The aim was to estimate the gender-specific prevalence of tuberculosis (TB) through screening. A further aim was to calculate case detection within the Vietnamese National TB program. Study Design And Setting: A population-based survey of 35,832 adults was performed within an existing sociodemographic longitudinal study in Bavi district, northern Vietnam. Cases were identified by a screening question about prolonged cough and further diagnosed with sputum examination and a chest X-ray. Results: The estimated prevalence of pulmonary TB among men was 90/100,000 (95% CI 45-135/100,000) and among women 110/100,000 (95% CI 63-157/100,000). Case detection in the district was estimated at 39% (95% CI 20-76%) among men and 12% (95% CI 6-26%) among women. Conclusion: TB prevalence was similar among men and women. Case detection among men and women was significantly lower than the reported national case detection of 80%, and there was a significant underdetection of female cases. These findings warrant action and emphasize the need to perform similar studies in different contexts. abstract_id: PUBMED:31600408 The multimorbidity profile of South African women newly diagnosed with breast cancer. Multimorbidity in women with breast cancer may delay presentation and affect treatment decisions and outcomes. We described the multimorbidity profile of women with breast cancer, its determinants, and its associations with stage at diagnosis and treatments received.
We collected self-reported data on five chronic conditions (hypertension, diabetes, cerebrovascular diseases, asthma/chronic obstructive pulmonary disease, tuberculosis), determined obesity using body mass index (BMI), and tested HIV status in women newly diagnosed with breast cancer between January 2016 and April 2018 in five public hospitals in South Africa. We identified determinants of having ≥2 of the seven above-mentioned conditions (defined as multimorbidity), and examined the associations of multimorbidity with stage at diagnosis (advanced [III-IV] vs. early [0-II]) and with treatment modalities received. Among 2,281 women, 1,001 (44%) presented with multimorbidity. Obesity (52.8%), hypertension (41.3%), HIV (22.0%) and diabetes (13.7%) were the chronic conditions that occurred most frequently. Multimorbidity was more common with older age (OR = 1.02; 95% CI 1.01-1.03) and higher household socioeconomic status (HSES) (OR = 1.06; 95% CI 1.00-1.13). Multimorbidity was not associated with advanced-stage breast cancer at diagnosis, but for self-reported hypertension there was less likelihood of being diagnosed with advanced-stage disease in the adjusted model (OR 0.80; 95% CI 0.64-0.98). Multimorbidity was associated with first treatment received in those with early-stage disease, p = 0.003. The prevalence of multimorbidity is high among patients with breast cancer. Our findings suggest that multimorbidity had a significant impact on treatment received in those with early-stage disease. There is a need to understand the impact of multimorbidity on breast cancer outcomes.
This drawback is very evident in the case of small sample sizes, which are typical in meta-analysis. We alleviate the problem by exploiting the theory of higher-order asymptotics. In particular, we focus on a second-order adjustment to the log-likelihood ratio statistic. Simulation studies in meta-analysis and meta-regression show that higher-order likelihood inference provides much more accurate results than its first-order counterpart, while being of a computationally feasible form. We illustrate the application of the proposed approach on a real example. abstract_id: PUBMED:27891359 Diabetes Mellitus among Newly Diagnosed Tuberculosis Patients in Tribal Odisha: An Exploratory Study. Introduction: The association between Diabetes Mellitus (DM) and Tuberculosis (TB) poses a strong public health challenge. Tribal ethnic groups possess a different propensity towards infectious and haematological diseases, which may influence the inter-relationship of DM and TB and thus merits separate attention. Aim: To investigate the prevalence of diabetes in newly diagnosed pulmonary TB patients of tribal ethnicity in Odisha. Materials And Methods: A cross-sectional study was carried out over a period of 9 months at four designated TB microscopic centres in a tribal district (Malkangiri) of Odisha. A total of 110 tribal adults newly diagnosed with pulmonary TB were examined for Fasting Blood Sugar (FBS) level. Diagnoses of DM and Impaired Fasting Glucose (IFG) were based on the cut-off values of FBS recommended by the WHO. Data were entered and analysed using SPSS version 22.0. Results: The prevalence of diabetes and IFG were found to be 13.9% and 8.9%, respectively. A significant difference (p < 0.05) was observed between the mean ages of the TB-only (45.9 years) and TB-DM co-morbidity patients (53.8 years). No significant association was found between gender and diabetes. Clinical characteristics of TB were similar in TB and TB-DM co-morbidity patients.
Conclusion: The prevalence of high FBS was found to be higher in newly diagnosed pulmonary TB patients of tribal ethnicity thus indicating the need for intensified bidirectional screening. Further studies should be undertaken towards the risk profiling of diabetes and other lifestyle diseases in this population. abstract_id: PUBMED:38045863 Evaluation of treatment outcomes among adult patients diagnosed with tuberculosis in Ghana: A 10 year retrospective review. Objectives: The study determined tuberculosis (TB) treatment outcomes in Southern Ghana from 2012 to 2021. Methods: A retrospective analysis of service data on TB cases was conducted. Treatment success was defined as TB cure or completion of treatment course, whereas unsuccessful outcomes was defined as death, failure of treatment and lost to follow up. Bivariate and multivariate logistic regression was used to determine factors associated with treatment outcomes. Results: A total of 4106 adult TB cases were reported with a median age of 41 (interquartile range 32-52) years. Of these, 93.1% (n = 3823) were newly diagnosed. The treatment success rate declined from 71.0% in 2012 to 55.7% in 2021 (ktau-b = -0.56, P = 0.0318). Clinically diagnosed TB and extra-pulmonary TB had 7.0% (adjusted prevalence ratio [aPR]: 0.93, 95% confidence interval [CI]: 0.88-1.00) and 24.0% (aPR: 0.76, 95% CI: 0.69-0.84) respectively, less successful treatment outcome compared to pulmonary TB patients. HIV negative status was associated with 22% higher successful treatment outcome compared with being HIV positive (aPR: 1.22, 95% CI: 1.12-1.33). Conclusion: Tuberculosis treatment success rate declined over the period. There is a need for the TB Control Programme to review the national and sub-national TB data to ascertain poor performing TB treatment sites to identify and address context specific challenges with treatment interventions and system inadequacies to improve treatment success rates. 
abstract_id: PUBMED:28571185 Determination and Validation of a Lower Cut Off Value of Cerebrospinal Fluid Adenosine Deaminase (CSF-ADA) Activity in Diagnosis of Tuberculous Meningitis. Introduction: Tuberculous meningitis is an infection of the meninges caused by Mycobacterium tuberculosis. It is one of the most common infectious diseases of the Central Nervous System (CNS) and a major health problem in developing countries like India. If there is delay in diagnosis and initiation of specific treatment, it causes significant morbidity and mortality. CSF-ADA 10 U/l is the standard cut off value that is used for differentiation between Tuberculous (TBM) and Non-Tuberculous Meningitis (non-TBM). Aim: To determine and validate a lower cut off value for CSF-ADA for diagnosing TBM in an Indian setting. Materials And Methods: This was a prospective study involving 85 cases of meningitis whose CSF were analysed and ADA estimated using an enzymatic deamination assay kit. Diagnosis of various types of meningitis was made based on specified diagnostic criteria. The comparison of the mean CSF-ADA activity of the two types of meningitis was done using a two-sample t-test. A p-value of <0.05 was considered significant. The cut off value to differentiate between TBM and non-TBM meningitis was determined using ROC curve analysis. Results: The CSF-ADA activity of TBM and non-TBM cases was compared. The mean CSF-ADA activity was found to be significantly higher in TBM patients (10.97±4.43; Mean±SD) than in non-TBM patients (5.09±1.53), with a p-value of <0.001. A cut off value of 6.65 was calculated using the ROC curve for the diagnosis of TBM, which gave a sensitivity of 85.3% and a specificity of 84.3% for differentiating TBM from non-TBM. The positive predictive value was 78.3% and negative predictive value of 89.5% using the above cut off.
The positive likelihood ratio was 5.44 and negative likelihood ratio of 0.17 when this lower cut off value was applied. Conclusion: This study has demonstrated that CSF-ADA can be used as an important diagnostic tool in early diagnosis of TBM using a cut off value of 6.65. This cut off value gave a good sensitivity and specificity in differentiating it from non-TBM. abstract_id: PUBMED:30559870 The Levels of Hepcidin and Erythropoietin in Pregnant Women with Anemia of Various Geneses. Aim: The purpose of the present research was to study the content of erythropoietin and hepcidin in serum in pregnant women with iron deficiency anaemia and anaemia of chronic inflammation. Methods: The authors examined 98 pregnant women who were observed in LLP (Regional obstetric-gynaecological centre) in Karaganda. The inclusion criterion for pregnant women was informed consent to participate in the study. Exclusion criteria were oncological diseases, HIV-infection, tuberculosis, severe somatic pathology, mental illness, drug addiction. The design of the study was in accordance with the legislation of the Republic of Kazakhstan, international ethical norms and normative documents of research organisations, approved by the ethics committee of the Karaganda State Medical University. Results: As a result of the study, it was determined that the content of erythropoietin and hepcidin in pregnant women with anemias of different genesis varies ambiguously. In the main group of pregnant women with IDA, the erythropoietin content rises, and the hepcidin level decreases. In pregnant women with ACI, on the contrary, the level of hepcidin increases, and in one subgroup it is significant. However, among pregnant women with both IDA and anaemia of chronic inflammation, there is a subgroup of women in whom erythropoietin is either comparable with hepcidin, or their changes are of opposite nature.
Conclusion: The authors concluded that the obtained data indicate ambiguous changes in the level of erythropoietin and hepcidin in pregnant women with anaemias of various origins. In all likelihood, there are still unaccounted factors affecting the content of these protein-regulators of iron metabolism, which require further definition and interpretation in anaemia of pregnant women. abstract_id: PUBMED:37333854 A nomogram for predicting mortality of patients initially diagnosed with primary pulmonary tuberculosis in Hunan province, China: a retrospective study. Objective: According to the Global Tuberculosis Report for three consecutive years, tuberculosis (TB) is the second leading infectious killer. Primary pulmonary tuberculosis (PTB) leads to the highest mortality among TB diseases. Regretfully, no previous studies targeted the PTB of a specific type or in a specific course, so models established in previous studies cannot be accurately feasible for clinical treatments. This study aimed to construct a nomogram prognostic model to quickly recognize death-related risk factors in patients initially diagnosed with PTB to intervene and treat high-risk patients as early as possible in the clinic to reduce mortality. Methods: We retrospectively analyzed the clinical data of 1,809 in-hospital patients initially diagnosed with primary PTB at Hunan Chest Hospital from January 1, 2019, to December 31, 2019. Binary logistic regression analysis was used to identify the risk factors. A nomogram prognostic model for mortality prediction was constructed using R software and was validated using a validation set. Results: Univariate and multivariate logistic regression analyses revealed that drinking, hepatitis B virus (HBV), body mass index (BMI), age, albumin (ALB), and hemoglobin (Hb) were six independent predictors of death in in-hospital patients initially diagnosed with primary PTB. 
Based on these predictors, a nomogram prognostic model was established with high prediction accuracy, of which the area under the curve (AUC) was 0.881 (95% confidence interval [CI]: 0.777-0.847), the sensitivity was 84.7%, and the specificity was 77.7%. Internal and external validations confirmed that the constructed model fit the real situation well. Conclusion: The constructed nomogram prognostic model can recognize risk factors and accurately predict the mortality of patients initially diagnosed with primary PTB. This is expected to guide early clinical intervention and treatment for high-risk patients. abstract_id: PUBMED:32295893 Long Noncoding RNA and Predictive Model To Improve Diagnosis of Clinically Diagnosed Pulmonary Tuberculosis. Clinically diagnosed pulmonary tuberculosis (PTB) patients lack microbiological evidence of Mycobacterium tuberculosis, and misdiagnosis or delayed diagnosis often occurs as a consequence. We investigated the potential of long noncoding RNAs (lncRNAs) and corresponding predictive models to diagnose these patients. We enrolled 1,764 subjects, including clinically diagnosed PTB patients, microbiologically confirmed PTB cases, non-TB disease controls, and healthy controls, in three cohorts (screening, selection, and validation). Candidate lncRNAs differentially expressed in blood samples of the PTB and healthy control groups were identified by microarray and reverse transcription-quantitative PCR (qRT-PCR) in the screening cohort. Logistic regression models were developed using lncRNAs and/or electronic health records (EHRs) from clinically diagnosed PTB patients and non-TB disease controls in the selection cohort. These models were evaluated by area under the concentration-time curve (AUC) and decision curve analyses, and the optimal model was presented as a Web-based nomogram, which was evaluated in the validation cohort. Three differentially expressed lncRNAs (ENST00000497872, n333737, and n335265) were identified.
The optimal model (i.e., nomogram) incorporated these three lncRNAs and six EHRs (age, hemoglobin, weight loss, low-grade fever, calcification detected by computed tomography [CT calcification], and interferon gamma release assay for tuberculosis [TB-IGRA]). The nomogram showed an AUC of 0.89, a sensitivity of 0.86, and a specificity of 0.82 in differentiating clinically diagnosed PTB cases from non-TB disease controls of the validation cohort, which demonstrated better discrimination and clinical net benefit than the EHR model. The nomogram also had a discriminative power (AUC, 0.90; sensitivity, 0.85; specificity, 0.81) in identifying microbiologically confirmed PTB patients. lncRNAs and the user-friendly nomogram could facilitate the early identification of PTB cases among suspected patients with negative M. tuberculosis microbiological evidence. abstract_id: PUBMED:8841651 Maximum likelihood estimation of the kappa coefficient from bivariate logistic regression. We propose a maximum likelihood estimator (MLE) of the kappa coefficient from a 2 x 2 table when the binary ratings depend on patient and/or clinician effects. We achieve this by expressing the logit of the probability of positive rating as a linear function of the subject-specific and the rater-specific covariates. We investigate the bias and variance of the MLE in small and moderate size samples through Monte Carlo simulation and we provide the sample size calculation to detect departure from the null hypothesis H0: kappa = kappa 0 in the direction of H1: kappa > kappa 0. Answer: Yes, women with tuberculosis (TB) have a lower likelihood of getting diagnosed compared to men. A population-based study from Vietnam found that while the prevalence of pulmonary TB was similar among men and women, case detection in the district was significantly lower for women at 12% (95% CI 6-26%) compared to 39% (95% CI 20-76%) for men (PUBMED:15135842).
This significant underdetection of female cases suggests that women are less likely to be diagnosed with TB, highlighting a gender disparity in TB diagnosis.
Instruction: Do medical students in the operating room affect patient care? Abstracts: abstract_id: PUBMED:34876370 Operating Room Staff Perceptions of Medical Students. Objective: While the operating room (OR) is a keystone experience, medical students often report negative experiences and intimidation in the OR. The purpose of this study is to describe the perceived role of medical students in the OR by registered nurses and certified surgical technologists to improve medical student education. Design: A cross-sectional survey of select Massachusetts General Hospital perioperative staff in surgery was conducted through an anonymous survey in March 2021. Statistical analysis included inductive thematic analysis, descriptive statistics, and Mann-Whitney U tests, with a p-value of <0.05. The survey instrument characterized the perception of medical student preparedness for OR-related tasks and free-text responses on the role of medical students in the OR and opportunities for improvement. Setting: The study was conducted at the Massachusetts General Hospital, Boston, MA. Participants: Participants included Massachusetts General Hospital perioperative staff in the Department of Surgery, with 262 registered nurses and 90 certified surgical technologists receiving the survey. Results: There were 86 completed responses (24.4% response rate). A total of 71.23% of respondents believe medical students should be observational learners in the OR. Areas for improvement include OR etiquette (37.5%), awareness of the surgical field (26.79%), and scrubbing skills (26.79%). A total of 48.8% of staff agreed they enjoy working with medical students, followed by 20.9% who somewhat agree. A total of 27.91% of respondents somewhat agreed that medical students were knowledgeable about OR sterility and scrubbing procedures, followed by 25.58% who somewhat disagreed. Conclusions: Operating room staff enjoy working with medical students.
The majority of staff believe medical students' role in the OR is that of observational learning. Areas of improvement for medical students include OR etiquette, awareness of the surgical field, scrubbing and gowning skills, and assisting staff whenever needed. Possible solutions include incorporating nursing staff as surgery clerkship stakeholders to optimize medical student experiences in the OR. abstract_id: PUBMED:30800853 Orientation to the Operating Room: An Introduction to the Surgery Clerkship for Third-Year Medical Students. Introduction: The operating room is a complex environment in which individual team members perform specific tasks according to their role. A simulation activity was created to introduce medical students on the surgery clerkship to issues relating to patient safety, infection control, and regulatory requirements. Methods: This activity takes place prior to general surgery rotation operative experiences, and addresses the need for students to practice roles they will perform while participating in patient care. The activity includes a simulated operation, an assessment, and a scripted debriefing. Among other tasks, students practice safe patient transfer and monitoring, donning sterile garb, preparing the surgical site, and being active participants in a sign-in and time-out. Students are assessed on assigned tasks, their ability to maintain sterility, and the degree to which they engage with their team. Results: Students reported the simulation helped them better understand how they could become involved on their first day in the operating room. Students also reported they were more confident when in the operating room. This finding also extended to students who had previously been in the operating room during a prior OB/GYN rotation. Discussion: Patient safety is paramount when in the operating room, and this simulation activity fills a current gap in student's practical knowledge as they prepare to enter their surgery clerkship. 
Giving medical students the information and skills needed to be safe and effective members of the operating team prior to entering the operating room is of benefit to the surgical team, students, and patients. abstract_id: PUBMED:24931415 Do medical students in the operating room affect patient care? An analysis of one institution's experience over the past five years. Background: Medical students are active learners in operating rooms during medical school. This observational study seeks to investigate the effect of medical students on operative time and complications. Methods: Data from the American College of Surgeons National Surgical Quality Improvement Program was linked to operative records for nonemergent, inpatient general surgery cases at our institution from 1 January 2009 to 1 January 2013. Cases were grouped into 13 distinct procedure groups. Hospital records provided information on the presence of medical students. Demographics, comorbidities, intraoperative variables, and postoperative complications were analyzed. Results: Overall, 2481 cases were included. Controlling for wound class, procedure group, and surgeon, medical students were associated with an additional 14 minutes of operative time. No association between medical students and postoperative complications was observed. Conclusions: The educational benefits gained by the presence of medical students do not appear to jeopardize the quality of patient care. abstract_id: PUBMED:18707660 Operating room assist: surgical mentorship and operating room experience for preclerkship medical students. Background: Despite the importance of preclerkship experiences, surgical education has essentially remained confined to the third-year operating room experience. 
According to experience-based learning theory, the acquisition of new clinical knowledge is a dynamic process of social enculturation and professional identity development that requires active participation, clinical applicability, and direct interaction with doctors and other members of the medical team. In conjunction with a previously described surgical skills elective, we created a new clinical elective in which preclerkship medical students were assigned a surgical mentor and invited into the operating room to assist in surgeries. Methods: The elective paired 36 first-year students with 24 surgeons and instructed students to participate in at least 2 surgeries over the 3-month elective period. Students, surgeons, and operating room nurses filled out questionnaires after each surgery. Results: Although 6 students failed to enter the operating room, 30 students scrubbed and gowned for a total of 62 procedures during the elective period. Although most students reported the operating room to be a comfortable learning environment in which they were actively included, students consistently underrated their performance and contribution to the surgical team compared with the surgeons' and nurses' ratings. With 75% of students who reported using a previously learned surgical skill during each surgery, this elective succeeded in allowing preclerkship medical students the opportunity to participate actively in the operating room. Conclusions: Early surgical exposure is critical for attracting student interest in careers in surgery. We believe that these early clinical experiences, combined with strong mentorship from the surgical faculty, will eventually lead to greater success during clerkships and greater interest in surgery as a career. abstract_id: PUBMED:37838384 Safety in Health Care: The Impact of Operating Room Design. 
The science of operating room design has grown over the past 20 years due to the realization that the physical environment influences health care provider performance and patient outcomes. Medical errors occur when the normal workflow in an operating room is disrupted as providers must overcome sub-optimal conditions. All aspects of the physical environment can impact operating room flow. Studying the layout, contents, ergonomics, and environmental parameters of the operating can lead improved work conditions resulting improved patient and provider safety. At the forefront of operating room design science is the use of simulation and the evaluation of new technologies. abstract_id: PUBMED:36966016 Elucidating academic surgical staff perspectives of medical student learning in the operating room: An embedded mixed methods study. Background: The operating room (OR) is a complex environment for medical students. Little is known about the OR staff's perception of medical students. Methods: We utilized an embedded mixed methods design to characterize surgical staff perceptions of students at an academic institution. We surveyed 408 OR nursing/technician staff with 16 follow-up interviews. Results: 139 respondents. 91.3% reported having daily-to-weekly interactions with medical students. Yet, only 37.9% agreed that "patient care is better when medical students are part of the team." 25.2% felt confident that they knew what a student's education entails outside the OR. 93.5% agreed that interprofessional training between physicians and OR staff should be included in educational programs. 54% agreed that their responsibilities include medical student training in the OR setting. Conclusions: Despite an overall desire for teamwork, this study highlights a lack of knowledge of each others' roles. To improve OR culture and team dynamics, concerted efforts need to be made around interprofessional training. 
abstract_id: PUBMED:27183943 Medical errors and patient safety in the operating room. Objective: To investigate medical errors in the operating room, attitudes of healthcare professionals in case of errors and educational needs of professionals. Methods: The descriptive study was conducted at a university hospital in Turkey from January 25 to February 14, 2011, and comprised operating room staff, including physicians, nurses, anaesthesia technicians and perfusion technicians. Data was obtained using a questionnaire. Results: Of the 69 respondents, 45(65.2%) had experienced medical errors and 29(42%) had verbally warned the person who caused the error. The main cause of the medical errors was a lack of healthcare professionals and their inadequate qualifications, 51(73.9%); and insufficient communication, 41(59.4%). Coping with stress and communication 45(65.2%) and radiation safety 28(40.6%) were the most common educational needs. Conclusions: Patient safety applications in the operating room can be improved by offering educational programmes, designing an easy reporting system, encouraging reporting of medical errors and active participation of healthcare professionals in decisions that might affect patient safety. abstract_id: PUBMED:30541224 The effect of an empathy role-play program for operating room nursing students. Purpose: The purpose of this research was to investigate the effect of a role-playing training program for empathetic communication with patients on empathy scores of operating room nursing students. Methods: This clinical trial was carried out on 77 operating room nursing students from the first to the fourth years studied in the School of Nursing and Midwifery at Isfahan University of Medical Sciences in the academic year 2017-2018. The intervention administered on the experimental group included a 12-hour training program with the theme of expressing empathy to patients using a role-playing technique. 
The Jefferson Scale of Empathy-Health Profession Students' Version was completed before, immediately after, and one month after the intervention by the samples. Comparison analysis was done among the three stages. Results: Comparing the total mean empathy scores before intervention in the control group and the experimental one did not reveal a significant difference (P=0.50); however, the total mean empathy scores in the experimental group, immediately after and one month after the intervention, were higher than those in the control group (P<0.001). Conclusion: Empathy training through a role-playing technique was effective in improving the empathy scores of operating room nursing students, and it also highlighted the fact that empathy could be promoted by education. Making changes in the educational curriculum of operating room nursing students was indispensable in order to make them familiar with the concept of empathy in the operating room. abstract_id: PUBMED:36803863 Medical student preparation for the operating room. Background: This study examines student perceptions of preparedness for the operating room (OR), resources used, and time spent in preparation. Methods: Third-year medical and second-year physician assistant students across two campuses at a single academic institution were surveyed to assess perceptions of preparedness, time spent in preparation, resources used, and perceived benefits of preparation. Results: 95 responses (response rate 49%) were received. Students reported being most prepared to discuss operative indications and contraindications (73%), anatomy (86%), and complications (70%), but few felt prepared to discuss operative steps (31%). Students spent a mean of 28 min preparing per case, citing UpToDate and online videos as the most used resources (74%; 73%).
On secondary analysis, only the use of an anatomic atlas was weakly correlated with improved preparedness to discuss relevant anatomy (p = 0.005); time spent, number of resources or other specific resources were not associated with increased preparedness. Conclusion: Students felt prepared for the OR, though there is room for improvement and a need for student-oriented preparatory materials. Understanding the deficits in preparation, preference for technology-based resources, and time constraints of current students can be used to inform optimisation for medical student education and resources to prepare for operating room cases. abstract_id: PUBMED:26645529 How Does TeamSTEPPS Affect Operating Room Efficiency? Objective: To evaluate the effect of TeamSTEPPS (Team Strategies and Tools to Enhance Performance and Patient Safety) on operating room efficiency for the otolaryngology service at a tertiary care medical center. Study Design: Retrospective database review. Setting: Otolaryngology department at tertiary care medical center. Subjects And Methods: To assess the impact of implementing an evidence-based patient safety initiative, TeamSTEPPS, on operating room efficiency in the otolaryngology department, the operative times, time lost to delayed starts, and turnover times during the year following the implementation of TeamSTEPPS were compared with the values from the prior year. Results: The study compared 1322 cases and 644 turnovers in the year prior to TeamSTEPPS implementation with 1609 cases and 769 turnovers in the following year. There were no statistically significant decreases in operating room efficiency in the year after the TeamSTEPPS rollout. Conclusion: Operating room efficiency was preserved after the rollout of a rigorous evidence-based patient safety initiative that requires active participation from all operating room team members. 
Answer: The presence of medical students in the operating room (OR) does not appear to jeopardize the quality of patient care. A study analyzing data from the American College of Surgeons National Surgical Quality Improvement Program linked to operative records at a particular institution found that while medical students were associated with an additional 14 minutes of operative time, there was no association between medical students and postoperative complications (PUBMED:24931415). This suggests that the educational benefits gained by the presence of medical students in the OR do not come at the expense of patient safety or the quality of care provided.
Instruction: Speech perception abilities of adults with dyslexia: is there any evidence for a true deficit? Abstracts: abstract_id: PUBMED:19635940 Speech perception abilities of adults with dyslexia: is there any evidence for a true deficit? Purpose: This study investigated whether adults with dyslexia show evidence of a consistent speech perception deficit by testing phoneme categorization and word perception in noise. Method: Seventeen adults with dyslexia and 20 average readers underwent a test battery including standardized reading, language and phonological awareness tests, and tests of speech perception. Categorization of a pea/bee voicing contrast was evaluated using adaptive identification and discrimination tasks, presented in quiet and in noise, and a fixed-step discrimination task. Two further tests of word perception in noise were presented. Results: There were no significant group differences for categorization in quiet or noise, across- and within-category discrimination as measured adaptively, or word perception, but average readers showed better across- and within-category discrimination in the fixed-step discrimination task. Individuals did not show consistent poor performance across related tasks. Conclusions: The small number of group differences, and lack of consistent poor individual performance, suggests weak support for a speech perception deficit in dyslexia. It seems likely that at least some poor performances are attributable to nonsensory factors like attention. It may also be that some individuals with dyslexia have speech perceptual acuity that is at the lower end of the normal range and exacerbated by nonsensory factors. abstract_id: PUBMED:36054844 Speech Perception Variability in Childhood Apraxia of Speech: Implications for Assessment and Intervention. 
Purpose: Beyond hallmark production deficits characterizing childhood apraxia of speech (CAS), largely attributed to disruption(s) in speech motor planning, children with CAS often present with co-occurring speech perception and language difficulties. Thus, careful consideration of the potential for speech perception difficulties to have cascading downstream effects on intervention responsiveness and real-life functioning for some children with CAS is highly important. The purpose of this tutorial was to consider the impact of speech perception abilities in children with CAS, which carries implications for caring for the needs of the whole child. Method: This tutorial summarizes the current literature on speech perception and how it relates to speech production, language, and reading abilities for children with CAS. We include case illustrations that are adapted from real clinical scenarios illustrating how speech perception difficulties may impact some children with CAS and provide recommendations for incorporating speech perception into assessment and intervention practices. Results: Although speech perception difficulties do not seem to be a core deficit of CAS, they are strongly linked to language difficulties, which are highly prevalent among children with CAS. Speech perception and language difficulties are also associated with reading difficulties and risk for lower academic achievement. Conclusions: Children with CAS who have co-occurring language difficulties likely also demonstrate speech perception deficits, which puts them at heightened risk for reading difficulties and struggles with academic achievement. Comprehensive assessment of children with CAS should address speech perception and production, language, and reading abilities, which carries important implications for multifaceted approaches to intervention. 
abstract_id: PUBMED:12093108 Perception of speech and nonspeech stimuli by children with and without reading disability and attention deficit hyperactivity disorder. The auditory temporal deficit hypothesis predicts that children with reading disability (RD) will exhibit deficits in the perception of speech and nonspeech acoustic stimuli in discrimination and temporal ordering tasks when the interstimulus interval (ISI) is short. Initial studies testing this hypothesis did not account for the potential presence of attention deficit hyperactivity disorder (ADHD). Temporal order judgment and discrimination tasks were administered to children with (1) RD/no-ADHD (n=38), (2) ADHD (n=29), (3) RD and ADHD (RD/ADHD; n=32), and (4) no impairment (NI; n=43). Contrary to predictions, children with RD showed no specific sensitivity to ISI and performed worse relative to children without RD on speech but not nonspeech tasks. Relationships between perceptual tasks and phonological processing measures were stronger and more consistent for speech than nonspeech stimuli. These results were independent of the presence of ADHD and suggest that children with RD have a deficit in phoneme perception that correlates with reading and phonological processing ability. abstract_id: PUBMED:30844835 Speech Envelope Enhancement Instantaneously Effaces Atypical Speech Perception in Dyslexia. Objectives: Increasing evidence exists that poor speech perception abilities precede the phonological deficits typically observed in dyslexia, a developmental disorder in learning to read. Impaired processing of dynamic features of speech, such as slow amplitude fluctuations and transient acoustic cues, disrupts effortless tracking of the speech envelope and constrains the development of adequate phonological skills. In this study, a speech envelope enhancement (EE) strategy was implemented to reduce speech perception deficits by students with dyslexia.
The EE emphasizes onset cues and reinforces the temporal structure of the speech envelope specifically. Design: Speech perception was assessed in 42 students with and without dyslexia using a sentence repetition task in a speech-weighted background noise. Both natural and vocoded speech were used to assess the contribution of the temporal envelope on the speech perception deficit. Their envelope-enhanced counterparts were added to each baseline condition to administer the effect of the EE algorithm. In addition to speech-in-noise perception, general cognitive abilities were assessed. Results: Results demonstrated that students with dyslexia not only benefit from EE but benefit more from it than typical readers. Hence, EE completely normalized speech reception thresholds for students with dyslexia under adverse listening conditions. In addition, a correlation between speech perception deficits and phonological processing was found for students with dyslexia, further supporting the relation between speech perception abilities and reading skills. Similar results and relations were found for conditions with natural and vocoded speech, providing evidence that speech perception deficits in dyslexia stem from difficulties in processing the temporal envelope. Conclusions: Using speech EE, speech perception skills in students with dyslexia were improved passively and instantaneously, without requiring any explicit learning. In addition, the observed positive relationship between speech processing and advanced phonological skills opens new avenues for specific intervention strategies that directly target the potential core deficit in dyslexia. abstract_id: PUBMED:28001275 Speech-language pathology findings in Attention Deficit Hyperactivity Disorder: a systematic literature review. 
Purpose: To systematically review the scientific production on the relationship between Attention Deficit Hyperactivity Disorder (ADHD) and Speech-language Pathology and to methodologically analyze the observational studies on the theme. Research Strategies: Systematic review of the literature conducted in the databases Medical Literature Analysis and Retrieval System online (MEDLINE, USA), Literature of Latin America and the Caribbean Health Sciences (LILACS, Brazil) and Spanish Bibliographic Index of Health Sciences (IBECS, Spain) using the descriptors: "Language", "Language Development", "Attention Deficit Hyperactivity Disorder", "ADHD" and "Auditory Perception". Selection Criteria: Articles published between 2008 and 2013. Inclusion criteria: full articles published in national and international journals from 2008 to 2013. Exclusion criteria: articles not focused on the speech-language pathology alterations present in attention deficit hyperactivity disorder. Data Analysis: The articles were read in full and the data were extracted for characterization of methodology and content. Results: The 23 articles found were separated according to two themes: Speech-language Pathology and Attention Deficit Hyperactivity Disorder. Conclusion: The study of the scientific production revealed that the alterations most commonly discussed were reading disorders and that there are few reports on the relationship between auditory processing and these disorders, as well as on the role of the speech-language pathologist in the evaluation and treatment of children with Attention Deficit Hyperactivity Disorder. abstract_id: PUBMED:21930615 Investigating speech perception in children with dyslexia: is there evidence of a consistent deficit in individuals? Purpose: The claim that speech perception abilities are impaired in dyslexia was investigated in a group of 62 children with dyslexia and 51 average readers matched in age.
Method: To test whether there was robust evidence of speech perception deficits in children with dyslexia, speech perception in noise and quiet was measured using 8 different tasks involving the identification and discrimination of a complex and highly natural synthetic "bee"-"pea" contrast (copy synthesized from natural models) and the perception of naturally produced words. Results: Children with dyslexia, on average, performed more poorly than did average readers in the synthetic syllables identification task in quiet and in across-category discrimination (but not when tested using an adaptive procedure). They did not differ from average readers on 2 tasks of word recognition in noise or identification of synthetic syllables in noise. For all tasks, a majority of individual children with dyslexia performed within norms. Finally, speech perception generally did not correlate with pseudoword reading or phonological processing--the core skills related to dyslexia. Conclusions: On the tasks and speech stimuli that the authors used, most children with dyslexia did not appear to show a consistent deficit in speech perception. abstract_id: PUBMED:15010233 The fragile nature of the speech-perception deficit in dyslexia: natural vs synthetic speech. A number of studies reported that developmental dyslexics are impaired in speech perception, especially for speech signals consisting of rapid auditory transitions. These studies mostly made use of a categorical-perception task with synthetic-speech samples. In this study, we show that deficits in the perception of synthetic speech do not generalise to the perception of more naturally sounding speech, even if the same experimental paradigm is used. This contrasts with the assumption that dyslexics are impaired in the perception of rapid auditory transitions. abstract_id: PUBMED:22522205 Allophonic mode of speech perception in Dutch children at risk for dyslexia: a longitudinal study. 
There is ample evidence that individuals with dyslexia have a phonological deficit. A growing body of research also suggests that individuals with dyslexia have problems with categorical perception, as evidenced by weaker discrimination of between-category differences and better discrimination of within-category differences compared to average readers. Whether the categorical perception problems of individuals with dyslexia are a result of their reading problems or a cause has yet to be determined. Whether the observed perception deficit relates to a more general auditory deficit or is specific to speech also has yet to be determined. To shed more light on these issues, the categorical perception abilities of children at risk for dyslexia and chronological age controls were investigated before and after the onset of formal reading instruction in a longitudinal study. Both identification and discrimination data were collected using identical paradigms for speech and non-speech stimuli. Results showed the children at risk for dyslexia to shift from an allophonic mode of perception in kindergarten to a phonemic mode of perception in first grade, while the control group showed a phonemic mode already in kindergarten. The children at risk for dyslexia thus showed an allophonic perception deficit in kindergarten, which was later suppressed by phonemic perception as a result of formal reading instruction in first grade; allophonic perception in kindergarten can thus be treated as a clinical marker for the possibility of later reading problems. abstract_id: PUBMED:24905286 Functional correlates of the speech-in-noise perception impairment in dyslexia: an MRI study. Dyslexia is a language-based neurodevelopmental disorder. It is characterized as a persistent deficit in reading and spelling. These difficulties have been shown to result from an underlying impairment of the phonological component of language, possibly also affecting speech perception. 
Although there is little evidence for such a deficit under optimal, quiet listening conditions, speech perception difficulties in adults with dyslexia are often reported under more challenging conditions, such as when speech is masked by noise. Previous studies have shown that these difficulties are more pronounced when the background noise is speech and when little spatial information is available to facilitate differentiation between target and background sound sources. In this study, we investigated the neuroimaging correlates of speech-in-speech perception in typical readers and participants with dyslexia, focusing on the effects of different listening configurations. Fourteen adults with dyslexia and 14 matched typical readers performed a subjective intelligibility rating test with single words presented against concurrent speech during functional magnetic resonance imaging (fMRI) scanning. Target words were always presented with a four-talker background in one of three listening configurations: Dichotic, Binaural or Monaural. The results showed that in the Monaural configuration, in which no spatial information was available and energetic masking was maximal, intelligibility was severely decreased in all participants, and this effect was particularly strong in participants with dyslexia. Functional imaging revealed that in this configuration, participants partially compensate for their poorer listening abilities by recruiting several areas in the cerebral networks engaged in speech perception. In the Binaural configuration, participants with dyslexia achieved the same performance level as typical readers, suggesting that they were able to use spatial information when available. This result was, however, associated with increased activation in the right superior temporal gyrus, suggesting the need to reallocate neural resources to overcome speech-in-speech difficulties. 
Taken together, these results provide further understanding of the speech-in-speech perception deficit observed in dyslexia. abstract_id: PUBMED:29352968 Children with dyslexia show a reduced processing benefit from bimodal speech information compared to their typically developing peers. During information processing, individuals benefit from bimodally presented input, as has been demonstrated for speech perception (i.e., printed letters and speech sounds) or the perception of emotional expressions (i.e., facial expression and voice tuning). While typically developing individuals show this bimodal benefit, school children with dyslexia do not. Currently, it is unknown whether the bimodal processing deficit in dyslexia also occurs for visual-auditory speech processing that is independent of reading and spelling acquisition (i.e., no letter-sound knowledge is required). Here, we tested school children with and without spelling problems on their bimodal perception of video-recorded mouth movements pronouncing syllables. We analyzed the event-related potential Mismatch Response (MMR) to visual-auditory speech information and compared this response to the MMR to monomodal speech information (i.e., auditory-only, visual-only). We found a reduced MMR with later onset to visual-auditory speech information in children with spelling problems compared to children without spelling problems. Moreover, when comparing bimodal and monomodal speech perception, we found that children without spelling problems showed significantly larger responses in the visual-auditory experiment compared to the visual-only response, whereas children with spelling problems did not. Our results suggest that children with dyslexia exhibit general difficulties in bimodal speech perception independently of letter-speech sound knowledge, as apparent in altered bimodal speech perception and lacking benefit from bimodal information. 
This general deficit in children with dyslexia may underlie the previously reported reduced bimodal benefit for letter-speech sound combinations and similar findings in emotion perception. Answer: The evidence for a true deficit in speech perception abilities among adults with dyslexia is mixed and not entirely conclusive. Some studies suggest that there may be weak support for a speech perception deficit in dyslexia. For instance, a study found no significant group differences between adults with dyslexia and average readers in categorization in quiet or noise, across- and within-category discrimination as measured adaptively, or word perception. However, average readers showed better discrimination in a fixed-step discrimination task. The lack of consistent poor individual performance across related tasks suggests that some poor performances could be attributable to nonsensory factors like attention, and that some individuals with dyslexia may have speech perceptual acuity at the lower end of the normal range, potentially exacerbated by nonsensory factors (PUBMED:19635940). In contrast, another study demonstrated that speech envelope enhancement (EE) could instantaneously improve speech perception skills in students with dyslexia, suggesting that they may indeed have deficits in processing the temporal envelope of speech. This study found that students with dyslexia not only benefited from EE but benefited more from it than typical readers, which normalized their speech reception thresholds under adverse listening conditions. This supports the idea that speech perception deficits in dyslexia stem from difficulties in processing the temporal envelope (PUBMED:30844835). 
Other research has indicated that children with dyslexia may show a reduced processing benefit from bimodal speech information compared to their typically developing peers, suggesting general difficulties in bimodal speech perception that are independent of letter-speech sound knowledge (PUBMED:29352968). Overall, while some individuals with dyslexia may not show a consistent deficit in speech perception, there is evidence that speech perception difficulties, particularly under challenging listening conditions or when processing dynamic features of speech, may be present in this population. These difficulties could potentially be related to the phonological deficits observed in dyslexia and may have implications for reading skills and academic achievement (PUBMED:36054844, PUBMED:12093108, PUBMED:21930615, PUBMED:22522205, PUBMED:24905286).
Instruction: Trends in educational inequalities in cause specific mortality in Norway from 1960 to 2010: a turning point for educational inequalities in cause specific mortality of Norwegian men after the millennium? Abstracts: abstract_id: PUBMED:25418052 Trends in educational inequalities in cause specific mortality in Norway from 1960 to 2010: a turning point for educational inequalities in cause specific mortality of Norwegian men after the millennium? Background: Educational inequalities in total mortality in Norway have widened during 1960-2000. We wanted to investigate if inequalities have continued to increase in the post millennium decade, and which causes of deaths were the main drivers. Methods: All deaths (total and cause specific) in the adult Norwegian population aged 45-74 years over five decades, until 2010 were included; in all 708,449 deaths and over 62 million person years. Two indices of inequalities were used to measure inequality and changes in inequalities over time, on the relative scale (Relative Index of Inequality, RII) and on the absolute scale (Slope Index of Inequality, SII). Results: Relative inequalities in total mortality increased over the five decades in both genders. Among men absolute inequalities stabilized during 2000-2010, after steady, significant increases each decade back to the 1960s, while in women, absolute inequalities continued to increase significantly during the last decade. The stabilization in absolute inequalities among men in the last decade was mostly due to a fall in inequalities in cardiovascular disease (CVD) mortality and lung cancer and respiratory disease mortality. Still, in this last decade, the absolute inequalities in cause-specific mortality among men were mostly due to cardiovascular diseases (CVD) (34% of total mortality inequality), lung cancer and respiratory diseases (21%). 
Among women the absolute inequalities in mortality were mostly due to lung cancer and chronic lower respiratory tract diseases (30%) and CVD (27%). Conclusions: In men, absolute inequalities in mortality have stopped increasing, seemingly due to reduction in inequalities in CVD mortality. Absolute inequality in mortality continues to widen among women, mostly due to death from lung cancer and chronic lung disease. Relative educational inequalities in mortality are still on the rise for Norwegian men and women. abstract_id: PUBMED:30200912 What's the difference? A gender perspective on understanding educational inequalities in all-cause and cause-specific mortality. Background: Material and behavioural factors play an important role in explaining educational inequalities in mortality, but gender differences in these contributions have received little attention thus far. We examined the contribution of a range of possible mediators to relative educational inequalities in mortality for men and women separately. Methods: Baseline data (1991) of men and women aged 25 to 74 years participating in the prospective Dutch GLOBE study were linked to almost 23 years of mortality follow-up from Dutch registry data (6099 men and 6935 women). Cox proportional hazard models were used to calculate hazard ratios with 95% confidence intervals, and to investigate the contribution of material (financial difficulties, housing tenure, health insurance), employment-related (type of employment, occupational class of the breadwinner), behavioural (alcohol consumption, smoking, leisure and sports physical activity, body mass index) and family-related factors (marital status, living arrangement, number of children) to educational inequalities in all-cause and cause-specific mortality, i.e. mortality from cancer, cardiovascular disease, other diseases and external causes. Results: Educational gradients in mortality were found for both men and women. 
All factors together explained 62% of educational inequalities in mortality for lowest educated men, and 71% for lowest educated women. Yet, type of employment contributed substantially more to the explanation of educational inequalities in all-cause mortality for men (29%) than for women (-7%), whereas the breadwinner's occupational class contributed more for women (41%) than for men (7%). Material factors and employment-related factors contributed more to inequalities in mortality from cardiovascular disease for men than for women, but they explained more of the inequalities in cancer mortality for women than for men. Conclusions: Gender differences in the contribution of employment-related factors to the explanation of educational inequalities in all-cause mortality were found, but not of material, behavioural or family-related factors. A full understanding of educational inequalities in mortality benefits from a gender perspective, particularly when considering employment-related factors. abstract_id: PUBMED:20179132 Educational inequalities in mortality over four decades in Norway: prospective study of middle aged men and women followed for cause specific mortality, 1960-2000. Objectives: To determine the extent to which educational inequalities in relation to mortality widened in Norway during 1960-2000 and which causes of death were the main drivers of this disparity. Design: Nationally representative prospective study. Setting: Four cohorts of the Norwegian population aged 45-64 years in 1960, 1970, 1980, and 1990 and followed up for mortality over 10 years. Participants: 359 547 deaths and 32 904 589 person years. Main Outcome Measures: All cause mortality and deaths due to cancer of lung, trachea, or bronchus; other cancer; cardiovascular diseases; suicide; external causes; chronic lower respiratory tract diseases; or other causes.
Absolute and relative indices of inequality were used to present differences in mortality by educational level (basic, secondary, and tertiary). Results: Mortality fell from the 1960s to the 1990s in all educational groups. At the same time the proportion of adults in the basic education group, with the highest mortality, decreased substantially. As mortality dropped more among those with the highest level of education, inequalities widened. Absolute inequalities in mortality denoting deaths among the basic education groups minus deaths among the high education groups doubled in men and increased by a third in women. This is equivalent to an increase in the slope index of inequality of 105% in men and 32% in women. Inequalities on a relative scale widened more, from 1.33 to 2.24 among men (P=0.01) and from 1.52 to 2.19 among women (P=0.05). Among men, absolute inequalities mainly increased as a result of cardiovascular diseases, lung cancer, and chronic lower respiratory tract diseases. Among women this was mainly due to lung cancer and chronic lower respiratory tract diseases. Unlike the situation in men, absolute inequalities in deaths due to cardiovascular causes narrowed among women. Chronic lower respiratory tract diseases contributed more to the disparities in inequalities among women than among men. Conclusion: All educational groups showed a decline in mortality. Nevertheless, and despite the fact that the Norwegian welfare model is based on an egalitarian ideology, educational inequalities in mortality among middle aged people in Norway are substantial and increased during 1960-2000. abstract_id: PUBMED:27942745 Long-term trends of inequalities in mortality in 6 European countries. Objectives: We aimed to assess whether trends in inequalities in mortality during the period 1970-2010 differed between Finland, Norway, England and Wales, France, Italy (Turin) and Hungary. 
Methods: Total and cause-specific mortality data by educational level and, if available, occupational class were collected and harmonized. Both relative and absolute measures of inequality in mortality were calculated. Results: In all countries except Hungary, all-cause mortality declined strongly over time in all socioeconomic groups. Relative inequalities in all-cause mortality generally increased, but more so in Hungary and Norway than elsewhere. Absolute inequalities often narrowed, but went up in Hungary and Norway. As a result of these trends, Hungary (where inequalities in mortality were almost absent in the 1970s) and Norway (where inequalities in the 1970s were among the smallest of the six countries in this study) now have larger inequalities in mortality than the other four countries. Conclusions: While some countries have experienced dramatic setbacks, others have made substantial progress in reducing inequalities in mortality. abstract_id: PUBMED:27885048 Trends in educational inequalities in premature mortality in Belgium between the 1990s and the 2000s: the contribution of specific causes of deaths. Background: Reducing socioeconomic inequalities in mortality, a key public health objective, may be supported by a careful monitoring and assessment of the contributions of specific causes of death to the global inequality. Methods: The 1991 and 2001 Belgian censuses were linked with cause-of-death data, each yielding a study population of over 5 million individuals aged 25-64, followed up for 5 years. Age-standardised mortality rates (ASMR) were computed by educational level (EL) and cause. Inequalities were measured through rate differences (RDs), rate ratios (RRs) and population attributable fractions (PAFs). We analysed changes in educational inequalities between the 1990s and the 2000s, and decomposed the PAF into the main causes of death. Results: All-cause and avoidable ASMR decreased in all ELs and both sexes.
Lung cancer, ischaemic heart disease (IHD), chronic obstructive pulmonary disease (COPD) and suicide in men, and IHD, stroke, lung cancer and COPD in women had the highest impact on population mortality. RDs decreased in men but increased in women. RRs and PAFs increased in both sexes, albeit more in women. In men, the impact of lung cancer and COPD inequalities on population mortality decreased while that of suicide and IHD increased. In women, the impact of all causes except IHD increased. Conclusion: Absolute inequalities decreased in men while increasing in women; relative inequalities increased in both sexes. The PAFs decomposition revealed that targeting mortality inequalities from lung cancer, IHD, COPD in both sexes, suicide in men and stroke in women would have the largest impact at population level. abstract_id: PUBMED:25492898 Trends in inequalities in premature cancer mortality by educational level in Colombia, 1998-2007. Background: There is a paucity of studies on socioeconomic inequalities in cancer mortality in developing countries. We examined trends in inequalities in cancer mortality by educational attainment in Colombia during a period of epidemiological transition and rapid expansion of health insurance coverage. Methods: Population mortality data (1998-2007) were linked to census data to obtain age-standardised cancer mortality rates by educational attainment at ages 25-64 years for stomach, cervical, prostate, lung, colorectal, breast and other cancers. We used Poisson regression to model mortality by educational attainment and estimated the contribution of specific cancers to the slope index of inequality in cancer mortality. 
Results: We observed large educational inequalities in cancer mortality, particularly for cancer of the cervix (rate ratio (RR) primary vs tertiary groups=5.75, contributing 51% of cancer inequalities), stomach (RR=2.56 for males, contributing 49% of total cancer inequalities and RR=1.98 for females, contributing 14% to total cancer inequalities) and lung (RR=1.64 for males contributing 17% of total cancer inequalities and 1.32 for females contributing 5% to total cancer inequalities). Total cancer mortality rates declined faster among those with higher education, with the exception of mortality from cervical cancer, which declined more rapidly in the lower educational groups. Conclusions: There are large socioeconomic inequalities in preventable cancer mortality in Colombia, which underscore the need for intensifying prevention efforts. Reduction of cervical cancer can be achieved through reducing human papilloma virus infection, early detection and improved access to treatment of preneoplastic lesions. Reinforcing antitobacco measures may be particularly important to curb inequalities in cancer mortality. abstract_id: PUBMED:29046785 Educational inequalities in premature mortality by region in the Belgian population in the 2000s. Background: In Belgium, socio-economic inequalities in mortality have long been described at country-level. As Belgium is a federal state with many responsibilities in health policies being transferred to the regional levels, regional breakdown of health indicators is becoming increasingly relevant for policy-makers, as a tool for planning and evaluation. We analyzed the educational disparities by region for all-cause and cause-specific premature mortality in the Belgian population. Methods: Residents with Belgian nationality at birth registered in the census 2001 aged 25-64 were included, and followed up for 10 years through a linkage with the cause-of-death database.
The role of 3 socio-economic variables (education, employment and housing) in explaining the regional mortality difference was explored through a Poisson regression. Age-standardised mortality rates (ASMRs) by educational level (EL), rate differences (RD), rate ratios (RR), and population attributable fractions (PAF) were computed in the 3 regions of Belgium and compared with pairwise regional ratios. The global PAFs were also decomposed into the main causes of death. Results: Regional health gaps are observed within each EL, with ASMRs in Brussels and Wallonia exceeding those of Flanders by about 50% in males and 40% in females among Belgians. Individual SE variables only explained up to half of the regional differences. Educational inequalities were also larger in Brussels and Wallonia than in Flanders, with RD ratios reaching 1.8 and 1.6 for Brussels versus Flanders, and Wallonia versus Flanders respectively; regional ratios in relative inequalities (RRs and PAFs) were smaller. This pattern was observed for all-cause and most specific causes of premature mortality. Ranking the cause-specific PAFs revealed a higher health impact of inequalities in causes combining high mortality rate and relative inequality, with lung cancer and ischemic heart disease on top for all regions and both sexes. The ranking showed few regional differences. Conclusions: For the first time in Belgium, educational inequalities were studied by region. Among Belgians, educational inequalities were higher in Brussels, followed by Wallonia and Flanders. The region-specific PAF decomposition, leading to a ranking of causes according to their population-level impact on overall inequality, is useful for regional policy-making processes.
Background: Gout is the most common inflammatory arthritis with a rising prevalence around the globe. While educational inequalities in incidence and prevalence of gout have been reported, no previous study investigated educational inequality in mortality among people with gout. The aim of this study was to assess absolute and relative educational inequalities in all-cause and cause-specific mortality among people with gout in comparison with an age- and sex-matched cohort free of gout in southern Sweden. Methods: We identified all residents aged ≥30 years of Skåne region with doctor-diagnosed gout (ICD-10 code M10, n = 24,877) during 1998-2013 and up to 4 randomly selected age- and sex-matched comparators free of gout (reference cohort, n = 99,504). These were followed until death, emigration, or end of 2014. We used additive hazards models and Cox regression adjusted for age, sex, marital status, and country of birth to estimate slope and relative indices of inequality (SII/RII). Three cause-of-death attribution approaches were considered for RII estimation: "underlying cause", "any mention", and "weighted multiple-cause". Results: Gout patients with the lowest education had 1547 (95% CI: 1001, 2092) more deaths per 100,000 person-years compared with those with the highest education. These absolute inequalities were larger than in the reference population (1255, 95% CI: 1038, 1472). While the contribution of cardiovascular (cancer) mortality to these absolute inequalities was greater (smaller) in men with gout than those without, the opposite was seen among women. Relative inequality in all-cause mortality was smaller in gout (RII 1.29 [1.18, 1.41]) than in the reference population (1.46 [1.38, 1.53]). The weighted multiple-cause approach generally led to larger RIIs than the underlying cause approach. Conclusions: Our register-based matched cohort study showed that low level of education was associated with increased mortality among gout patients. 
Although the magnitude of relative inequality was smaller in people with gout compared with those without, the absolute inequalities were greater reflecting a major mortality burden among those with lower education. abstract_id: PUBMED:28679369 Evolution of educational inequalities in site-specific cancer mortality among Belgian men between the 1990s and 2000s using a "fundamental cause" perspective. Background: According to the "fundamental cause" theory, emerging knowledge on health-enhancing behaviours and technologies results in health disparities. This study aims to assess (trends in) educational inequalities in site-specific cancer mortality in Belgian men in the 1990s and the 2000s using this framework. Methods: Data were derived from record linkage between the Belgian censuses of 1991 and 2001 and register data on mortality. The study population comprised all Belgian men aged 50-79 years during follow-up. Both absolute and relative inequality measures have been calculated. Results: Despite an overall downward trend in cancer mortality, educational differences are observed for the majority of cancer sites in the 2000s. Generally, inequalities are largest for mortality from preventable cancers. Trends over time in inequalities are rather stable compared with the 1990s. Conclusions: Educational differences in site-specific cancer mortality persist in the 2000s in Belgium, mainly for cancers related to behavioural change and medical interventions. Policy efforts focussing on behavioural change and healthcare utilization remain crucial in order to tackle these increasing inequalities. abstract_id: PUBMED:34999874 Education-related inequalities in cause-specific mortality: first estimates for Australia using individual-level linked census and mortality data. Background: Socioeconomic inequalities in mortality are evident in all high-income countries, and ongoing monitoring is recommended using linked census-mortality data. 
Using such data, we provide the first estimates of education-related inequalities in cause-specific mortality in Australia, suitable for international comparisons. Methods: We used Australian Census (2016) linked to 13 months of Death Registrations (2016-17). We estimated relative rates (RR) and rate differences (RD, per 100 000 person-years), comparing rates in low (no qualifications) and intermediate (secondary school) with high (tertiary) education for individual causes of death (among those aged 25-84 years) and grouped according to preventability (25-74 years), separately by sex and age group, adjusting for age, using negative binomial regression. Results: Among 13.9 M people contributing 14 452 732 person-years, 84 743 deaths occurred. All-cause mortality rates among men and women aged 25-84 years with low education were 2.76 [95% confidence interval (CI): 2.61-2.91] and 2.13 (2.01-2.26) times the rates of those with high education, respectively. We observed inequalities in most causes of death in each age-sex group. Among men aged 25-44 years, relative and absolute inequalities were largest for injuries, e.g. transport accidents [RR = 10.1 (5.4-18.7), RD = 21.2 (14.5-27.9)]. Among those aged 45-64 years, inequalities were greatest for chronic diseases, e.g. lung cancer [men RR = 6.6 (4.9-8.9), RD = 57.7 (49.7-65.8)] and ischaemic heart disease [women RR = 5.8 (3.7-9.1), RD = 20.2 (15.8-24.6)], with similar patterns for people aged 65-84 years. When grouped according to preventability, inequalities were large for causes amenable to behaviour change and medical intervention for all ages and causes amenable to injury prevention among young men. Conclusions: Australian education-related inequalities in mortality are substantial, generally higher than international estimates, and related to preventability. Findings highlight opportunities to reduce them and the potential to improve the health of the population.
Answer: The trends in educational inequalities in cause-specific mortality in Norway from 1960 to 2010 indicate that there was a turning point for educational inequalities in cause-specific mortality of Norwegian men after the millennium. Relative inequalities in total mortality increased over the five decades for both genders. However, among men, absolute inequalities stabilized during 2000-2010 after significant increases in each preceding decade back to the 1960s. This stabilization was mostly due to a reduction in inequalities in cardiovascular disease (CVD) mortality and lung cancer and respiratory disease mortality. In the last decade, the absolute inequalities in cause-specific mortality among men were mostly due to CVD (34% of total mortality inequality) and lung cancer and respiratory diseases (21%). In contrast, among women, absolute inequalities in mortality continued to increase significantly during the last decade, mostly due to lung cancer and chronic lower respiratory tract diseases (30%) and CVD (27%). Thus, while absolute inequalities in mortality have stopped increasing among men, seemingly due to a reduction in inequalities in CVD mortality, they continue to widen among women, primarily due to death from lung cancer and chronic lung disease. Relative educational inequalities in mortality are still on the rise for both Norwegian men and women (PUBMED:25418052). This turning point in Norway contrasts with trends observed in other European countries during similar periods. For instance, in Belgium, absolute inequalities in mortality decreased among men but increased among women in the 1990s and 2000s, with relative inequalities increasing in both sexes (PUBMED:27885048). In Hungary and Norway, relative inequalities in all-cause mortality increased more than in other countries, and while absolute inequalities often narrowed elsewhere, they went up in these two countries (PUBMED:27942745).
Overall, these findings suggest that while there has been some progress in reducing educational inequalities in mortality among Norwegian men, particularly in CVD mortality, the challenge persists, and inequalities continue to increase on a relative scale. Additionally, the situation among women is worsening, indicating a need for targeted interventions to address the specific causes contributing to the widening gap.
Instruction: Does moxifloxacin alter oxidant status in the cornea? Abstracts: abstract_id: PUBMED:24964171 Does moxifloxacin alter oxidant status in the cornea? An experimental study. Objective: In this experimental study, we investigated the possible effects of intracameral moxifloxacin on oxidative stress parameters and endothelial cell morphology in corneal tissue. Methods: In total, 30 rats were randomly assigned to three groups of 10 rats: the sham group (Group 1, n = 10); the control group (Group 2), where balanced salt solution (BSS) was administered at a dose of 0.01 cc (n = 10); and the treatment group (Group 3), where moxifloxacin was administered at a dose of 0.05 mg/0.01 cc (n = 10). Total antioxidant status (TAS) and total oxidant status (TOS) in corneal tissue and blood samples were measured and the oxidative stress index (OSI) was calculated. Also, corneal tissue histopathology was evaluated with caspase-3 and caspase-8 staining. Apoptotic activity was also evaluated. Results: In blood samples, TAS, TOS, and OSI levels were not statistically significantly different (all p > 0.05). Compared with the sham and control groups, TOS and OSI levels in cornea tissue were significantly different in the moxifloxacin group (all p < 0.05). However, compared with the control group, no statistically significant difference was found in the sham group (all p > 0.05). Compared with the sham and control groups, apoptotic activity was higher in the moxifloxacin group, in both immunohistochemical staining for caspase-3 and caspase-8. Conclusions: Intracameral moxifloxacin injection seems to be safe systemically, but it may have toxic effects on corneal tissues, as suggested by oxidative stress parameters and a histopathological evaluation. abstract_id: PUBMED:27138688 In vivo 3D measurement of moxifloxacin and gatifloxacin distributions in the mouse cornea using multiphoton microscopy.
Moxifloxacin and gatifloxacin are fourth-generation fluoroquinolone antibiotics used in the clinic to prevent or treat ocular infections. Their pharmacokinetics in the cornea is usually measured from extracted ocular fluids or tissues, and in vivo direct measurement is difficult. In this study multiphoton microscopy (MPM), which is a 3D optical microscopic technique based on multiphoton fluorescence, was applied to the measurement of moxifloxacin and gatifloxacin distribution in the cornea. Intrinsic multiphoton fluorescence properties of moxifloxacin and gatifloxacin were characterized, and their distributions in mouse cornea in vivo were measured by 3D MPM imaging. Both moxifloxacin and gatifloxacin had similar multiphoton spectra, while moxifloxacin had stronger fluorescence than gatifloxacin. MPM imaging of mouse cornea in vivo showed (1) moxifloxacin had good penetration through the superficial corneal epithelium, while gatifloxacin had relatively poor penetration, (2) both ophthalmic solutions had high intracellular distribution. In vivo MPM results were consistent with previous studies. This study demonstrates the feasibility of MPM as a method for in vivo direct measurement of moxifloxacin and gatifloxacin in the cornea. abstract_id: PUBMED:35413026 Case Series: Unusual Presentation of Acanthamoeba Coinfection in the Cornea. Significance: The cases illustrate Acanthamoeba coinfection with Pseudomonas aeruginosa or microsporidia in the cornea. Purpose: This case series aimed to alert clinicians toward considering Acanthamoeba coinfection in the cornea when unusual presentation such as perineuritis or epitheliitis was observed in clinical images. Increased suspicion of Acanthamoeba coinfection may facilitate early diagnosis and prompt management, eventually leading to good vision outcomes. Case Series: An 11-year-old boy wearing orthokeratology lens for myopia control complained of pain in the right eye for 1 week. 
A paracentral corneal ulcer with perineuritis was observed. Culture from corneal tissue revealed P. aeruginosa, and an in vivo confocal microscopic examination showed highly reflective and oval-shaped structures indicating Acanthamoeba coinfection. Corneal lesions gradually improved under 0.02% polyhexamethylene biguanidine, 0.1% propamidine isethionate, and 0.3% ciprofloxacin. At 1 year, the final best-corrected visual acuity was 20/25 with residual paracentral corneal opacity. Another 20-year-old man complained of pain in the right eye for 2 weeks. Multiple raised corneal lesions associated with epitheliitis were found. Moreover, 1% acid-fast staining showed oval-shaped spores, and microsporidia infection was inferred. In addition, polymerase chain reaction results obtained after subjecting the patient to corneal debridement revealed positivity for Acanthamoeba. Polyhexamethylene biguanidine (0.02%) and 0.5% moxifloxacin were prescribed, and the lesions subsided. At a 2-year follow-up, the final best-corrected visual acuity was 20/25. Conclusions: Perineuritis in orthokeratology lens wearers and epitheliitis without any predisposing factor are unusual presentations of Acanthamoeba coinfection in the cornea. These corneal findings should arouse the suspicion of coinfection and enable the clinicians to conduct the appropriate workup and initiate adequate treatment. This case series demonstrated that early diagnosis and prompt treatment can improve visual prognosis. abstract_id: PUBMED:27631351 Two-Photon Microscopy of the Mouse Peripheral Cornea Ex Vivo. Purpose: To investigate the 3-dimensional (3D) cell and extracellular matrix (ECM) structure of mouse peripheral corneas in normal and corneal neovascularization tissues using 2-photon microscopy (TPM) based on both intrinsic and extrinsic moxifloxacin contrasts. Methods: Peripheral corneas in freshly enucleated mouse eyes were imaged by TPM based on both intrinsic and extrinsic contrasts.
Intrinsic autofluorescence and second harmonic generation were used to image cells and ECM collagen, respectively. Moxifloxacin ophthalmic solution was applied to image cells. The peripheral cornea, limbus, and sclera were imaged in 3D. In addition to normal mice, mouse models of suture-induced corneal neovascularization were imaged to visualize changes in the microstructure. Results: Complex 3D cell and ECM structures in the cornea, limbus, and sclera were visualized by TPM. TPM images based on intrinsic contrasts visualized both cell and ECM structures, and TPM images based on moxifloxacin visualized cell structures with enhanced contrast. On the limbus side of the mouse peripheral cornea, TPM images visualized the vasculature in the limbus, the trabecular meshwork/Schlemm canal, iris, and ciliary body. On the scleral side, TPM images visualized cell and ECM structures in the sclera and multiple cell layers below the sclera. TPM images of the peripheral cornea in the corneal neovascularization condition visualized the extension of vasculature from the limbus to the cornea. Conclusions: TPM imaging based on both intrinsic and external moxifloxacin contrasts visualized detailed 3D cell and ECM microstructures in the mouse peripheral cornea. TPM based on moxifloxacin might be advantageous for studying cell structures by enhancing image contrast. abstract_id: PUBMED:28678917 Penetration of 0.3% ciprofloxacin, 0.3% ofloxacin, and 0.5% moxifloxacin into the cornea and aqueous humor of enucleated human eyes. We aimed to quantify the penetration of ciprofloxacin, ofloxacin, and moxifloxacin into the cornea and aqueous humor of cadaver eyes. A total of 60 enucleated eyes, not eligible for corneal transplantation, were divided into three groups and immersed in commercial solutions of 0.3% ciprofloxacin, 0.3% ofloxacin, or 0.5% moxifloxacin for 10 min. 
Whole corneas and samples of aqueous humor were then harvested and frozen, and drug concentrations analyzed by liquid chromatography tandem mass spectrometry. The mean corneal concentration of moxifloxacin was twice as high as ofloxacin, and the latter was twice as high as ciprofloxacin. The mean concentration of moxifloxacin in the aqueous humor was four times higher than the other antibiotics, and the mean concentrations of ciprofloxacin and ofloxacin were statistically similar. The amount of drug that penetrated the anterior chamber after a 10-min immersion was far below the safe limit of endothelial toxicity of each preparation. Moxifloxacin demonstrated far superior penetration into the cornea and anterior chamber of cadaver eyes compared to ciprofloxacin and ofloxacin. One should not expect endothelial toxicity with the commercial eye drops of ciprofloxacin, ofloxacin, and moxifloxacin that reach the anterior chamber through the cornea. abstract_id: PUBMED:29134554 Ex vivo rabbit cornea diffusion studies with a soluble insert of moxifloxacin. The objective of this research was to develop and evaluate an ocular insert for the controlled drug delivery of moxifloxacin which could perhaps be used in the treatment of corneal keratitis or even bacterial endophthalmitis. We have evaluated the ex vivo ocular diffusion of moxifloxacin through rabbit cornea, both fresh and preserved under different conditions. Histological studies were also carried out. Subsequently, drug matrix inserts were prepared using bioadhesive polymers. The inserts were evaluated for their physicochemical parameters. Ophthalmic ex vivo permeation of moxifloxacin was carried out with the most promising insert. The formulate insert was thin and provided higher ocular diffusion than commercial formulations. Ocular diffusion studies revealed significant differences between fresh and frozen corneas. 
Histological examinations also showed differences in the thickness of stroma between fresh and frozen corneas. The ophthalmic insert we have developed allows a larger quantity of moxifloxacin to permeate through the cornea than existing commercial formulations of the drug. Ocular delivery of moxifloxacin with this insert could be a new approach for the treatment of eye diseases. abstract_id: PUBMED:21058936 Effect of fluoroquinolones on the expression of matrix metalloproteinase in debrided cornea of rats. Matrix metalloproteinases (MMPs) are implicated in regenerative and healing processes in corneal injuries. Based upon reports that topical fluoroquinolones (FQs) may cause perforations during corneal healing by modulating MMPs, this study evaluated the comparative effects of commercially available FQs eye drops on the expression of MMP-2 and MMP-9 in the cornea after ethanol injury. Uniform corneal epithelial defects were created using 70% ethanol in the right eye of the rats (n = 6). The groups studied were (I) sham, (II) normal saline with benzalkonium chloride (NS-BKC), (III) norfloxacin 0.3%, (IV) ciprofloxacin 0.3%, (V) lomefloxacin 0.3%, (VI) sparfloxacin 0.3%, (VII) gatifloxacin 0.3%, and (VIII) moxifloxacin 0.5%. Each treatment was instilled six times/day up to 48 h and rats were sacrificed using excess of anesthesia. The corneas were excised to study the expression of MMP-2 and MMP-9 using gelatin zymography and real-time PCR. All the FQs significantly increased the expression of MMP-2 and MMP-9 as compared to the sham and NS-BKC-treated group. NS-BKC did not show a significant effect on MMPs expression compared to the sham group. Among the studied FQs, ciprofloxacin was observed to exhibit maximal induction of MMP-2 and MMP-9, whereas lomefloxacin exhibited an equivocal effect on both MMP-2 and MMP-9 expression. 
Findings of the present study demonstrate that topical application of FQs may induce the expression of MMP-2 and MMP-9 in debrided corneal epithelium and, therefore, may delay corneal wound healing. Thus, it can be concluded that selecting an FQ for ophthalmic use with minimal effect on MMPs may benefit wound healing in an injured or vulnerable cornea. abstract_id: PUBMED:17022866 The effect of moxifloxacin on the normal human cornea. Objective: To investigate the effects of moxifloxacin on the cornea of normal human eyes using confocal microscopy and slit-lamp biomicroscopy. Methods: This study enrolled adult volunteers who had a normal baseline ophthalmic examination. The dose regimen, similar to that of patients undergoing cataract extraction, was one drop of moxifloxacin in one eye four times a day for 3 days. The untreated fellow eye served as the control. Subjects had a baseline examination (Visit 1), started moxifloxacin the next day, and were examined 24 h (Visit 2) and 72 h (Visit 3) after starting medication. At each visit, visual acuity and adverse effects were recorded, slit-lamp examination with fluorescein was used to measure tear break-up time, and endothelial and epithelial cell counts were determined using confocal microscopy. Results: Fifteen volunteers (mean age 37 ± 7 years) enrolled. No significant difference in visual acuity, tear break-up time, endothelial or epithelial cell counts was noted between the treated and fellow eye. Subjects experienced no significant decrease in visual acuity, tear break-up time, or endothelial cell counts during the 3-day treatment period in either eye. Epithelial cell counts were stable at Visits 1 and 2, and decreased similarly in the treated and control eye at Visit 3. Conclusions: Moxifloxacin was safe for use during the 3-day treatment period.
Moxifloxacin causes no significant epithelial or endothelial toxicity, and has no effect on visual acuity or ocular surface integrity in healthy subjects treated using a dosing regimen that simulated prophylactic use following cataract surgery. abstract_id: PUBMED:27670369 Simultaneous quantification of gatifloxacin, moxifloxacin, and besifloxacin concentrations in cornea and aqueous humor by LC-QTOF/MS after topical ocular dosing. The fourth-generation fluoroquinolones are widely used as ophthalmic antimicrobials. This study aimed to validate a new analytical technique for simultaneous quantification of gatifloxacin, moxifloxacin, and besifloxacin concentrations in the cornea and aqueous humor by liquid chromatography (LC) coupled to quadrupole time-of-flight mass spectrometry (QTOF/MS) at 10min and 1h after instillation of topical ophthalmic antimicrobial suspensions. It was used twenty-two male dogs without ocular lesions verified by ophthalmic and histologic examinations. Methanol:water (4:1) was used for the extraction procedure for cornea and acetonitrile:water (4:1) was used for aqueous humor. The chromatographic separations were carried out on a C18 column with a linear gradient of water and methanol, both containing 0.1% formic acid. The total chromatographic run time was 4min. Mass spectrometry analyses were performed on a Xevo™ G2-S QTof tandem mass spectrometer, operated in a positive ion electrospray ionization (ESI+) mode. The retention times were approximately 1.42min for gatifloxacin, 1.87min for moxifloxacin, and 3.01min for besifloxacin. No interference peak was detected for the three tested antimicrobials in samples obtained from both cornea and aqueous humor, ensuring that the peak response was exclusive to the analyte of interest. The limit of detection for the three antimicrobials was 0.11μg/mL and the limit of quantification was 0.42μg/mL for both cornea and aqueous humor samples. 
At both time points post instillation of the three antimicrobials, moxifloxacin had the highest corneal concentration and besifloxacin demonstrated the highest concentration in the aqueous humor. abstract_id: PUBMED:16371785 Intraocular concentrations of gatifloxacin and moxifloxacin in the anterior chamber via diffusion through the cornea using collagen shields. Purpose: To evaluate the penetration of gatifloxacin and moxifloxacin into the anterior chamber, and any adverse reaction to the cornea, using collagen shields presoaked in oversaturated solutions of the antibiotics. Methods: Collagen shields, presoaked for 10 minutes in an oversaturated solution of gatifloxacin or moxifloxacin, were placed on the surface of each of the corneas of 15 rabbits for a total of 30 eyes (15 in each group). The antibiotics were prepared by dissolving the powder form of the antibiotics in a solution until no further particulate could be further dissolved. Aqueous humor samples were taken 3.5 and 6 hours later. Results: The initial concentrations of gatifloxacin and moxifloxacin were 5.43 +/- 0.16 mg/mL and 3.14 +/- 0.22 mg/mL, respectively. The average concentration of gatifloxacin in the anterior chamber was higher than that of moxifloxacin at the 3.5-hour sample (6.32 +/- 2.67 microg/mL versus 3.55 +/- 3.5 microg/mL, P = 0.0034). The concentrations of both antibiotics, although decreased, remained relatively high at the 6-hour sample (1.39 +/- 1.13 microg/mL versus 0.816 +/- 0.6 microg/mL at 6 hours, respectively, P = 0.22). No obvious clinical or histologic signs of toxicity were noticed in either group. Conclusion: Gatifloxacin and moxifloxacin showed good penetration into the anterior chamber with no obvious adverse reaction to the cornea. The concentrations in the anterior chamber exceeded the minimal inhibitory concentration (MIC) 90 of most organisms responsible for postoperative endophthalmitis (POE). Answer: Yes, moxifloxacin does alter oxidant status in the cornea. 
An experimental study investigated the possible effects of intracameral moxifloxacin on oxidative stress parameters and endothelial cell morphology in corneal tissue. The results showed that, compared with the sham and control groups, total oxidant status (TOS) and oxidative stress index (OSI) levels in cornea tissue were significantly different in the moxifloxacin group, suggesting that moxifloxacin may have toxic effects on corneal tissues as indicated by oxidative stress parameters and histopathological evaluation (PUBMED:24964171).
Instruction: Medial patellofemoral ligament anatomy: is it a predisposing factor for lateral patellar dislocation? Abstracts: abstract_id: PUBMED:22448067 Trochleoplasty and medial patellofemoral ligament reconstruction for recurrent patellar dislocation. We report a case of recurrent patellar dislocation with high-grade trochlear dysplasia which persisted despite two previous operations. We did a Dejour's sulcus deepening trochleoplasty, medial patellofemoral ligament reconstruction, and lateral retinacular release. Trochleoplasty and medial patellofemoral ligament reconstruction is required in patients with high grade trochlear dysplasia. abstract_id: PUBMED:22912521 Reconstruction of medial patellofemoral ligament for chronic patellar instability. Background: Disruption of the capsule, medial patellar retinaculum, and/or vastus medialis obliqus has been associated with recurrent patellar instability. Biomechanical studies have shown that the medial patellofemoral ligament (MPFL) is the main restraint against lateral patella displacement and reconstruction of the MPFL has become an accepted surgical technique to restore patellofemoral stability in patients having recurrent patellar dislocation. We report a prospective series of patients of chronic patellar instability treated by reconstruction of medial patellofemoral ligament. Materials And Methods: Twelve patients (15 knees) with recurrent dislocation of patella, were operated between January 2006 and December 2008. All patients had generalised ligament laxity with none had severe grade of patella alta or trochlear dysplasia. The MPFL was reconstructed with doubled semitendinosus tendon. Patients were followed up with subjective criteria, patellar inclination angle, and Kujala score. Results: The mean duration of followup after the operative procedures was an average of 42 months (range 24-60 months) 10 knees showed excellent results, 3 knees gave good results, and 2 knees had a fair result. 
The average patellar inclination angle decreased from 34.3° to 18.6°. The average preoperative Kujala functional score was 44.8 and the average postoperative score was 91.9. Conclusion: MPFL reconstruction using the semitendinosus tendon gives good results in patients with chronic patellar instability without predisposing factors like severe patella alta and high-grade trochlear dysplasia, and for revision cases. abstract_id: PUBMED:37469731 Combined medial patellofemoral ligament and medial patellotibial ligament reconstruction in recurrent patellar instability: A systematic review and meta-analysis. Background: The medial patellofemoral ligament (MPFL), along with the medial patellotibial ligament (MPTL) and medial patellomeniscal ligament, aid in the stabilization of the patellofemoral joint. Although the MPFL is the primary stabilizer and the MPTL is a secondary limiter, this ligament is critical in maintaining joint stability. There have been few studies on the combined MPFL and MPTL reconstruction and its benefits. Aim: To look into the outcomes of combined MPFL and MPTL reconstruction in frequent patellar instability. Methods: By May 8, 2022, four electronic databases were searched: Medline (PubMed), Scopus, Web of Science, and Google Scholar. General keywords such as "patellar instability," "patellar dislocation," "MPFL," "medial patellofemoral ligament," "MPTL," and "medial patellotibial ligament" were co-searched to increase the sensitivity of the search. Results: The pooled effects of combined MPFL and MPTL reconstruction for Kujala score (12-mo follow-up) and Kujala score (24-mo follow-up) were positive and incremental, according to the findings of this meta-analysis. The mean difference between the Cincinnati scores was also positive, but not statistically significant. The combination of the two surgeries reduces pain. According to cumulative meta-analysis, the trend of pain reduction in various studies is declining over time. 
Conclusion: The combined MPFL and MPTL reconstruction has good clinical results in knee function and, in addition to providing good control to maintain patellofemoral joint balance, the patient's pain level decreases over time, making it a valid surgical method for patella stabilization. abstract_id: PUBMED:24603827 Guidelines for medial patellofemoral ligament reconstruction in chronic lateral patellar instability. The standard surgical approach for chronic lateral patellar instability with at least two documented patellar dislocations is to stabilize the patella by using an anatomic medial patellofemoral ligament reconstruction with a mini-open technique and a graft that is stronger than the native ligament to compensate for the uncorrected predisposing factors underlying patellar instability. Even though medial patellofemoral ligament reconstruction has evolved notably during the past two decades, many aspects of the surgical technique need to be refined, and more information is needed toward this end. Adequate positioning of the graft on the femur, as well as inducing the appropriate degree of tension, are critical steps for the overall outcome of medial patellofemoral ligament reconstruction. Moreover, it is necessary in some cases to pair medial patellofemoral ligament reconstruction with other surgical procedures to address additional patellar instability risk factors, such as trochlear dysplasia, malalignment, and patella alta. abstract_id: PUBMED:21533537 Clinical outcomes of medial patellofemoral ligament repair in recurrent (chronic) lateral patella dislocations. Purpose: The specific aim of this project was to review a homogenous group of patients who had chosen to undergo medial patellofemoral ligament (MPFL) repair to stabilize their patella after repeat (chronic) patellofemoral dislocations and to record the success of this procedure in stabilizing the patella against further lateral dislocations. 
Methods: This is a retrospective study of 55 knees in 48 patients who underwent a repair of the medial patellofemoral ligament by a single surgeon over a 4-year period. The endpoint of the study was a documented patellofemoral dislocation or a minimum 2-year follow-up. Results: Fifty-five knees in 48 patients underwent an MPFL repair/reattachment. Patella alta, as defined by the Insall-Salvati (I/S) ratio, was within normal limits in 6 (14%). The I/S ratio was greater than 1.2 in 38 patients (86%) and greater than 1.4 in 23 patients (52%). Trochlear dysplasia, as defined by a positive crossing sign on a true lateral radiograph and/or a sulcus angle on axial view greater than 145°, was present in 45/47 (96%). Successful management of the knee, as defined by no further patella dislocation, occurred in 22 knees. Nineteen knees sustained a redislocation event (46%). Conclusions: The surgical procedure of operative repair (reattachment) of the MPFL to its femoral attachment had a significantly high rate of failure in this patient group of chronic (repeat) lateral patellar dislocators. abstract_id: PUBMED:26389065 Medial Patellofemoral Ligament Reconstruction: A Comprehensive Review. It has been suggested that in the majority of patellar dislocation cases, the medial patellofemoral ligament (MPFL) is disrupted, with a high recurrence rate especially in female patients. Although MPFL tear is not the primary cause of instability, MPFL reconstruction is effective for stabilizing the knee and may alone prevent lateral patellar dislocation. There is limited but growing evidence that MPFL reconstruction for patellofemoral instability leads to excellent functional outcomes. Growing awareness of the biomechanical contribution of the MPFL has led to an upsurge in the publication of techniques and trials dealing with reconstructive techniques, warranting a review that includes the most recent evidence.
The aim of this article was to review and summarize the recent literature concerning MPFL reconstruction and provide a comprehensive review of previous studies ranging from basic science to current surgical techniques and results. abstract_id: PUBMED:35451251 Reconstruction of the medial patellofemoral ligament with quadricipital tendon autograft Introduction: Alterations of the patellofemoral joint are one of the most common causes of pain and inflammation as well as joint damage and instability of the knee. Patellofemoral instability is a common multifactorial entity, requiring realignment by proximal, distal, or a combination of techniques. Among the proximal soft-tissue procedures is medial patellofemoral ligament (MPFL) plasty, which aims to realign the patella medially and restore the anatomy between the quadricipital tendon, patella, and tubercle of the tibia. Objective: To demonstrate that MPFL plasty with a quadriceps autograft is an effective technique with a low level of complications. Material And Methods: Observational, longitudinal, retrospective, descriptive, basic and clinical study. We present a series of 15 patients operated with this technique between October 2014 and September 2019. Results: MPFL plasty with a quadricipital autograft is a safe technique that does not use implants, which reduces the risk of complications, and it restores the anatomy of the extensor apparatus of the knee. Conclusion: Our technique of medial patellofemoral ligament repair with a quadriceps autograft is safe and reproducible, with good medium-term results and a low incidence of complications. Patients have returned to their previous sports activities without episodes of re-dislocation.
Recurrent patellar dislocations were treated via knee arthroscopy combined with C-arm fluoroscopy, and reconstruction of the medial patellofemoral ligaments. Between October 2013 and March 2017, 52 cases of recurrent patellar dislocation [27 males and 25 females; age, 16-47 years (mean, 21.90 years)] were treated. Arthroscopic exploration was performed and patellofemoral joint cartilage injuries were repaired. It was subsequently determined whether it was necessary to release the lateral patellofemoral support belt. Pre-operative measurements were used to decide whether tibial tubercle osteotomy was required. Medial patellofemoral ligaments were reconstructed using autologous semitendinosus tendons. Smith and Nephew model 3.5 line anchors were used to double-anchor the medial patellofemoral margin. On the femoral side, the medial patellofemoral ligament was fixed using 7-cm, absorbable, interfacial compression screws. All cases were followed for 1-40 months (average, 21 months). The Q angle, tibial tuberosity trochlear groove distance, Insall-Salvati index, patellofemoral angle, lateral patellofemoral angle and lateral shift were evaluated on X-Ray images using the picture archiving and communication system. Subjective International Knee Documentation Committee (IKDC) knee joint functional scores and Lysholm scores were recorded. Post-operative fear was absent, and no patellar re-dislocation or re-fracture was noted during follow-up. At the end of follow-up, the patellofemoral angle (0.22±4.23°), lateral patellofemoral angle (3.44±1.30°), and lateral shift (0.36±0.14°) differed significantly from the pre-operative values (all, P<0.05). Furthermore, IKDC and Lysholm scores (87.84±3.74 and 87.48±3.35, respectively) differed significantly from the pre-operative values (both, P<0.05).
These findings suggest that, in the short term, treatment of recurrent patellar dislocation via knee arthroscopy combined with C-arm fluoroscopy and reconstruction of the medial patellofemoral ligament is effective. abstract_id: PUBMED:28932880 Double-bundle anatomical medial patellofemoral ligament reconstruction with lateral retinaculum plasty can lead to good outcomes in patients with patellar dislocation. Purpose: To evaluate the clinical outcomes of double-bundle anatomical medial patellofemoral ligament (MPFL) reconstruction combined with lateral retinaculum plasty versus isolated MPFL reconstruction for patellar dislocation. Methods: From 2011 to 2013, 64 patients underwent either double-bundle anatomical MPFL reconstruction combined with lateral retinaculum plasty (Group A, 32 patients), or isolated MPFL reconstruction (Group B, 32 patients). Clinical evaluations were performed 2 years post-operatively and included determination of the number of patients with patellar redislocation, the patellar tilt angle, lateral patellar shift, subjective symptoms, and functional outcomes as evaluated with the Kujala score and Lysholm score. Results: No dislocation or subluxation occurred during the 25-month follow-up. Radiographically, the patellar tilt angle was 6.0° ± 1.7° in Group A and 6.7° ± 2.0° in Group B, without a significant difference (n.s.). The lateral patellar shift was 8.0 ± 2.6 mm in Group A and 8.5 ± 2.4 mm in Group B (n.s.). The mean Kujala score was 91.8 ± 3.7 in Group A and 91.5 ± 3.6 in Group B post-operatively (n.s.), and the mean Lysholm score was 92.5 ± 3.8 and 90.8 ± 4.9, respectively (n.s.). Conclusions: Double-bundle anatomical MPFL reconstruction with lateral retinaculum plasty is a promising procedure that provides a new option for patellar dislocation. Level Of Evidence: III.
abstract_id: PUBMED:35362817 Isolated medial patellofemoral ligament reconstruction is valid to stabilize patellofemoral joint but uncertain to reduce patellar height in setting of lateral patellar dislocation and patella alta. Introduction: Medial patellofemoral ligament reconstruction (MPFLR) is the most commonly used surgical treatment for patients with lateral patellar dislocation (LPD). It is still poorly understood whether or not MPFLR has a contributory effect on decreasing patellar height. Materials And Methods: Forty-five patients who underwent isolated MPFLR for LPD and patella alta were evaluated with a mean follow-up period of 24 months (22-25 months). Knee joint functions were evaluated by Banff patellofemoral instability instrument (BPII) 2.0 scores and Kujala scores. Patellofemoral engagement and stability were assessed by the patella tilt angle (PTA) and patellar congruence angle (PCA) measured by CT scans, and the patellar-glide test. Patellar height was calculated on lateral radiographs according to three methods: Caton-Deschamps ratios (CDR), Insall-Salvati ratios (ISR), and Blackburne-Peel ratios (BPR). A threshold value of p < 0.05 denoted a statistically significant difference. Results: Significant improvements were found in both BPII 2.0 scores, which increased from 41.7 to 77.8 (p < 0.001), and Kujala scores, which increased from 49.2 to 85.5 (p < 0.001). Post-operative PTA and PCA decreased from 19.6 ± 8.8 to -3.4 ± 6.2 and from 24.6 ± 7.3 to 13.1 ± 3.8 degrees, respectively (p < 0.001). No patients showed lateral translation of more than grade II in the patellar-glide test. Regarding patellar height, a tiny reduction (Δ = 0.02, Δ max = 0.09) was discovered using CDR (p = 0.027), rather than ISR or BPR. All measurements of radiographic indices had excellent intra- and inter-rater reliability (ICC > 0.75).
Conclusions: Isolated anatomic MPFLR is sufficient to achieve good clinical outcomes, as well as patellofemoral stability and high rates of return-to-sport. However, it is unclear if the reconstructed MPFL has a contributory effect on reducing patellar height. Answer: The medial patellofemoral ligament (MPFL) is a critical structure in the stabilization of the patellofemoral joint. Disruption of the MPFL has been associated with recurrent patellar instability, and biomechanical studies have shown that the MPFL is the main restraint against lateral patella displacement (PUBMED:22912521). The MPFL, along with the medial patellotibial ligament (MPTL) and medial patellomeniscal ligament, aid in the stabilization of the patellofemoral joint, with the MPFL being the primary stabilizer and the MPTL serving as a secondary limiter (PUBMED:37469731). In cases of patellar dislocation, it is often found that the MPFL is disrupted, which is not the primary cause of instability but is a significant predisposing factor for lateral patellar dislocation (PUBMED:26389065). The anatomy of the MPFL, therefore, plays a crucial role in patellar stability, and its insufficiency or injury can predispose individuals to lateral patellar dislocation. Reconstruction of the MPFL has become an accepted surgical technique to restore patellofemoral stability in patients with recurrent patellar dislocation (PUBMED:22912521). However, it is also noted that adequate positioning of the graft on the femur and inducing the appropriate degree of tension are critical for the overall outcome of MPFL reconstruction (PUBMED:24603827). Additionally, in some cases, it may be necessary to pair MPFL reconstruction with other surgical procedures to address additional patellar instability risk factors, such as trochlear dysplasia, malalignment, and patella alta (PUBMED:24603827). 
In summary, the anatomy of the MPFL is indeed a predisposing factor for lateral patellar dislocation, and its disruption is commonly associated with recurrent patellar instability. Reconstruction of the MPFL is a key surgical intervention to address this issue and restore stability to the patellofemoral joint.
Instruction: Are altered pharmacokinetics of non-steroidal anti-inflammatory drugs (NSAIDs) a risk factor for gastrointestinal bleeding? Abstracts: abstract_id: PUBMED:9578191 Are altered pharmacokinetics of non-steroidal anti-inflammatory drugs (NSAIDs) a risk factor for gastrointestinal bleeding? Aims: We hypothesised that pharmacokinetic factors might go some way to explaining the risk of major gastrointestinal haemorrhage with non-steroidal anti-inflammatory drugs (NSAIDs), with bleeders exhibiting a reduced clearance of NSAIDs compared with non-bleeders, and set out to investigate this. Methods: Fifty patients presenting to hospital with acute gastrointestinal bleeding while taking piroxicam, indomethacin, diclofenac or naproxen and age, sex, musculoskeletal disease and drug matched community dwelling controls, up to two for each index case, who had not bled were recruited. Clinical details including duration of therapy were recorded. Bleeders discontinued the implicated NSAID at presentation, controls at least five half-lives before the study. Bleeders were contacted by letter 1 month after discharge and invited to take part, and were studied after a median delay of 5 months. Subjects received an oral dose of their respective NSAID and venous blood was sampled over a period determined by the half-life of the NSAID. Plasma concentrations were determined by high performance liquid chromatography. Results: The median length of treatment for the index patients was 1 year (range 2 weeks to 28 years) and for the control patients 2 years (1 month to 25 years), P<0.0005. There were no significant differences in peak plasma concentration, time to peak plasma concentration or area under the plasma concentration-time curve between bleeders or controls for any of the NSAIDs studied, apart from piroxicam Cmax being lower in bleeders at 2.07 mg/l than in controls at 3.21 mg/l, mean difference (95% CI) -1.14 (-1.83 to -0.48), P<0.005.
Conclusions: The data failed to support the hypothesis that reduced clearance of NSAIDs, which results in higher plasma concentrations, is a risk factor for acute gastrointestinal haemorrhage. abstract_id: PUBMED:33762373 Non-steroidal anti-inflammatory drugs and the gastrointestinal tract. Non-steroidal anti-inflammatory drugs (NSAIDs) are used commonly but can cause foregut symptoms, peptic ulcer disease and small bowel enteropathy. Such iatrogenic injury can be complicated by gastrointestinal bleeding and perforation. Limiting NSAID use or co-administration with proton pump inhibitors (PPIs) reduces dyspepsia, peptic ulcer disease and rates of complications. Selective cyclo-oxygenase (COX)-2 inhibitors are as effective as adding PPIs in preventing upper and lower gastrointestinal complications. COX-2 inhibitors are suggested in those with high cardiovascular risk and the addition of PPI in those with high risk of bleeding. Where required, COX-2 inhibitor monotherapy may be preferred in unexplained iron deficiency anaemia. abstract_id: PUBMED:19650027 Non steroidal anti-inflammatory drugs (NSAIDs) - balancing gastrointestinal complications and the cardiovascular risk Because of their anti-inflammatory, analgesic and antipyretic properties, non-steroidal anti-inflammatory drugs (NSAIDs) are among the most frequently prescribed drugs throughout all medical fields. Both gastrointestinal and cerebro- and cardiovascular risks need to be considered. Especially before starting long-term medical treatment with NSAIDs, the individual gastrointestinal and cardiovascular risk of the patient has to be assessed carefully. abstract_id: PUBMED:26369685 Non-Steroidal Anti-Inflammatory Drugs (NSAIDs) Induced Dyspepsia. Non-steroidal anti-inflammatory drugs (NSAIDs) are the most prescribed group of drugs in the world.
They are used primarily for pain relief in chronic inflammatory joint disease and act by inhibiting the enzymes COX1 and COX2, ultimately preventing the production of active prostanoids which are required for the innate inflammatory pathway. The use of NSAIDs has been associated with the development of gastrointestinal (GI) symptoms ranging from simple dyspepsia to life-threatening GI bleeds and perforations. The definition of dyspepsia has evolved over the years and this has hampered accurate studies on the prevalence of dyspepsia, as different studies used varying criteria to define dyspepsia. It is now known that NSAIDs significantly increase the risk of dyspepsia. The risk of developing peptic ulcer disease varies with specific NSAIDs and dosages, but there is no correlation between the symptoms of dyspepsia and underlying peptic ulcers. The pathogenesis of dyspepsia with NSAIDs is not completely understood. Peptic ulceration alone is not able to account for the majority of dyspepsia symptoms encountered by NSAID users. Erosive oesophagitis secondary to NSAIDs may be a contributing factor to the prevalence of dyspepsia in NSAID users. Altered gut permeability and changes in gastric mechanosensory function due to NSAIDs may also be contributory factors. Management of NSAID-induced dyspepsia involves a multipronged approach. Drug avoidance, if possible, would be ideal. Other options include using the lowest effective dose, changing to an NSAID with a safer GI risk profile, avoiding concurrent use with other NSAIDs, and, if the patient has a previous history of peptic ulcer disease, co-prescribing anti-secretory medications such as proton pump inhibitors. Eradication of Helicobacter pylori has a protective role against developing peptic ulcers and may also improve symptoms of NSAID-induced dyspepsia. abstract_id: PUBMED:29392089 A Comprehensive Review of Non-Steroidal Anti-Inflammatory Drug Use in The Elderly.
NSAIDs, non-steroidal anti-inflammatory drugs, are one of the most commonly prescribed classes of pain medications. They are a highly effective drug class for pain and inflammation; however, NSAIDs are known for multiple adverse effects, including gastrointestinal bleeding, cardiovascular side effects, and NSAID-induced nephrotoxicity. As our society ages, it is crucial to have comprehensive knowledge of this class of medication in the elderly population. Therefore, we reviewed the pharmacodynamics and pharmacokinetics, current guidelines for NSAID use, the adverse effect profile, and drug interactions between NSAIDs and medications commonly used in the elderly. abstract_id: PUBMED:23855222 Present status of gastrointestinal damage due to non-steroidal anti-inflammatory drugs (NSAIDs) Non-steroidal anti-inflammatory drugs (NSAIDs) are roughly divided into a low-dose aspirin group used for primary and secondary prevention of cardiovascular events and non-aspirin NSAIDs used for treatment of bone and joint diseases. Both cause gastrointestinal damage directly or indirectly. In the present study, we reviewed gastrointestinal damage due to non-aspirin NSAIDs with respect to the esophagus, stomach/duodenum, small intestine and colon. Damage due to NSAIDs occurs in all digestive tracts, and since the analgesic effect of NSAIDs hides subjective symptoms, the symptoms are often not treated until they have advanced to a serious state. Further, patients receiving NSAIDs are mostly elderly and have complications, so that the onset of the conditions is serious and prevention is important. It is necessary to investigate a method that is effective for preventing damage throughout the digestive tract, and the mechanisms of damage must be understood for this reason. abstract_id: PUBMED:26538453 Choosing and using non-steroidal anti-inflammatory drugs in haemophilia.
The management of pain and inflammation in haemophilic arthropathy is challenging due to the lack of anti-inflammatory analgesic agents perfectly suitable for this population. Non-steroidal anti-inflammatory drugs (NSAIDs) are widely used in the management of arthritis due to their analgesic and anti-inflammatory effects. Their use in persons with haemophilia (PWH), however, is limited due to an increased risk of bleeding, mainly from the upper gastrointestinal (UGI) tract. Cyclooxygenase-2 (COX-2) selective NSAIDs, which have a comparable analgesic effect to traditional NSAIDs (tNSAIDs) but cause less UGI bleeding, have been considered a suitable option for treatment of haemophilic arthropathy. COX-2 inhibitors, however, carry an increased risk of cardiovascular (CV) disease. Although the atherosclerotic burden in PWH is similar to that in the general population, the risk of CV-related deaths is lower. PWH have a higher risk of GI bleeding and a lower risk of thrombotic disease compared to the general population. Therefore, when PWH require anti-inflammatory/analgesic agents, it seems reasonable to use the lowest dose of COX-2 inhibitors for the shortest period together with a proton pump inhibitor. Helicobacter pylori infection should be tested for and eradicated prior to starting NSAID treatment in PWH. Furthermore, regular blood pressure and renal function test monitoring is required during COX-2 inhibitor treatment. abstract_id: PUBMED:10379468 Non-steroidal anti-inflammatory drugs and gastrointestinal bleeding. Non-steroidal anti-inflammatory drug use carries the risk of gastrointestinal complications (1% over 6 months), which is increased by a factor of 4 to 5, although strong differences are observed between different non-steroidal anti-inflammatory drugs. This risk is present in both the upper and lower gastrointestinal tract, which indicates that non-steroidal anti-inflammatory drugs induce bleeding from both peptic ulcer and non-peptic ulcer sources.
Symptoms are poor predictors of serious lesions and complications, which may occur without previous symptoms. At present, risk factors for non-steroidal anti-inflammatory drug-associated upper gastrointestinal bleeding are well defined and include ulcer or complication history, age, high non-steroidal anti-inflammatory drug dose, and combination with corticosteroids or warfarin. Helicobacter pylori infection is not considered a risk factor for complications in non-steroidal anti-inflammatory drug users. There is a high prevalence of over-the-counter non-steroidal anti-inflammatory drug (especially aspirin) use among those presenting with gastrointestinal complications. Prophylactic aspirin regimens increase the risk of gastrointestinal bleeding. The potential beneficial effect of nitrate treatments (nitric oxide donors) in low-dose aspirin users deserves further study. The mechanisms involved in the induction of gastrointestinal bleeding by non-steroidal anti-inflammatory drugs are poorly understood. Platelet activity inhibition associated with an abnormal, but reversible, prolongation of the bleeding time in susceptible individuals using aspirin might be a mechanism affecting no more than a third of patients with gastrointestinal bleeding. abstract_id: PUBMED:30149590 Use of Non-Steroidal Anti-Inflammatory Drugs and Associated Gastroprotection in a Cohort of Workers. Background: This study describes the prevalence of non-steroidal anti-inflammatory drug (NSAID) use, and analyses prescribing patterns of NSAIDs and associated gastroprotection. Methods: The study population consisted of 5650 workers at the General Motors automobile assembly plant in Zaragoza, Spain. NSAID prescription data for 2014 were obtained from the prescription database of Aragon (Spain). NSAID consumption was determined based on the number of defined daily doses purchased per year. Heavy NSAID users were identified using Lorenz curves.
Results: NSAID use in the cohort was high (40.7% of workers, 95% CI 39.4-41.9). The prescription of proton pump inhibitors increased with age. Gastrointestinal protection was lacking in some participants who were being treated with drugs associated with a high risk of gastrointestinal bleeding. Heavy NSAID users (defined as those above the 95th percentile of consumption) accounted for 26% of total DDDs, and consumed a greater proportion of coxibs than non-heavy users. Conclusions: The rate of NSAID consumption in the cohort was high. To reduce the risk of gastrointestinal complications, monitoring and adequate gastroprotection are essential in patients who are prescribed NSAIDs for long periods of time or who are treated concomitantly with drugs that increase the risk of gastrointestinal bleeding. abstract_id: PUBMED:2979282 Ulcer and ulcer complications from non-steroidal anti-inflammatory drugs: what is the risk? Gastroenterologists believe that non-steroidal anti-inflammatory drugs (NSAIDs) cause dyspepsia, may cause ulcers to develop de novo and cause ulcer bleeding and perforation. Regulatory authorities are aware that NSAID-associated adverse events are reported more often than for any other drug class, and that gastrointestinal events are most common and often serious. A case-control study in the UK indicates that those who use NSAIDs may be between two and four times as liable to gastrointestinal bleeding and probably perforation as non-users, particularly if elderly. It has further been suggested that the chances of dying of ulcer complications are very high in NSAID users. By contrast, studies in the USA conducted prospectively and post-marketing surveillance in the UK have appeared to show little risk. Differences may be partially, if not completely, explained by the variable methodologies employed. Dispute also exists about the rank order and significance of toxic effects among the various agents.
Answer: The study presented in the abstract from PUBMED:9578191 does not support altered pharmacokinetics of non-steroidal anti-inflammatory drugs (NSAIDs) as a risk factor for gastrointestinal bleeding. The study hypothesized that patients who experienced gastrointestinal bleeding while taking NSAIDs would exhibit reduced clearance of these drugs compared to those who did not experience bleeding. However, the results showed no significant differences in peak plasma concentration, time to peak plasma concentration, or area under the plasma concentration-time curve between bleeders and controls for the NSAIDs studied, except for a lower peak plasma concentration of piroxicam in bleeders. This single finding did not support the overall hypothesis that reduced clearance, resulting in higher plasma concentrations, is a risk factor for acute gastrointestinal hemorrhage. Therefore, the data from this study failed to confirm altered pharmacokinetics of NSAIDs as a risk factor for gastrointestinal bleeding.
Instruction: Beta-human chorionic gonadotropin in semen: a marker for early detection of prostate cancer? Abstracts: abstract_id: PUBMED:29757577 Supposed pituitary-production of human chorionic Gonadotropin induced by androgen deprivation therapy. Introduction: The main cause of slightly elevated human chorionic gonadotropin (HCG) after successful treatment of male germ cell tumors is considered to be pituitary-derived HCG. It is well known that pituitary-derived HCG is frequently detected in postmenopausal women. We evaluated the status of serum HCG in men with elevated gonadotropins, which were induced by androgen deprivation therapy, using commercially available assays. Materials And Methods: We enrolled 44 patients with prostate cancer who underwent luteinizing hormone-releasing hormone agonist treatment. We measured serum follicle-stimulating hormone (FSH), serum luteinizing hormone (LH), serum total HCG, serum free HCG-β subunit, and urine total HCG 3 times per patient: on the day of treatment initiation, the next day, and 3 months after. Results: On the day after treatment initiation, serum and urine HCG were detected in 61% and 73% of patients, respectively. Markedly strong correlations were observed between serum/urine HCG and FSH/LH. In particular, receiver operating characteristic curve analysis indicated an excellent area under the curve (0.977, 95% confidence interval 0.951-1.003) for serum HCG-detectable LH. At the cutoff value of 21.07 mIU/mL for serum HCG-detectable LH, the sensitivity and specificity were 96.7% and 95.3%, respectively. Serum HCG-β was not detectable at any time in any patient. Conclusions: Presumably pituitary-derived HCG can frequently be detected in patients with elevated gonadotropins, and there is a firm association between HCG detection and gonadotropin levels.
abstract_id: PUBMED:6259190 Stimulation of adenosine 3',5'-monophosphate and sex steroids in the spermatic venous blood after human chorionic gonadotropin injection into human testes. To study the participation of cAMP in the action of gonadotropin on testicular steroidogenesis in the human testis in vivo, we have measured the concentrations of cAMP, testosterone, 5 alpha-dihydrotestosterone, estrone, 17 beta-estradiol, and hCG in the spermatic venous blood of the patients with prostatic cancer after hCG injections into the testis. Five minutes after hCG administration, spermatic cAMP increased to 5 times the pretreated level, and after 30 min, it increased to 20 times the pretreated level. Testosterone increased gradually after hCG injection, and the 2-fold increase was demonstrated at 50 min. Although the pattern of the changes in spermatic 5 alpha-dihydrotestosterone was similar to that of testosterone, a statistically significant increase was not observed after hCG administration. Estrogen production was also stimulated by hCG. These results are consistent with the view that cAMP may participate in the action of hCG upon steroidogenesis in the testis of human beings in vivo, as has previously been observed with rat and human testes in vitro. abstract_id: PUBMED:3601278 Receptor binding properties and biologic action of deglycosylated human chorionic gonadotropin in human ovary and testis. The receptor binding properties and biologic actions of chemically deglycosylated-asialo human choriogonadotropin (AHF-hCG) were studied in human ovary and testis. In corpus luteum and testis homogenates, the relative binding affinity of AHF-hCG was two- to fourfold higher in the ovary and five- to tenfold higher in the testis than that of native hCG. 
When assayed for luteinizing hormone (LH)-like activity in granulosa-luteal cells from in vitro fertilization patients and in testicular minces from patients undergoing orchiectomy for prostatic cancer, AHF-hCG did not stimulate cyclic adenosine monophosphate production. When added with hCG to granulosa-luteal cells or to testicular minces, AHF-hCG inhibited hCG-stimulated cyclic adenosine monophosphate production. These results indicate that the enhanced affinity to LH receptor caused by removal of the sugar moieties from hCG is associated with total inability to activate granulosa-luteal and Leydig cell adenylate cyclase, and that AHF-hCG is, in the human gonad, an hCG antagonist. abstract_id: PUBMED:6135909 Treatment of advanced prostatic cancer with LHRH analogue ICI 118630: clinical response and hormonal mechanisms. 9 of 12 patients with advanced metastatic carcinoma of the prostate treated with luteinising-hormone-releasing-hormone (LHRH) analogue ICI 118630 for a mean period of 6 months showed objective evidence of response to treatment. Of 8 patients with bone pain, 7 obtained relief. After 6 weeks of treatment testosterone concentrations were reduced to castrate levels (range less than 2 to 5.5 nmol/l) from a pretreatment mean value of 15.7 nmol/l (range 10.3-24 nmol/l). Basal gonadotropin levels and gonadotropin responses to acute LHRH stimulation were suppressed within 2 weeks of treatment. However, the testosterone response to stimulation with human chorionic gonadotropin was unimpaired 4 weeks after the start of treatment. Therefore suppression of the basal testosterone concentration by ICI 118630 was due to inhibition of pituitary luteinising-hormone secretion rather than direct inhibition of testicular Leydig-cell function. ICI 118630 offers an alternative treatment to orchidectomy and oestrogen therapy. abstract_id: PUBMED:3023429 Lack of a direct effect of gonadotropin hormone-releasing hormone agonist on human testicular steroidogenesis. 
In an attempt to determine whether the chronic administration of GnRH agonist (GnRH-A) has a direct inhibitory effect on testicular steroidogenesis in the human, the testes of four men with disseminated prostatic cancer who were treated with GnRH-A daily for at least 1 yr were assayed for intratesticular pregnenolone (5-pregnen-3 beta-ol-20-one), progesterone, dehydroepiandrosterone, 17 alpha-hydroxypregnenolone (5-pregnen-3 beta,17 alpha-diol-20-one), 17 alpha-hydroxyprogesterone, androstenedione, and testosterone (T). In addition, testicular 17 alpha-hydroxylase, 17,20-desmolase, and 17 beta-hydroxysteroid dehydrogenase enzyme activities of the delta 4 pathway were measured. These intratesticular steroids and enzyme activities from four GnRH-A-treated patients were compared to those in five men (controls) who were orchiectomized as the primary treatment for their disseminated prostatic cancer and in three other men who were treated for 3-12 months with GnRH-A daily but received, in addition to the daily GnRH-A, 1000 IU hCG, im, every other day for 3 days immediately before their salvage orchiectomy, which was performed when their disease progressed. In the control group, the delta 5-steroids, particularly dehydroepiandrosterone and pregnenolone, represented the majority of the intratesticular steroids. Compared to control values, all intratesticular steroids except delta 4-P (for which there was no difference) were significantly lowered by treatment with GnRH-A. Intratesticular T was reduced by 98% from 328 +/- 139 (+/- SEM) ng/g testis in the control group to 8 +/- 3 in the GnRH-A-treated group (P less than 0.01). The additional treatment with hCG for 3 days in the GnRH-A-treated group reversed the inhibition of all steroids to either control or above control levels, with intratesticular T rising to 1144 +/- 273 ng/g testis.
A similar trend was found for all three enzymatic activities, i.e., GnRH-A alone inhibited each of the enzymatic activities, whereas the addition of hCG reversed this inhibition by GnRH-A. These data indicate that the chronic administration of GnRH-A to elderly men results in inhibition in both the delta 4 and delta 5 pathways, with a subsequent decrease in the intratesticular T concentration. The ability of exogenous hCG to reverse both the reduction in delta 4 and delta 5 intratesticular steroid content and the intratesticular enzyme activities induced by GnRH-A treatment supports the concept that GnRH-A does not have a direct inhibitory effect on testicular T biosynthesis. abstract_id: PUBMED:1514027 Anti-LHRH and anti-pituitary gonadotropin vaccines: their development and clinical applications. Active immunization against hormones involved in the regulation of reproduction is a promising approach to immunocontraception. The hypothalamic peptide, LHRH, controls the synthesis and release of the pituitary gonadotropins, LH and FSH, which regulate gonadal steroidogenesis, sperm production, follicular development and ovulation. Immunizing female primates against LHRH or LH induces infertility, but also disrupts the menstrual cycle. Immunization against the beta subunit of the placental hormone, chorionic gonadotropin (hCG), or its fragment prevents pregnancy without interfering with menstrual cycles or ovulation. hCG vaccines have reached the stage of clinical trials. FSH and LHRH have been tested for immunocontraception in male primates. While active as well as passive immunization against FSH reduced spermatogenesis severely, azoospermia could not be achieved consistently. Immunization against LHRH effectively suppressed spermatogenesis in rats and rabbits. Normal sexual behaviour was maintained by concomitant androgen administration. Fertility was restored when antibody titres declined and no adverse effects were observed. 
A number of LHRH vaccine preparations are being tested in men in several countries, including the United States. Since the LHRH vaccine reduces serum testosterone levels, the first clinical studies involve men with prostate cancer. These trials will be followed by immunization of normal men if the antibody response is sufficient and no adverse effects are observed. abstract_id: PUBMED:8237249 Long-term endocrine effects of administration of either a non-steroidal antiandrogen or a luteinizing hormone-releasing hormone agonist in men with prostate cancer. The claimed ability of non-steroidal antiandrogens to preserve libido and sexual potency is sought as a potential improvement in the palliative management of prostate cancer. A critical issue for the clinical use of these compounds is, however, the reported evidence in the rat of an excessive increase in testosterone concentrations as a consequence of the androgen negative feedback interruption. On the other hand, the recovery of testicular function after long-term inhibition by luteinizing hormone-releasing hormone (LHRH) analogs is also an important concern in view of the proposed use of these compounds for the treatment of several non-malignant conditions. We addressed these issues by studying the long-term endocrine effects induced by the administration of either the non-steroidal antiandrogen nilutamide or the depot preparation of D-Trp6-LHRH in men with prostate cancer. Treatment with the antiandrogen induced a marked increase in gonadotropin levels, LH concentrations rising from a mean (SEM) of 17.5 +/- 1.6 to a maximum of 56.6 +/- 6.9 kU/l (p < 0.001), while mean testosterone and 17 beta-estradiol concentrations rose only by about 50% and 70% over pretreatment values, testosterone levels reaching a plateau after 1 month of treatment.
In the subjects treated with the LHRH agonist, 6 months after discontinuation of long-term administration the mean (+/- SEM) LH had risen to 36.9 +/- 6.8 IU/l, while mean testosterone levels were still as low as 1.7 +/- 0.7 and rose only to a maximum of 4.2 +/- 1 nmol/l after high-dose human chorionic gonadotropin loadings. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:21273504 Local expression of epidermal growth factor-like growth factors in human testis and its role in spermatogenesis. Microdissection testicular sperm extraction (micro-TESE) has revealed that spermatogenesis in nonobstructive azoospermia (NOA) patients is heterogeneous, even in the same testis, but there is no information regarding growth factors to support spermatogenesis. We investigated the involvement of epidermal growth factor (EGF)-like growth factors, which play important roles in cell proliferation and differentiation, in NOA patients who underwent micro-TESE. Testicular samples were obtained from 5 fertile men (15 samples), 5 prostate cancer patients receiving maximum androgen blockade (10 samples), and 13 NOA patients who underwent micro-TESE (50 samples). The expression of the mRNA for EGF, heparin binding (HB)-EGF, amphiregulin, epiregulin, betacellulin, and transforming growth factor (TGF)-α was analyzed by real-time polymerase chain reaction analysis and adjusted using the expression of glyceraldehyde-3-phosphate dehydrogenase. Heterogeneous expression of these EGF-like growth factors was observed even in the same testis. The expression of HB-EGF, amphiregulin and TGF-α in NOA and prostate cancer patients was significantly lower than that observed in fertile controls. In NOA patients, expression in the testicular samples comprising mature sperm was significantly higher than in those without mature sperm, indicating that HB-EGF, amphiregulin, and TGF-α are considered to participate in creating a suitable niche for spermatogenesis.
Considering the findings that ablation of gonadotropin inhibited, and human chorionic gonadotropin stimulation increased, these EGF-like growth factors, their expression is presumably under gonadotropin regulation. abstract_id: PUBMED:1538472 Suramin inhibits gonadotropin action in rat testis: implications for treatment of advanced prostate cancer. Suramin is being evaluated for the treatment of metastatic prostate cancer based on its inhibition of growth factor action. In addition, suramin may inhibit the endocrine control of androgen production, which was explored herein. Adult Sprague-Dawley rats were injected (i.p.) daily with varying doses of suramin. At a cumulative dose of 200 mg, suramin significantly depressed serum testosterone (p < 0.05) and follicle-stimulating hormone (p < 0.002) levels. In vitro studies showed that suramin-mediated suppression of androgen production might be secondary to inhibition of gonadotropin action. In MA-10 cell cultures, suramin inhibited a maximum stimulatory dose of human chorionic gonadotropin with an ED50 of 4.4 microM. Studies in rat Sertoli cell cultures showed that follicle-stimulating hormone action was also inhibited by suramin, with an ED50 of 8.6 microM. Using receptor binding assays with calf testis membrane, we showed that suramin inhibited 125I-hFSH binding to receptor in a dose-dependent fashion with an ED50 of 10.4 microM, comparable to the ED50 of suramin inhibition of follicle-stimulating hormone action in Sertoli cell cultures. Thus the mechanism of suramin's suppression of androgen production may involve multiple sites of action, including inhibition of gonadotropin binding to its receptor and suppression of pituitary gonadotropin levels in serum. This inhibition of androgen production may be useful in the treatment of advanced prostate cancer. abstract_id: PUBMED:36356295 Pituitary-testis axis dysfunction following adjuvant androgen deprivation therapy.
Men with high-risk, non-metastatic prostate cancer receive adjuvant androgen deprivation therapy (ADT) for at least 2 years according to Danish guidelines. It remains unclarified if patients regain the function of the pituitary-testis axis after cessation of ADT. Thus, we aimed to investigate the function of the pituitary-testis axis following adjuvant ADT. In this study, we included men who underwent external beam radiation therapy and ADT for high-risk prostate cancer. All patients underwent assessment of testosterone deficiency (TD) symptoms, full biochemical assessment of the pituitary-testis axis, and dynamic stimulatory tests of gonadotropin (gonadotropin-releasing hormone (GnRH) test) and testosterone production (human chorionic gonadotrophin (hCG) test). Patients were diagnosed with TD based on a combination of TD symptoms and testosterone below age-specific reference ranges. TD was characterized as primary, secondary, or mixed based on serum gonadotropins and stimulatory tests. We found that among the 51 patients included in the study, the median time on ADT was 3.2 years and median time since ADT cessation was 3.8 years. Twenty-eight patients were diagnosed with TD; 10 had primary TD (testicular dysfunction), 11 secondary TD (pituitary dysfunction), and 7 mixed TD (combined pituitary and testicular dysfunction). An inadequate testosterone response to hCG stimulation was shown in 42 patients, whereas only 11 patients had a subnormal gonadotropin response to GnRH. We conclude that persistent TD is a common long-term consequence of adjuvant ADT in prostate cancer survivors, equally distributed between pituitary and testicular dysfunction. The study emphasizes the necessity for systematic follow-up of full pituitary-testis axis function in patients receiving adjuvant ADT. Answer: The provided abstracts do not directly address the use of beta human chorionic gonadotropin (beta-hCG) in semen as a marker for the early detection of prostate cancer. 
However, they do discuss various aspects of hCG and its relation to testicular function, prostate cancer treatment, and the pituitary-testis axis. For instance, one study investigated the status of serum hCG in men with elevated gonadotropins induced by androgen deprivation therapy (ADT) for prostate cancer treatment. It was found that pituitary-derived hCG can be frequently detected in patients with elevated gonadotropins, suggesting a firm association between hCG detection and gonadotropin levels (PUBMED:29757577). Another study examined the long-term endocrine effects of administering either a non-steroidal antiandrogen or a luteinizing hormone-releasing hormone (LHRH) agonist in men with prostate cancer. It was observed that treatment with the antiandrogen induced a marked increase in gonadotropin levels, while testosterone and estradiol concentrations rose only modestly (PUBMED:8237249). Additionally, research on the pituitary-testis axis dysfunction following adjuvant ADT revealed that persistent testosterone deficiency (TD) is a common long-term consequence of adjuvant ADT in prostate cancer survivors, with dysfunction being equally distributed between pituitary and testicular dysfunction (PUBMED:36356295). While these studies provide insights into the hormonal changes associated with prostate cancer treatment and the potential for hCG to be detected in serum and urine, they do not establish beta-hCG in semen as a specific marker for the early detection of prostate cancer. Further research would be required to determine the validity and utility of beta-hCG in semen for this purpose.
Instruction: Does mirtazapine have a more rapid onset than SSRIs? Abstracts: abstract_id: PUBMED:11411818 Does mirtazapine have a more rapid onset than SSRIs? Background: A single study utilizing a cross-sectional analysis of scores on the Hamilton Rating Scale for Depression (HAM-D) suggested that mirtazapine has a more rapid onset than selective serotonin reuptake inhibitors (SSRIs). Analysis based on the HAM-D may favor drugs with sleep-producing effects. The purpose of the present study was to determine if a review of all studies comparing an SSRI with mirtazapine, utilizing persistent improvement as the dependent variable, would suggest that mirtazapine had a more rapid onset than SSRIs. Method: All double-blind studies comparing mirtazapine with SSRIs were analyzed. Included in the analysis to determine speed of onset were 298 patients taking mirtazapine and 285 taking an SSRI. Pattern analysis, which has been described and used by other researchers, was employed to study speed of onset. Results: At the end of each of the 3 studies, the total number of responders for each of the drugs did not differ. However, the proportion of responders with onset of persistent improvement in week 1 was greater for mirtazapine (13%, 38/298) than for the SSRIs (6%, 18/285; chi2 = 6.95, df = 1, p = .008). Conclusion: These data support the possibility that mirtazapine may have a more rapid onset than SSRIs. This observation should be considered preliminary because of the retrospective nature of the analysis and the absence of a placebo group. abstract_id: PUBMED:37700038 Usefulness of mirtazapine and SSRIs in late-life depression: post hoc analysis of the GUNDAM study. Objective: Mirtazapine and SSRIs are widely prescribed as first-line agents for late-life depression. However, evidence for these drugs is mostly based on non-elderly patients. 
Therefore, we reanalyzed a randomized controlled trial of mirtazapine versus SSRIs for depression in a sub-population of late-life patients. Methods: A randomized controlled trial was conducted with 141 patients, of whom 41 were elderly, and 100 were non-elderly. This study compared SSRIs and mirtazapine in late-life depression, examined late-onset and early adult-onset separately and compared elderly and non-elderly patients for each drug. Treatment effects and adverse events were assessed using the Hamilton Depression Rating Scale and the Udvalg for Kliniske Undersøgelser Side Effect Rating Scale, respectively. Results: In late-life depression, mirtazapine showed faster HAM-D total score improvement (3.3 points difference, p = 0.021) and higher improvement in insomnia (1.7 points difference, p = 0.001) and appetite (1.2 points difference, p = 0.020). Similar findings were observed for late-onset depression with the HAM-D total score (4.3 points difference, p = 0.007) and appetite (0.9 points difference, p = 0.004), favoring mirtazapine. Depressive symptoms were generally less improved in late-life depression than in non-late-life depression. Regarding the effect of mirtazapine on appetite, late-life depression showed greater improvement (0.7 points difference, p = 0.008). Nausea and micturition disturbances were more common with SSRIs in late-life depression than in non-late-life depression. In contrast, somnolence was less common in late-life depression with mirtazapine. Conclusion: The potential usefulness of mirtazapine in elderly patients was demonstrated. The results also showed differences in the treatment response to SSRIs and mirtazapine between elderly and non-elderly patients. abstract_id: PUBMED:11735614 Third-generation antidepressants: do they offer advantages over the SSRIs? Third-generation antidepressants are a group of antidepressant agents of variable action, not confined to serotonin reuptake inhibition. 
These agents include venlafaxine, reboxetine, nefazodone and mirtazapine. Claims have been made for these agents in terms of improved efficacy, faster speed of onset of effect and greater safety in the treatment of depression compared with previous medications, such as the selective serotonin reuptake inhibitors (SSRIs). This article reviews the evidence for these improvements. Thirty active comparator studies were reviewed involving the third-generation antidepressant agents. While there were isolated reports of improvements over comparator agents for venlafaxine, reboxetine and mirtazapine, there were no convincing differences between third-generation agents and comparators in terms of overall efficacy, relapse prevention and speed of onset. The third-generation antidepressants were, however, of equivalent safety to SSRIs and maintained improvements in safety over first-generation agents. abstract_id: PUBMED:14969574 SSRIs versus non-SSRIs in post-traumatic stress disorder: an update with recommendations. Post-traumatic stress disorder (PTSD) is a highly prevalent (7.8% lifetime rate) anxiety disorder with impairment in daily functioning, frequent suicidal behaviour and high rates of co-morbidity. Fortunately, PTSD is responsive to pharmacotherapy and psychotherapy. The selective serotonin reuptake inhibitors (SSRIs) are the most studied medications for PTSD, with the largest number of double-blind, placebo-controlled trials. Of the SSRIs, sertraline, paroxetine and fluoxetine have been the most extensively studied, with sertraline and paroxetine being US FDA-approved for PTSD. These studies have demonstrated that SSRIs are effective in short-term trials (6-12 weeks). Furthermore, continuation and maintenance treatment for 6-12 months decreases relapse rates. Besides being the most studied and effective drugs for PTSD, SSRIs have a favourable adverse effect profile, making them the first-line treatment for PTSD.
If SSRIs are not tolerated or are ineffective, non-SSRIs should be considered. Serotonin-potentiating non-SSRIs, such as venlafaxine, nefazodone, trazodone and mirtazapine, have been evaluated in PTSD only in open-label and case studies. Because of their promising results and relatively good safety profile, they should be considered as second-line treatment. Monoamine oxidase inhibitors (MAOIs) and tricyclic antidepressants (TCAs) have both been evaluated in a small number of double-blind, placebo-controlled studies. The results have been inconsistent but promising. In the limited comparative studies, MAOIs appeared superior to TCAs but patients continued to have residual symptoms. These drugs have significant adverse effects, such as cardiovascular complications, and safety issues, such as ease of overdose. Therefore, TCAs and MAOIs should be considered as third-line treatment. Anticonvulsants have been evaluated in PTSD in open-label studies and results have been positive for carbamazepine, valproic acid, topiramate and gabapentin. A small double-blind, placebo-controlled study demonstrated efficacy of lamotrigine for PTSD. Anticonvulsants should be considered where co-morbidity of bipolar disorder exists, and where impulsivity and anger predominate. Bupropion (amfebutamone), a predominantly noradrenergic reuptake inhibitor, was ineffective in PTSD in an open-label study. Benzodiazepines were ineffective in a double-blind, placebo-controlled study despite encouraging case reports. They should be avoided or used only short term because of potential depressogenic effects, and the possibility that they may promote or worsen PTSD. Buspirone, a non-benzodiazepine anxiolytic, was found to be effective in PTSD only in open-label studies. 
Recently, atypical antipsychotics were effective as monotherapy and as an augmenter of SSRIs in open-label/case studies and small double-blind, placebo-controlled trials; atypical antipsychotics should be considered in PTSD where paranoia or flashbacks are prominent and in potentiating SSRIs in refractory cases. abstract_id: PUBMED:11444761 Pharmacology of rapid-onset antidepressant treatment strategies. Although selective serotonin reuptake inhibitors (SSRIs) block serotonin (5-HT) reuptake rapidly, their therapeutic action is delayed. The increase in synaptic 5-HT activates feedback mechanisms mediated by 5-HT1A (cell body) and 5-HT1B (terminal) autoreceptors, which, respectively, reduce the firing of 5-HT neurons and decrease the amount of 5-HT released per action potential, resulting in attenuated 5-HT neurotransmission. Long-term treatment desensitizes the inhibitory 5-HT1 autoreceptors, and 5-HT neurotransmission is enhanced. The time course of these events is similar to the delay of clinical action. The addition of pindolol, which blocks 5-HT1A receptors, to SSRI treatment decouples the feedback inhibition of 5-HT neuron firing and accelerates and enhances the antidepressant response. The neuronal circuitry of the 5-HT and norepinephrine (NE) systems and their connections to forebrain areas believed to be involved in depression have been dissected. The firing of 5-HT neurons in the raphe nuclei is driven, at least partly, by alpha1-adrenoceptor-mediated excitatory inputs from NE neurons. Inhibitory alpha2-adrenoceptors on the NE neuroterminals form part of a feedback control mechanism. Mirtazapine, an antagonist at alpha2-adrenoceptors, does not enhance 5-HT neurotransmission directly but disinhibits the NE activation of 5-HT neurons and thereby increases 5-HT neurotransmission by a mechanism that does not require a time-dependent desensitization of receptors.
These neurobiological phenomena may underlie the apparently faster onset of action of mirtazapine compared with the SSRIs. abstract_id: PUBMED:12404667 Onset of action of antidepressants: results of different analyses. Because the value of antidepressants is hampered by their delay in onset of action, considerable attention has been focused on developing a drug that acts more rapidly. However, although specific studies are now ongoing, there have been no peer-reviewed prospective onset of action trials published in the literature to date. Some data are currently available from post-hoc pooled analyses, and numerous methods have been developed for evaluating the onset of action; these include the time to response, the time to onset of therapeutic effect, pattern analysis and survival analyses. Such an analysis of four large-scale, double-blind studies has provided evidence for an earlier onset of action with mirtazapine than with the SSRIs (fluoxetine, paroxetine and citalopram). Significant differences were seen between mirtazapine and the SSRIs after 1 week of treatment. This effect was consistent across the four different methodologies and appears to be due to a specific antidepressant effect rather than an early effect on, for example, sleep. These findings await confirmation from specifically designed prospective onset of action studies. abstract_id: PUBMED:23796469 New onset alcohol dependence linked to treatment with selective serotonin reuptake inhibitors. Background: Genetic and environmental factors influence the development of alcohol dependence, and alcohol dependence increases the risk of developing Major Depressive Disorder (MDD), and vice versa. Amongst antidepressants, the selective serotonin reuptake inhibitors (SSRIs) are likely the most frequently prescribed for MDD. However, findings on the role of SSRIs in alleviating alcoholism are conflicting.
Case Description: A review of the literature is highlighted with a case of a middle-aged lady with new-onset alcohol dependence syndrome after commencement of an SSRI, which resolved following discontinuation of the SSRI and the introduction of mirtazapine. Discussion: The serotonin transporter gene has been linked to excessive drinking, early-onset problem drinking, alcohol dependence, anxiety and impulsiveness. While the evidence for antidepressant use appears consistent in alleviating depressive symptoms in patients with comorbid alcohol dependence and depression, some groups of patients may show an increase in alcohol consumption. Alternatively, there is a series of studies suggesting that antagonism of 5-HT3 receptors can lead to diminished cravings for alcohol. This case highlights the need for further research into the effects of SSRIs on alcohol consumption in those with and without previous alcohol dependence syndromes. It also indicates a need to monitor changes in alcohol consumption and behaviour while on SSRIs. abstract_id: PUBMED:12650947 The pharmacology of putative early-onset antidepressant strategies. Depression is a serious and burdensome illness. Although selective serotonin reuptake inhibitors (SSRIs) have improved the safety and tolerability of antidepressant treatment, the delay in the onset of action has not been improved. There is evidence to suggest that the delay in onset of therapeutic activity is a function of the drugs, rather than the disease. This suggests that research into the biological characteristics of depression and its treatments may yield faster-acting antidepressants. Emerging evidence from clinical studies with mirtazapine, venlafaxine and SSRI augmentation with pindolol suggests that these treatments may relieve depressive symptoms more rapidly than SSRIs.
The putative mechanism of action of faster-acting antidepressant strategies presented here purports that conventional antidepressants acutely increase the availability of serotonin (5-hydroxytryptamine, 5-HT) or noradrenaline (NA), preferentially at their cell body level, which triggers negative feedback mechanisms. After continued stimulation, these feedback mechanisms become desensitised and the enhanced 5-HT availability is able to enhance 5-HT and/or NA neurotransmission. Putative fast-onset antidepressants, on the other hand, may uncouple such feedback control mechanisms and enhance 5-HT and/or NA neurotransmission more rapidly. Further studies are required to characterise in detail the interactions between NA and 5-HT systems and to definitively establish the early onset of candidate antidepressants such as mirtazapine, venlafaxine and pindolol augmentation. abstract_id: PUBMED:10362442 Efficacy of SSRIs and newer antidepressants in severe depression: comparison with TCAs. Background: The significant morbidity and mortality associated with severe depression and its psychotic or melancholic subtypes necessitate effective and well-tolerated therapy. This review evaluates antidepressant treatments for patients with severe depression. Data Sources: Comparative clinical trials conducted on patients with severe depression were found by an English-language MEDLINE search (1985 to present). Additional studies were identified in article bibliographies. Search terms included depressive disorders, depression and severe, hospitalized, melancholic or melancholia, psychotic, and endogenous. Study Findings: Evidence for efficacy of SSRIs in severe or melancholic depression comes from a small but growing number of controlled studies with adequate samples, as well as meta-analyses and retrospective subgroup analysis of premarketing trials. 
In studies that defined response as a 50% or greater reduction in Hamilton Rating Scale for Depression (HAM-D) scores, response rates ranged from 53% to 64% for SSRIs and 43% to 70% for TCAs. In separate trials on severe depression, venlafaxine and mirtazapine were both more effective than placebo and an active comparator. Nefazodone and bupropion were each found to be more effective than placebo in studies of severe depression. Venlafaxine and mirtazapine have been found to be more effective than fluoxetine. Conclusion: SSRIs and TCAs are comparably effective for the treatment of severe or melancholic depression. SSRIs and other newer agents appear to be better tolerated than TCAs, specifically lacking adverse anticholinergic and cardiovascular effects that may limit the use of TCAs. Emerging data with venlafaxine and mirtazapine in severely depressed patients with or without melancholia support the efficacy of these treatments. Nefazodone and bupropion were found to be effective in hospitalized depressed patients. Electroconvulsive therapy (ECT) or combined antidepressant therapy may be useful in some patients with severe depression. Patients with severe psychotic depression may respond better to an antipsychotic-antidepressant combination. abstract_id: PUBMED:35255222 Psychopharmacology in the Pediatric Oncology and Bone Marrow Transplant Units: Antidepressant Treatment. Objectives: The aim of this study was to characterize the clinical profiles, tolerability, and efficacy of two groups of antidepressants, selective serotonin reuptake inhibitors (SSRIs), and the atypical antidepressant, mirtazapine, in children and adolescents treated in a large pediatric Hematology-Oncology center. Methods: A review of computerized medical charts of 32 pediatric patients with cancer, from December 2011 to April 2020, was conducted. Efficacy and tolerability of antidepressant medications were retrospectively analyzed. 
The Clinical Global Impressions-Severity (CGI-S) and Clinical Global Impressions-Improvement (CGI-I) Scales were used to evaluate psychiatric symptom severity before and following treatment, while the data on adverse events and drug-drug interactions were retrieved from the computerized medical records. Results: Thirty-two children and adolescents with cancer, 2-21 years of age (mean 14.1 ± 4.6 years), were treated with antidepressants. Fourteen patients (44%) received mirtazapine, whereas 18 patients (56%) received SSRIs: sertraline (25%), escitalopram (25%), or fluoxetine (6%). Treatment choice was dictated either by physician preference or informed by potential drug-drug interactions. The most common psychiatric diagnoses were major depressive disorders (47%), anxiety disorders (19%), and medication-induced psychiatric disorders (19%). The most common psychiatric-medical symptoms were depressed mood (94%) and anxiety (62%). CGI-S improved significantly (p < 0.05) between pretreatment and on-treatment assessments, with no statistically significant difference between SSRI- and mirtazapine-treated patients. CGI-I scores at reassessment indicated improvement in most patients (84%). Adverse events of treatment were mild in all patients. Conclusions: The antidepressants used in this study, SSRIs and mirtazapine, were effective and well tolerated in children and adolescents with cancer and psychiatric comorbidities. Given the high rates of depression and anxiety in children with cancer, large-scale, multisite, prospective clinical trials of antidepressants are warranted. Answer: The evidence suggests that mirtazapine may have a more rapid onset of action than SSRIs. A study analyzing double-blind studies comparing mirtazapine with SSRIs found that the proportion of responders with onset of persistent improvement in the first week was greater for mirtazapine (13%) than for SSRIs (6%), indicating a possibility of more rapid onset for mirtazapine (PUBMED:11411818).
Additionally, in a post hoc analysis of the GUNDAM study focusing on late-life depression, mirtazapine showed faster improvement in HAM-D total scores and higher improvement in insomnia and appetite compared to SSRIs (PUBMED:37700038). However, a review of third-generation antidepressants, which includes mirtazapine, did not find convincing differences between these agents and SSRIs in terms of overall efficacy, relapse prevention, and speed of onset, although they were found to be of equivalent safety to SSRIs (PUBMED:11735614). Another study on the pharmacology of rapid-onset antidepressant treatment strategies suggests that mirtazapine may have a faster onset of action compared with SSRIs due to its mechanism of action, which does not require time-dependent desensitization of receptors (PUBMED:11444761). Further analysis of four large-scale, double-blind studies provided evidence for an earlier onset of action with mirtazapine than with SSRIs, with significant differences observed after just one week of treatment (PUBMED:12404667). This finding is supported by the pharmacology of putative early-onset antidepressant strategies, which suggests that mirtazapine may enhance neurotransmission more rapidly than SSRIs (PUBMED:12650947). In summary, while some studies and analyses suggest that mirtazapine may have a more rapid onset of action than SSRIs, the evidence is not conclusive, and further research, particularly prospective onset of action studies, is needed to confirm these findings.
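The week-1 responder comparison cited above (38/298 mirtazapine vs. 18/285 SSRI early responders, chi2 = 6.95, df = 1) can be re-derived from the raw counts. The following is a minimal illustrative sketch, not code from any of the studies; it computes a standard Pearson chi-square statistic on the 2x2 responder table:

```python
# Pearson chi-square (no continuity correction) for a 2x2 table
# [[a, b], [c, d]]; counts below are taken from the abstract text.

def chi_square_2x2(a, b, c, d):
    """Return the Pearson chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in [(a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)]:
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# Mirtazapine: 38 early responders of 298; SSRIs: 18 of 285.
chi2 = chi_square_2x2(38, 298 - 38, 18, 285 - 18)
print(round(chi2, 2))  # 6.95, matching the reported statistic
```

With 1 degree of freedom, a statistic of 6.95 corresponds to p of roughly .008, consistent with the p = .008 reported in the abstract.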
Instruction: General practitioners' altered preferences for private practice vs. salaried positions: a consequence of proposed policy regulations? Abstracts: abstract_id: PUBMED:25890250 General practitioners' altered preferences for private practice vs. salaried positions: a consequence of proposed policy regulations? Background: General practitioners (GPs) in most high-income countries have a history of being independent private providers with much autonomy. While GPs remain private providers, their autonomous position appears to be challenged by increased policy regulations. This paper examines the extent to which GPs' preferences for private practice vs. salaried contracts changed in a period where a new health care reform, involving proposed increased regulations of the GPs, was introduced. Methods: We use data collected from Norwegian GPs through structured online questionnaires in December 2009 and May 2012. Results: We find that the proportion of GPs who prefer private practice (i.e. the default contract for GPs in Norway) decreases from 52% to 36% in the period from 2009 to 2012. While 67% of the GPs who worked in private practice preferred this type of contract in 2009, the proportion had dropped by 20 percentage points in 2012. Salaried contracts are preferred by GPs who are young, work in a small municipality, have more patients listed than they prefer, work more hours per week than they prefer, have relatively low income or few patients listed. Conclusion: We find that GPs' preferences for private practice vs. salaried positions have changed substantially in the last few years, with a significant shift towards salaried contracts. With the proportions of GPs remaining fairly similar across private practice and salaried positions, there is an increasing discrepancy between GPs' current contract and their preferred one. abstract_id: PUBMED:27585603 Private or salaried practice: how do young general practitioners make their career choice? 
A qualitative study. Background: Young French postgraduates in general practice increasingly prefer salaried practice to private practice in spite of the financial incentives offered by the French government or local communities to encourage the latter. This study aimed to explore the determinants of the choice between private and salaried practice among young general practitioners. Methods: A qualitative study was conducted in the South West of France. Semi-structured interviews of young general practitioners were audio-recorded until data saturation. Recordings were transcribed and then analyzed according to Grounded Theory by three researchers working independently. Results: Sixteen general practitioners participated in this study. For salaried and private doctors, the main factors governing their choice were occupational factors: working conditions, the need for a varied scope of practice, the quality of the doctor-patient relationship or career flexibility. Other factors such as postgraduate training, having worked as a locum or self-interest were also decisive. Young general practitioners all expected a work-life balance. The fee-for-service scheme or home visits may have discouraged young general practitioners from choosing private practice. Conclusions: National health policies should increase the attractiveness of ambulatory general practice by promoting the diversification of modes of remuneration and encouraging group practice in multidisciplinary medical homes and community health centers.
Setting: General practice in Norway. Intervention: Participants were asked whether their current practice was based on (1) private practice in which the GP holds office space, equipment, and employs the staff, (2) private practice in which the GPs hire office space, equipment, or staff from the municipality, (3) salary with bonus arrangements, or (4) salary without bonus arrangement. Furthermore, they were asked which of these options they would prefer if they could choose. Subjects: GPs in Norway (n = 3270). Main Outcome Measures: Proportion of GPs who preferred private practice. Results: Responses were obtained from 1304 GPs (40%). Among these, 75% were currently in private practice, 18% in private practice with some services provided by the municipality, 4% had a fixed salary plus a proportion of service fees, whereas 3% had salary only. Corresponding figures for the preferred option were 52%, 26%, 16%, and 6%, respectively. In multivariate logistic regression analysis, size of municipality, specialty attainment, and number of patients listed were associated with preference for private practice. Conclusion: The majority of Norwegian GPs had and preferred private practice, but a significant minority would prefer a salaried position. The current private practice based system in Norway seems best suited to the preferences of experienced GPs in urban communities. abstract_id: PUBMED:22164554 General practitioners in private practice in the city of Quebec and the application of population responsibility Objectives: The aim of this study is to examine the way in which general practitioners (GPs) in private practice view the idea of population responsibility, proposed by the Quebec Health and Social Services Ministry in 2004. We then look at how these views impact primary health care practice in Quebec City. 
Method: A qualitative exploratory approach was used; 18 semi-structured interviews were performed with private practice GPs, administrators and health professionals from community health and social services centres (CSSS). A thematic content analysis of the data was performed based on St-Pierre et al.'s model and grounded in Giddens' structuration theory. Results: Because neither the population meant to be served nor the underlying responsibility is perceived the same way by the GPs and the CSSS health professionals, the respective practices do not always converge. Consequently, methods of communication, offers of services and management of resources impact on the operationalization of the concept of population responsibility, which has to be negotiated. Discussion: In these negotiations, because physicians are the ones solicited by the other partners, the application of population responsibility increasingly becomes an opportunity to develop a medically oriented primary care organization. abstract_id: PUBMED:11843012 General dental practitioners' beliefs on the perceived effects of and their preferences for remuneration mechanisms. Objective: To identify GDPs' preferences for differing remuneration mechanisms and their beliefs on the effect of the mechanisms in care provision. Design: Postal questionnaire survey of 300 GDPs holding an NHS contract with a London Health Authority. Results: GDPs perceive that remuneration mechanisms are important in determining the provision of care but not overall disease levels. There were differences in the preferred remuneration mechanisms when working under the NHS compared with the non-NHS sector. When providing care under the NHS, either the current remuneration system or a salaried-plus-bonus arrangement would be the preferred choice, while for non-NHS care a fee-per-item mechanism is preferred. A fee-per-item arrangement was the preferred choice of younger general practitioners compared with older practitioners.
Females showed a greater preference than males for a salaried-with-bonus arrangement. Conclusions: If policy makers are to use remuneration mechanisms to influence the provision of care effectively, it is important to understand the beliefs that care providers hold about the various mechanisms and how they would respond to changes in the system. abstract_id: PUBMED:30665065 A comparison of professional and private time preferences of General Practitioners. In a perfect agency relationship, doctors consider all information and select the patient's 'utility maximising' option given the patient's preferences. The patient's time preferences are important as treatments vary in the timing and length of their benefits. However, doctors often do not have full information on patients' preferences and may apply their own preferences. This has generated empirical interest in estimating doctors' time preferences. However, these studies generally elicit doctors' private preferences (preferences for their own health) rather than professional preferences (preferences for the patient). We hypothesise that private and professional preferences may differ. Professional time preferences may be 'taught' in medical school or learned through repeated interactions with patients. If preferences differ, then estimates of doctors' private preferences are less informative for medical decision-making. This study compares private and professional time preferences for health in a national sample of General Practitioners, using a between-sample design. Time discounting is explored using exponential and quasi-hyperbolic models. We elicit time preferences using multiple price lists. We find no significant difference between the time preferences for the self and for the patient. This result holds for axiomatic discounting classification and maximum likelihood estimates. We do not find evidence of present-bias.
There is a high proportion of increasingly impatient GPs, potentially implying a maximum 'willingness to wait' for treatment benefits. We find no evidence that GPs value the health state differently for themselves than for a patient. These results suggest that we can use estimates of private preferences from doctors to inform medical decision-making. abstract_id: PUBMED:31901584 Adherence to the Malaysian clinical practice guideline for depression by general practitioners in private practice in Penang. Introduction: Most primary care in Malaysia is provided by general practitioners in private practice. To date, little is known about how Malaysian General Practitioners (GPs) manage patients with depression. We surveyed privately practising primary care physicians in the state of Penang, Malaysia, in relation to their experience of the Malaysian Clinical Practice Guideline (CPG) in Major Depressive Disorder, their current practice and perceived barriers in managing depression effectively. Material And Methods: A questionnaire based on the study aims and previous literature was developed by the authors and mailed to all currently registered GPs in private clinics in Penang. Survey responses were analysed using SPSS version 21. Results: From a total of 386 questionnaires distributed, 112 (29%) were returned. Half of the respondents were unaware of the existence of any CPG for depression. One quarter reported not managing depression at all, while one third used anxiolytic monotherapy in moderate-severe depression. Almost 75% of respondents reported making referrals to specialist psychiatric services for moderate-severe depression. Time constraints, patient non-adherence and a lack of depression management skills were perceived as the main barriers to depression care. Conclusions: Our findings highlight the need to engage privately practising primary care physicians in Malaysia to improve their skills in the management of depression.
Future revisions of the Malaysian Depression CPG should directly involve more GPs from private practices at the planning, development and implementation stages, in order to increase its impact. abstract_id: PUBMED:37271572 General practitioners' perceptions of the provision of direct-acting antivirals for hepatitis C within Australian private general practice: an exploratory qualitative study. Background: There is a need to increase the involvement of Australian general practitioners (GPs) working in private practice to realise the potential of direct-acting antiviral (DAA) treatments for people with the hepatitis C virus. Methods: Semi-structured interviews were conducted in 2018 with seven GPs and two practice nurses working in private general practice to elicit the experiences and perceptions of their involvement in providing care for patients with hepatitis C virus in this setting. The interviews were recorded, transcribed and thematically analysed to inform interventions to maximise the provision of DAA in private general practice. Results: Participants described individual GPs purposely limiting their scope of clinical practice (SOCP) and expressed an expectation that DAA provision would not be included in every GP's SOCP. When GPs delineate their SOCP, their confidence to competently provide quality health care to their patients and GPs' professional special interests are important considerations. Conclusion: Providing DAA training, skill development, support and resources to GPs is necessary, but may not ensure that individual private GPs will provide this care. Where GPs do not include DAA in their SOCP, care pathways need to be developed for patients who will benefit from DAA, including GP-to-GP referral. These findings may be applicable to other areas of unmet need that rely on GPs including provision of care in their SOCP.
abstract_id: PUBMED:32175039 Managing adolescent pregnancy: The unique roles and challenges of private general practitioners in Malaysia. Introduction: Managing adolescent pregnancy in the primary care setting is complex, as it requires doctors to navigate through a combination of medical, social, financial and legal needs. Objective: This study explores the perspectives of private general practitioners on their roles and challenges in managing adolescent pregnancy in Malaysia. Methods: Nineteen private general practitioners in Selangor and Kuala Lumpur participated in in-depth interviews in 2015. A topic guide was used for interview navigation. Participants were asked to discuss their experiences and approaches in managing pregnant adolescents. We used purposive sampling to recruit consenting private general practitioners who had experience in managing adolescent pregnancy. The verbatim transcripts of the audio-recorded interviews were analyzed using thematic analysis. Data reached saturation at the nineteenth in-depth interview. Results: Two themes emerged. Under the theme 'inadvertent advocator,' participants described their tasks with regards to building trust, calming angry parents and delivering comprehensive counseling and care related to the sexual and reproductive health of adolescents, including requests for abortions. Theme two, 'challenges of private general practitioners,' refers mainly to personal and religious conflicts arising from a request for an abortion and deficiencies in support and multidisciplinary integration within their practice settings. Conclusion: General practitioners practicing in the private sector identify themselves as active players in supporting pregnant adolescents but face many challenges arising from the personal, religious, professional and community levels. Addressing these challenges is important for optimal care delivery to pregnant adolescents in this community. 
abstract_id: PUBMED:31714120 Exploring the beliefs and attitudes of private general practitioners towards national health insurance in Cape Town, South Africa. Background: Private general practitioner (GP) participation in the national health insurance (NHI) is necessary to address doctor shortages and achieve universal health coverage. An in-depth understanding of GP's views on the NHI is needed to inform implementation strategies. Aim: To explore the beliefs and attitudes of GPs towards the proposed NHI system. Setting: Cape Town, South Africa. Methods: This was a descriptive, exploratory, qualitative study using semi-structured interviews. Eleven GPs were recruited using purposeful snowball sampling from different practices and communities. Thematic data analysis was conducted using the framework approach and Atlas.ti software. Results: Although GPs saw the need for NHI, they felt that the government was antagonistic towards the private sector and had not engaged in a dialogue. They were wary of integration into a nurse-led primary care system and of being coerced. They felt that the public sector lacked the necessary financial and administrative capacity, and were concerned about the level, efficiency and sustainability of reimbursement, and the criteria to be used to accredit practices. General practitioners anticipated that the NHI would favour multidisciplinary teams and group practices. They also had mixed ideas about the impact on practice with some expecting higher workloads, stress and costs with reduced quality of care, while others saw more comprehensive care, better incomes and increased patient satisfaction. Conclusions: While GPs are essential for the success of the NHI, there are many concerns regarding government policy, plans for implementation and the consequences for GP practice. Many of the concerns expressed could be tackled by greater policy dialogue and clarification. 
Answer: The shift in general practitioners' (GPs) preferences from private practice to salaried positions appears to be influenced by proposed policy regulations and other factors. In Norway, the introduction of a new health care reform involving increased regulations of GPs coincided with a decrease in the proportion of GPs who preferred private practice, dropping from 52% to 36% between 2009 and 2012 (PUBMED:25890250). This suggests that increased policy regulations may have contributed to the change in preferences. Similarly, in France, young GPs increasingly prefer salaried practice over private practice despite financial incentives to encourage private practice. Factors influencing this choice include working conditions, the need for a varied scope of practice, the quality of the doctor-patient relationship, career flexibility, postgraduate training, and work-life balance. The fee-for-service scheme and home visits may also discourage young GPs from choosing private practice (PUBMED:27585603). In Quebec, the concept of population responsibility proposed by the Quebec Health and Social Services Ministry has impacted primary health care practice, with GPs and community health professionals having differing perceptions of the population to be served and the responsibility involved. This has led to negotiations that increasingly develop a medically oriented primary care organization (PUBMED:22164554). Furthermore, general dental practitioners (GDPs) have shown different preferences for remuneration mechanisms when working under the NHS compared to the non-NHS sector, indicating that remuneration mechanisms are important in determining the provision of care (PUBMED:11843012). In Australia, GPs working in private practice have expressed that individual GPs limit their scope of clinical practice (SOCP), and not all include the provision of direct-acting antiviral (DAA) treatments for hepatitis C in their SOCP. 
This suggests that GPs' preferences for their practice scope can affect the services they provide (PUBMED:37271572). Overall, these studies indicate that GPs' altered preferences for private practice versus salaried positions are indeed a consequence of proposed policy regulations, as well as other factors such as working conditions, remuneration mechanisms, and the scope of clinical practice.
Instruction: Does acupuncture activate endogenous analgesia in chronic whiplash-associated disorders? Abstracts: abstract_id: PUBMED:22968837 Does acupuncture activate endogenous analgesia in chronic whiplash-associated disorders? A randomized crossover trial. Background: Many patients with chronic pain, including those with chronic whiplash-associated disorders (WAD), show features of central sensitization. Randomized trials examining whether treatments are able to influence the process of central sensitization in patients with chronic WAD are emerging. Therefore, the present study aimed at examining whether acupuncture results in activation of endogenous analgesia and relief in symptoms in patients with chronic WAD. Methods: In this randomized crossover pilot trial with blinded assessors, each patient (n = 39) received two treatment sessions of identical duration, with acupuncture and relaxation therapy randomly crossed over in visit 2. Primary outcome measurement included immediate activation of endogenous analgesia, i.e., pressure pain sensitivity and conditioned pain modulation. Secondary outcome measurements included pain relief and reduced disability level. Results: Local pressure pain sensitivity at baseline and during conditioned pain modulation decreased significantly more following acupuncture compared with relaxation (time × group interactions: p < 0.001), both in the neck and at a site distinct from the painful region. When comparing the effects of acupuncture versus relaxation, no differences were observed on conditioned pain modulation, temporal summation of pressure pain, neck disability or symptom severity (all p-values > 0.05). Conclusion: It was shown that one session of acupuncture treatment results in acute improvements in pressure pain sensitivity in the neck and calf of patients with chronic WAD. Acupuncture had no effect on conditioned pain modulation or temporal summation of pressure pain.
Both acupuncture and relaxation appear to be well-tolerated treatments for people with chronic WAD. These findings suggest that acupuncture treatment activates endogenous analgesia in patients with chronic WAD. abstract_id: PUBMED:10536802 Whiplash-associated disorders Whiplash-associated disorders (WAD) represent a class of clinical complaints which commonly result from rear-end car accidents. An automobile collision can generate major forces which are transferred to the neck by an acceleration-deceleration mechanism (whiplash), resulting in bony or soft-tissue injuries (whiplash injury). Incidence of WAD is estimated to be 0.1 to 3.8/1000/year; WAD cost $29 billion a year in the USA. They can be classified clinically into 5 degrees of severity, namely WAD grades 0 to IV. Signs and symptoms typically crescendo during the first few days after an accident. Pathological findings (especially of musculo-skeletal or neurological types) must often be sought actively and should be documented at the earliest stage. Prevention of possible chronicity is the most important goal in clinical management of WAD. WAD grade IV patients are treated in the way their fracture or dislocation demands. Therapy of WAD grades I to III has three main aspects: non-narcotic analgesics, early active mobilisation (to the extent possible consistent with pain) and education of the patient. Soft collars should not be used (or only temporarily and sparingly). Most patients with WAD grades I-III feel well again relatively soon. Symptoms and signs that persist for longer than two months are important warning signs for imminent chronicity, which occurs at rates of 14-42%. In such cases, an interdisciplinary approach is recommended. Risk factors are accident severity, head position at the time of accident, age and pretraumatic existence of headache. Patients with chronic complaints can develop additional psychological and cognitive problems, which are caused by, and not the cause of, their chronic disorder.
Therapy of chronic whiplash-associated disorders involves all the problems inherent in therapies of chronic pain. There are many therapeutic concepts, but little evidence that anything helps. Prevention of whiplash injuries is therefore very important in view of the lack of powerful treatment options. Although there is a substantial body of scientific literature about WAD, many unanswered questions remain. In particular the most important questions (how can patients with acute and chronic disorders be helped best) have no clear answer yet. Furthermore, there are many opinions and prejudices (especially concerning psycho-social factors of WAD) which have no scientific basis. Therefore, an intensive exchange of information between health care professionals, patients and the general public appears to be very important. abstract_id: PUBMED:26606004 Acupuncture-Analgesia Following a Single Treatment Session in Chronic Whiplash is Unrelated to Autonomic Nervous System Changes: A Randomized Cross-over Trial. Background: An acupuncture treatment can reduce pain sensitivity in patients with chronic whiplash associated disorders (WAD). But it has been hypothesized that many of the experimental results in acupuncture research could be interpreted as stress-induced analgesia. Objective: The present study aimed at examining whether acupuncture has an effect on the autonomic nervous system response in patients with chronic WAD and if this response is related to the pain inhibition after an acupuncture session. Study Design: Randomized crossover trial with blinded assessor. Setting: Two private practices. Methods: Thirty-nine patients with chronic WAD received 2 treatment sessions of identical duration, with acupuncture and relaxation therapy randomly crossed over in the 2 visits. 
The primary outcome measurement was the recording of autonomic nervous system parameters (heart rate, skin conductance, and heart rate variability) during the administration of experimental pain. Endogenous analgesia was the secondary outcome. Results: Following one acupuncture treatment session, there was a significant change in 2 parameters: the heart rate was slightly reduced and the skin conductance was raised. Comparing the effects of acupuncture and relaxation, no differences were found with respect to the change in any of the autonomic parameters. Further, the reduction in pain sensitivity in response to acupuncture treatment was unrelated to any of the changes in autonomic measurements. Limitations: The results were observed after only one session of acupuncture. Conclusion: In patients with chronic WAD, in response to a single treatment session, no acupuncture-specific effects on the autonomic response to pain assessment were present, and the analgesia after one session of acupuncture is not caused by stress-induced analgesia but is more likely the result of an acupuncture-specific reaction. abstract_id: PUBMED:21494196 A randomized trial comparing acupuncture and simulated acupuncture for subacute and chronic whiplash. Study Design: A randomized controlled trial with 3- and 6-month follow-up. Objective: To compare the effectiveness of acupuncture with simulated acupuncture in patients with subacute and chronic whiplash-associated disorders. Summary Of Background Data: Acupuncture is widely used for the treatment of neck and other musculoskeletal pain, and there is some evidence supporting its effectiveness for short-term pain relief. The effectiveness of acupuncture in the treatment of whiplash-associated disorders is not clear.
Methods: A total of 124 patients between 18 and 65 years with chronic (85%) or subacute whiplash-associated disorders (Grade I or II) were randomly allocated to real or simulated electroacupuncture treatment for 12 sessions during a 6-week period. Both treatments involved skin penetration with acupuncture needles and were provided by a single university-trained acupuncturist in a University Clinic in Sydney, Australia. Primary outcome measures were pain intensity (10-cm visual analog scale), disability (Neck Disability Index), and health-related quality of life (SF-36). Secondary outcomes were patient-specific activity scales, and the McGill Pain Rating Index. Results: Mean initial pain intensity for all participants was 5.6 cm. Participants receiving the real electroacupuncture treatment had significantly greater reduction in pain intensity at 3 and 6 months, 0.9 cm (P = 0.05) and 1.3 cm (P = 0.007), respectively, in comparison to the sham electro-acupuncture group. After adjustment for baseline status, there was no significant reduction in disability, or improvement in health-related quality of life. There was an improvement in the activity scales of a similar size to the reduction in pain, but no difference in the McGill Index. Conclusion: Real electroacupuncture was associated with a significant reduction in pain intensity over at least 6 months. This reduction was probably not clinically significant. There was no improvement in disability or quality of life. abstract_id: PUBMED:9051168 Acupuncture treatment for balance disorders following whiplash injury. In this study, by means of computerized static posturography, we evaluated the postural changes after acupuncture treatment in a group of 15 patients with balance disorders caused by cervical torsion due to Whiplash Injury (WI). The acupuncture treatment consisted of 3 sessions (one weekly session for 3 weeks) during which the acupuncture points Bladder 10 (Bl.10) and Gall Bladder 20 (G.B.20). 
These points were stimulated by piercing with needles and manipulating the needles for 20 seconds. Each patient underwent posturographic evaluations before and just after each session of acupuncture. The posturographic tests were performed with open eyes (OE), closed eyes (CE) and closed eyes with retroflexed head (CER). As a control group, we used 17 patients complaining of the same symptoms as the study group due to WI, but treated with drugs (NSAIDs and muscle relaxants) and physiotherapy only. The patients of the control group also underwent posturographic tests once a week for three weeks. We observed a significant difference between the two groups: in the study group, the CER length of the statokinesigram measured just before each acupuncture session was reduced, as was the frequency of oscillations (FFT) on the sagittal plane in CER, whereas in the control group we observed a progressive increase in these values. The high percentage of positive results in our WI patients leads us to advocate the therapeutic efficacy of acupuncture for balance disorders due to cervical pathology, where it can be associated with, or be a valid alternative to, pharmacological treatment.
The control group consisted of 25 patients complaining of the same symptoms as the study group due to whiplash injury but treated with nonsteroidal anti-inflammatory drugs and muscle relaxants or with physiotherapy only. The data presented in this study revealed a considerable difference between the two groups as regards the reduction of the CE and CER length of the statokinesigram just before each session of acupuncture; the frequency of oscillation on the sagittal plane in CER was reduced in the study group, whereas we noticed a progressive increase in its values in the control group. The high percentage of positive results in whiplash injury patients leads us to advocate acupuncture for balance disorders due to cervical pathology. abstract_id: PUBMED:26707074 Are manual therapies, passive physical modalities, or acupuncture effective for the management of patients with whiplash-associated disorders or neck pain and associated disorders? An update of the Bone and Joint Decade Task Force on Neck Pain and Its Associated Disorders by the OPTIMa collaboration. Background Context: In 2008, the Bone and Joint Decade 2000-2010 Task Force on Neck Pain and Its Associated Disorders (Neck Pain Task Force) found limited evidence on the effectiveness of manual therapies, passive physical modalities, or acupuncture for the management of whiplash-associated disorders (WAD) or neck pain and associated disorders (NAD). Purpose: This review aimed to update the findings of the Neck Pain Task Force, which examined the effectiveness of manual therapies, passive physical modalities, and acupuncture for the management of WAD or NAD. Study Design/Setting: This is a systematic review and best evidence synthesis. Sample: The sample includes randomized controlled trials, cohort studies, and case-control studies comparing manual therapies, passive physical modalities, or acupuncture with other interventions, placebo or sham, or no intervention.
Outcome Measures: The outcome measures were self-rated or functional recovery, pain intensity, health-related quality of life, psychological outcomes, or adverse events. Methods: We systematically searched five databases from 2000 to 2014. Random pairs of independent reviewers critically appraised eligible studies using the Scottish Intercollegiate Guidelines Network criteria. Studies with a low risk of bias were stratified by the intervention's stage of development (exploratory vs. evaluation) and synthesized following best evidence synthesis principles. Funding was provided by the Ministry of Finance. Results: We screened 8,551 citations, and 38 studies were relevant and 22 had a low risk of bias. Evidence from seven exploratory studies suggests that (1) for recent but not persistent NAD grades I-II, thoracic manipulation offers short-term benefits; (2) for persistent NAD grades I-II, technical parameters of cervical mobilization (eg, direction or site of manual contact) do not impact outcomes, whereas one session of cervical manipulation is similar to Kinesio Taping; and (3) for NAD grades I-II, strain-counterstrain treatment is no better than placebo. 
Evidence from 15 evaluation studies suggests that (1) for recent NAD grades I-II, cervical and thoracic manipulation provides no additional benefit to high-dose supervised exercises, and Swedish or clinical massage adds benefit to self-care advice; (2) for persistent NAD grades I-II, home-based cupping massage has similar outcomes to home-based muscle relaxation, low-level laser therapy (LLLT) does not offer benefits, Western acupuncture provides similar outcomes to non-penetrating placebo electroacupuncture, and needle acupuncture provides similar outcomes to sham-penetrating acupuncture; (3) for WAD grades I-II, needle electroacupuncture offers similar outcomes as simulated electroacupuncture; and (4) for recent NAD grades III, a semi-rigid cervical collar with rest and graded strengthening exercises lead to similar outcomes, and LLLT does not offer benefits. Conclusions: Our review adds new evidence to the Neck Pain Task Force and suggests that mobilization, manipulation, and clinical massage are effective interventions for the management of neck pain. It also suggests that electroacupuncture, strain-counterstrain, relaxation massage, and some passive physical modalities (heat, cold, diathermy, hydrotherapy, and ultrasound) are not effective and should not be used to manage neck pain. abstract_id: PUBMED:21528604 Preliminary study on syndrome differentiation types and acupuncture for whiplash injuries Whiplash injury is a relatively common injury of clinical acupuncture and moxibustion in the United States. The mechanism and clinical manifestation of whiplash injuries as well as its pathogenesis described in TCM were analyzed in this present article. The authors introduced the TCM syndrome differentiation of whiplash injuries and claimed that both the location and the stage of disease should be considered. 
For the different injury locations, meridian musculature differentiation was applied to classify the whiplash injuries as Taiyang, Yangming, Shaoyang and Shaoyin Meridian syndromes. Considering the duration of the injury, qi stagnation and blood stasis types were classified in the acute stage, and phlegm accumulation, insufficiency of the liver and kidney, and qi and blood deficiency types were classified during the chronic stage. An acupuncture protocol for whiplash injuries and typical cases were also introduced. abstract_id: PUBMED:18604250 Acupuncture anesthesia and analgesia for clinical acute pain in Japan. Acupuncture anesthesia has been practiced in China since about 1960. In Japan, Hyodo reported 30 cases of acupuncture anesthesia in 1972. However, from around 1980, the direction of acupuncture investigations turned from anesthesia to analgesia. Acupuncture analgesia is presently considered a way to activate the body's endogenous analgesic system. Recently, with the rise of acupuncture as one of the most well-known CAM therapies, acupuncture or moxibustion treatment has been reported for both acute and chronic pain. Even so, few clinical reports and original articles have been reported in Japan. This review illustrates how acupuncture is being used in Japan for acute pain such as surgical operations, postoperative pain (POP), neuropathic pain, pain associated with tooth extractions and after the extraction of impacted wisdom teeth.
A random sample of the collision participants (n=1,698) was provided with a secondary questionnaire. From the 974 (57.4%) participants who returned the questionnaire, 183 cases (intractable neck pain treated over a period of 6 months) and 333 controls (minor neck pain treated within 3 months) were selected. By comparing the cases and controls, the psychological factors associated with prolonged treatment for Whiplash-associated disorders were investigated. [Results] Among the 4,164 collision participants, 1,571 (37.7%) had experienced Whiplash-associated disorders. The prevalence in the general population was 1.2% (1.3% in males and 1.0% in females). Significant differences were observed between the cases and controls for all psychological factors, although both groups had similar distributions of age and gender. [Conclusion] Poor psychological factors were associated with prolonged treatment for whiplash-associated disorders in Japanese adults. These psychological factors should be considered during the treatment of whiplash-associated disorders. Answer: Yes, acupuncture does activate endogenous analgesia in chronic whiplash-associated disorders (WAD). A randomized crossover pilot trial with blinded assessors found that one session of acupuncture treatment resulted in acute improvements in pressure pain sensitivity in the neck and calf of patients with chronic WAD. The study showed that local pressure pain sensitivity at baseline and during conditioned pain modulation decreased significantly more following acupuncture compared with relaxation therapy. However, no differences were observed in conditioned pain modulation, temporal summation of pressure pain, neck disability, or symptom severity when comparing the effects of acupuncture versus relaxation. These findings suggest that acupuncture treatment activates endogenous analgesia in patients with chronic WAD (PUBMED:22968837).
Additionally, another randomized crossover trial examined whether acupuncture has an effect on the autonomic nervous system response in patients with chronic WAD and if this response is related to the pain inhibition after an acupuncture session. The study found that after one acupuncture treatment session, there was a significant change in heart rate and skin conductance, but no acupuncture-specific effects on the autonomic response to pain assessment were present. The reduction in pain sensitivity in response to acupuncture treatment was unrelated to any of the changes in autonomic measurements, indicating that the analgesia after one session of acupuncture is not caused by stress-induced analgesia but is more likely the result of an acupuncture-specific reaction (PUBMED:26606004).
Instruction: Skin responses to topical dehydroepiandrosterone: implications in antiageing treatment? Abstracts: abstract_id: PUBMED:20698844 Skin responses to topical dehydroepiandrosterone: implications in antiageing treatment? Background: Although low dehydroepiandrosterone (DHEA) is suspected to have a role in skin ageing, little information is available on the mechanisms potentially involved. Objectives: To obtain information on androgen receptor (AR) and procollagen expression in ageing skin during DHEA treatment. Methods: A placebo-controlled, randomized, prospective study was performed with 75 postmenopausal women aged 60-65 years. The women were treated twice daily for 13 weeks with 3.0 mL of placebo or 0.1%, 0.3%, 1% or 2% DHEA cream applied on the face, arms, back of hands, upper chest and right thigh where 2-mm biopsies were collected before and after treatment. Results: Although the overall structure of the epidermis was not significantly affected at the light microscopy level, AR expression examined by immunocytochemistry was markedly increased by DHEA treatment. In the dermis, the expression levels of procollagen 1 and 3 mRNA estimated by in situ hybridization were increased by DHEA treatment. In addition, the expression of heat shock protein (HSP) 47, a molecule believed to have chaperone-like functions potentially affecting procollagen biosynthesis, was also found by immunocytochemistry evaluation to be increased, especially at the two highest DHEA doses. Conclusion: These data suggest the possibility that topical DHEA could be used as an efficient and physiological antiageing skin agent. abstract_id: PUBMED:17326004 Sexual hormones in human skin. The skin locally synthesizes significant amounts of sexual hormones with intracrine or paracrine actions.
The local level of each sexual steroid depends upon the expression of each of the androgen- and estrogen-synthesizing enzymes in each cell type, with sebaceous glands and sweat glands being the major contributors. Sebocytes express very little of the key enzyme, cytochrome P450c17, necessary for synthesis of the androgenic prohormones dehydroepiandrosterone and androstenedione; however, these prohormones can be converted by sebocytes and sweat glands, and probably also by dermal papilla cells, into more potent androgens like testosterone and dihydrotestosterone. Five major enzymes are involved in the activation and deactivation of androgens in skin. Androgens affect several functions of human skin, such as sebaceous gland growth and differentiation, hair growth, epidermal barrier homeostasis and wound healing. Their effects are mediated by binding to the nuclear androgen receptor. Changes of isoenzyme and/or androgen receptor levels may have important implications in the development of hyperandrogenism and the associated skin diseases such as acne, seborrhoea, hirsutism and androgenetic alopecia. On the other hand, estrogens have been implicated in skin aging, pigmentation, hair growth, sebum production and skin cancer. Estrogens exert their actions through intracellular receptors or via cell surface receptors, which activate specific second messenger signaling pathways. Recent studies suggest specific site-related distribution of ERalpha and ERbeta in human skin. In contrast, progestins play no role in the pathogenesis of skin disorders. However, they play a major role in the treatment of hirsutism and acne vulgaris, where they are prescribed as components of estrogen-progestin combination pills and as anti-androgens. These combinations enhance gonadotropin suppression of ovarian androgen production. Estrogen-progestin treatment can reduce the need for shaving by half and arrest progression of hirsutism of various etiologies, but does not necessarily reverse it.
However, they reliably reduce acne. Cyproterone acetate and spironolactone are similarly effective as anti-androgens in reducing hirsutism, although there is wide variability in individual responses. abstract_id: PUBMED:20977120 Modern approach to topical treatment of aging skin. The main processes involved in skin aging are intrinsic and extrinsic. Apart from them, so-called stochastic aging connotes cell damage caused by metabolic processes, free radicals and cosmic irradiation. The clinical expression of intrinsic aging includes smooth, dry, and thinned skin with accentuated expression lines. It is inevitable and time dependent. Extrinsically aged skin shows signs of photodamage which include the appearance of wrinkles, pigmented lesions, actinic keratoses and patchy hypopigmentations. Therapeutic modalities include photoprotection with sunscreens that prevent sunburns and block ultraviolet irradiation. Other modalities include the use of retinoids, which regulate gene transcription with subsequent cellular differentiation and proliferation. The topical and peroral administration of network antioxidants, such as vitamins E and C, coenzyme Q10, alpha-lipoic acid and glutathione, enhances the antiaging effect. Other antioxidants, such as green tea, dehydroepiandrosterone, melatonin, selenium and resveratrol, also have antiaging and anti-inflammatory effects. Topical bleaching agents such as hydroquinone, kojic acid and azelaic acid can reduce signs of aging. Studies confirm the efficacy of these topical agents in combination with superficial and/or medium-depth or deep peeling agents for photodamaged skin treatment. Indications for the type of chemical peel according to various clinical diagnoses are given, as well as the advantages and disadvantages of different types of chemical peels. abstract_id: PUBMED:18242894 Effects of topical DHEA on aging skin: a pilot study. Objectives: Dehydroepiandrosterone (DHEA) is a steroid hormone involved in physiological aging.
When administered by the oral route, it has been shown to positively affect skin condition in aged people. The purpose of this pilot study was to observe the in vivo effects on skin aging of topical DHEA (1%). Methods: The DHEA formulation (1%) or the vehicle was topically applied for 4 months to facial and hand skin, in two groups of 20 post-menopausal women. The efficacy of the treatment was evaluated on the basis of clinical and biophysical signs linked to skin aging. Results: We showed that DHEA treatment increased sebum production, which was perceived rather positively by a menopausal population usually affected with a declining sebum level. Topical DHEA tends to improve skin brightness, to counteract the papery appearance of skin and epidermal atrophy, a characteristic feature of hormone-related skin aging. Topical DHEA could also act on skin processes related to wrinkles, but this result remains to be confirmed. Conclusions: This pilot study showed beneficial effects on skin characteristics that are rarely provided by topical treatments. It raised some interesting clues towards the treatment of skin aging. abstract_id: PUBMED:18547986 Effect on skin and hormonal hyperandrogenic manifestations of an oral estroprogestin association containing ethynilestradiol 30 mg and drospirenone 3 mg Aim: This study evaluated hormonal and skin effects in hyperandrogenic women of an oral estroprogestin (EP) association containing ethynilestradiol 30 mcg plus drospirenone 3 mg. Methods: Thirty-two women with signs and symptoms of hyperandrogenism (seborrhea, acne, increased hair) were enrolled; hormonal assessment (follicle-stimulating hormone [FSH]; luteinizing hormone [LH]; 17-hydroxyprogesterone [17OHP]; androstenedione [A]; testosterone [T]; dehydroepiandrosterone sulfate [DHEAS]; sex hormone binding globulin [SHBG]; free androgen index [FAI, Tx100/SHBG]) was performed before the start of treatment, and after 3 and 6 months of administration of EP.
The impact on seborrhea, acne, and hair pattern (Ferriman-Gallwey score) was assessed, and hydration, water transpiration, and homogeneity of the skin were evaluated by non-invasive techniques. Results: Treatment with this EP for 6 months significantly decreased circulating androgen levels (A, T, DHEAS) and the FAI, and increased SHBG levels, also reducing seborrhea, acne and hirsutism. Moreover, EE/DRSP increased hydration and improved the overall appearance of the skin surface (homogeneity). Conclusion: Treatment with EE 30 mcg+DRSP 3 mg improves the hormonal pattern and skin appearance in hyperandrogenic patients, with potential subsequent beneficial effects on the quality of life of these women. abstract_id: PUBMED:11595810 Intracrinology and the skin. The skin, the largest organ in the human body, is composed of a series of androgen-sensitive components that all express the steroidogenic enzymes required to transform dehydroepiandrosterone (DHEA) into dihydrotestosterone (DHT). In fact, in post-menopausal women, all sex steroids made in the skin are from adrenal steroid precursors, especially DHEA. Secretion of this precursor steroid by the adrenals decreases progressively from the age of 30 years to less than 50% of its maximal value at the age of 60 years. DHEA applied topically or by the oral route stimulates sebaceous gland activity, the changes observed being completely blocked in the rat by a pure antiandrogen while a pure antiestrogen has no significant effect, thus indicating a predominant or almost exclusive androgenic effect. In human skin, the enzyme that transforms DHEA into androstenedione is type 1 3beta-hydroxysteroid dehydrogenase (type 1 3beta-HSD) as revealed by RNase protection and immunocytochemistry. The conversion of androstenedione into testosterone is then catalyzed in the human skin by type 5 17beta-HSD. All the epidermal cells and cells of the sebaceous glands are labelled by type 5 17beta-HSD.
This enzyme is also present at a high level in the hair follicles. Type 1 is the 5alpha-reductase isoform responsible in human skin for the conversion of testosterone into DHT. In the vagina, on the other hand, DHEA exerts mainly an estrogenic effect, this effect having been demonstrated in the rat as well as in post-menopausal women. Furthermore, in experimental animals as well as in post-menopausal women, DHEA, at physiological doses, does not affect the endometrial epithelium, thus indicating the absence of DHEA-converting enzymes in this tissue, and avoiding the need for progestins when DHEA is used as hormone replacement therapy. abstract_id: PUBMED:15507105 Androgen action on human skin -- from basic research to clinical significance. Androgens affect several functions of the human skin, such as sebaceous gland growth and differentiation, hair growth, epidermal barrier homeostasis and wound healing. Their effects are mediated by binding to nuclear androgen receptors. Androgen activation and deactivation are mainly intracellular events. They differ from cell type to cell type and between cells at different locations. The major circulating androgens, dehydroepiandrosterone sulfate and androstenedione, are predominantly produced in the adrenal glands, and testosterone and 5alpha-dihydrotestosterone are mainly synthesized in the gonads. Testosterone in women and 5alpha-dihydrotestosterone in both genders are also synthesized in the skin. Skin cells express all androgen metabolizing enzymes required for the independent cutaneous synthesis of androgens and the development of hyperandrogenism-associated conditions and diseases, such as seborrhea, acne, hirsutism and androgenetic alopecia. The major thrust of drug design for the treatment of androgen-associated disorders has been directed against several levels of androgen function and metabolism.
Partial effectiveness has only been achieved by androgen depletion, inhibition of androgen metabolism or blockade of the androgen receptor. abstract_id: PUBMED:15654949 Gender differences in mouse skin morphology and specific effects of sex steroids and dehydroepiandrosterone. Sex steroids play an important role in skin morphology and physiology. To evaluate the specific effects of sex steroids, the thickness of each skin layer was measured in intact and gonadectomized (GDX) male and female mice, as well as in GDX animals treated for 3 wk with 17beta-estradiol (E2), dihydrotestosterone (DHT), or their precursor dehydroepiandrosterone (DHEA). Morphological analysis shows that the dorsal skin of intact males is thicker than that of females, whereas the epidermis and hypodermis are thicker in the female. After GDX, epidermal thickness decreases only in the female to become similar to that of the intact male. Epidermal thickness in GDX animals of both sexes increases after E2 treatment to a value similar to that of intact females, whereas an increase is observed only in females after DHEA treatment. Both DHEA and DHT increased dermal thickness whereas E2, DHT, and DHEA markedly reduced hypodermal thickness in GDX animals of both sexes. Under all conditions, the hypodermis remains thicker in females. GDX triggers a rapid hair growth from telogen to anagen with a thicker hair shaft diameter in females. These data show that DHT, E2, and DHEA exert specific effects on the different skin layers and appendages. abstract_id: PUBMED:28300625 Potential prevention by oral DHEA of superficial tears in elderly atrophic skin. Oral DHEA improves skin hydration and sebum production as well as bone mineral density in older patients while limiting progression of their skin atrophy. Topical DHEA cream stimulates local collagen production. Trials of oral DHEA should be undertaken to examine its potential for preventing tears in fragile aging skin.
abstract_id: PUBMED:11535422 Hormonal effects on skin aging. With aging, there is a decrease in the level of hormones, such as estrogen, testosterone, dehydroepiandrosterone sulfate, and growth hormone. The effect of this decrease on the skin has been poorly documented, although more data are available for estrogen than for other hormones. This article reviews the effect of decreasing hormone levels on the skin and the possible cutaneous benefits of hormone replacement therapy. Answer: Topical dehydroepiandrosterone (DHEA) has been studied for its potential antiaging effects on the skin. A placebo-controlled, randomized, prospective study involving postmenopausal women showed that DHEA treatment increased androgen receptor (AR) expression and levels of procollagen 1 and 3 mRNA in the dermis, suggesting that DHEA could be an efficient and physiological antiaging skin agent (PUBMED:20698844). Additionally, DHEA is known to be synthesized in the skin with intracrine or paracrine actions, affecting functions such as sebaceous gland growth and differentiation, hair growth, and wound healing (PUBMED:17326004). A pilot study also indicated that topical DHEA increased sebum production, improved skin brightness, countered papery appearance, and potentially acted on processes related to wrinkles, suggesting beneficial effects on skin characteristics associated with aging (PUBMED:18242894). Moreover, the skin is capable of converting DHEA into more potent androgens like testosterone and dihydrotestosterone, which are involved in several skin functions and could contribute to the antiaging effects (PUBMED:15507105). Furthermore, DHEA has been shown to stimulate sebaceous gland activity, which is linked to skin hydration and sebum production, both of which are important for maintaining skin health and appearance in older individuals (PUBMED:11595810). 
Oral DHEA has been reported to improve skin hydration and sebum production, as well as increase local collagen production when applied topically, which could help in preventing tears in fragile aging skin (PUBMED:28300625). In summary, the evidence suggests that topical DHEA has the potential to act as an antiaging agent for the skin by increasing AR expression, procollagen production, and sebum production, as well as improving skin brightness and texture. These effects could be mediated through the skin's ability to synthesize and respond to androgens, which play a significant role in skin physiology and aging (PUBMED:20698844, PUBMED:17326004, PUBMED:18242894, PUBMED:15507105, PUBMED:11595810, PUBMED:28300625).
Instruction: Does mandibular edentulous bone height affect prosthetic treatment success? Abstracts: abstract_id: PUBMED:20708062 Does mandibular edentulous bone height affect prosthetic treatment success? Objectives: The aim of this study is to determine whether mandibular bone height affects patients' ratings of satisfaction and function with mandibular 2-implant overdentures (IODs) and conventional dentures (CDs). Methods: 214 edentulous elders were randomly allocated into 2 groups and treated with maxillary CDs and either mandibular CDs or IODs. Classifications of mandibular bone height were carried out on panoramic radiographs using 4 published methods. At baseline and 6 months after delivery, all participants rated their satisfaction with their prostheses using the McGill Denture Satisfaction Instrument. Independent t-tests and a linear multivariable regression model were used for statistical analyses. Results: Mandibular bone height has no effect on patients' ratings of general satisfaction, nor on ratings of ability to chew, stability, comfort, aesthetics and ability to speak at 6 months (p>0.05, linear regression). There were significant between-treatment differences in ratings of general satisfaction, comfort, stability and ability to chew in all mandibular bone height categories, with higher ratings assigned to IODs (p<0.01, t-tests). Linear regression analyses confirmed that, for general satisfaction, as well as ability to chew, stability, comfort, aesthetics and ability to speak, treatment with IODs contributes to higher satisfaction ratings (p<0.001), while mandibular bone height does not. Conclusions: The evidence demonstrates that mandibular bone height has no effect on patients' satisfaction with the function, chewing ability and comfort of their prostheses. Furthermore, these results suggest that, regardless of mandibular bone height, edentulous elders will benefit more from mandibular IODs than from CDs.
abstract_id: PUBMED:22788219 Reconstruction of the moderately atrophic edentulous maxilla with mandibular bone grafts. Background: Mandibular bone grafts are used for alveolar reconstruction in partially edentulous patients. Few reports describe the use of mandibular bone for total maxillary reconstruction. Aim: To describe indications to the use of extensive mandibular bone grafts for maxillary reconstructions in edentulous patients. Methods: A retrospective evaluation of a group of patients with edentulous resorbed maxillae reconstructed with mandibular bone grafts in preparation for implant placement was performed. The studied cohort consisted of 15 patients who were treated with mandibular onlay blocks and sinus augmentation with mandibular bone in an outpatient setting under local anesthesia (MG). This approach was chosen when residual bone height in the anterior maxilla was at least 8 mm combined with posterior vertical deficiency and anterior horizontal defects. Success and survival rates of the implants and prostheses together with the achievement of the planned prosthetic plan were recorded. Patients also performed an evaluation of the outcome with the aid of Visual Analog Scales. Mean follow-up time was 19 months after prosthesis delivery. Results: No major surgical complications occurred at recipient or donor sites. A total of 81 implants were inserted, and survival and success rates were 97.6% and 93%, respectively. Planned prostheses could be delivered to all patients (eight overdentures and seven fixed dental prostheses). Conclusions: With the limitations of the present clinical study, it can be stated that edentulous maxillae with a moderate atrophy may be successfully augmented with mandibular bone grafts in an outpatient setting under local anesthesia and e.v. sedation. This is the case when posterior maxillary deficiency is combined with horizontal defects, but with a residual height of 8 mm or more in the anterior maxilla. 
When the residual anterior bone height is less than 8 mm, or when the inter-arch discrepancy needs to be corrected with an osteotomy, extraoral bone harvesting needs to be considered. abstract_id: PUBMED:8997437 Height of mandibular basal bone in dentate and edentulous subjects. Age-related differences in the height of the mandibular basal bone were compared between young and old dentate subjects and between dentate and old edentulous subjects by means of panoramic radiographs. The distances of the mental foramen and mandibular canal from the lower border of the mandible, when measured, showed the mandibular basal bones in old dentate and in old edentulous men to be higher than those of young dentate men in the molar region (P < 0.05). The height of the basal bone below the mental foramen was smaller in old edentulous women than in young and old dentate women (P < 0.05). A slight bone deposition along the lower border of the mandible in the molar region seems to take place with increasing age. The mental foramina of edentulous women move toward the lower border of the mandible as a result of alveolar bone resorption. abstract_id: PUBMED:26574862 Implant-supported mandibular overdentures can minimize mandibular bone resorption in edentulous patients: results of a long-term radiologic evaluation. Purpose: It has been suggested that functional loading and light irritative stimuli could lead to changes in bone architecture, shape, and volume, and that by placing implants in the edentulous mandible and subsequently loading them, functional conditions could be created to limit bone resorption or even stimulate bone apposition (the latter was reported only for fixed implant-supported prosthetic reconstructions) in the distal area of the mandibular osseous crest.
The aim of this study was to radiographically assess the bone height changes in the posterior area of the mandible after implant placement and loading with an overdenture on two or four implants over a mean follow-up period of 10.5 years. Materials And Methods: Panoramic radiographs were taken of 82 totally edentulous patients before implant placement and at repeated follow-up intervals spread over a mean observation time of 10.5 years. All patients received an implant-supported overdenture as prosthetic treatment. The mandibular bone height in the distal part of the mandible was measured on each of the available radiographs and the initial, intermediate, and final values were compared. Results: A mean mandibular bone height reduction of 0.5 mm was measured. Conclusion: No clinically relevant difference was found between the posterior mandible height before implant placement and at follow-up after functional loading with an implant-supported mandibular overdenture. abstract_id: PUBMED:26514887 Mandibular molar protraction as an alternative treatment for edentulous spaces: Focus on changes in root length and alveolar bone height. Background: The authors conducted a retrospective cohort study to investigate external apical root resorption (EARR) and alveolar bone loss (ABL) after protraction of the mandibular molars by using miniscrews. Methods: The authors protracted 51 mandibular molars in 37 adults into an edentulous area by using sliding mechanics with a lever arm or a miniscrew-supported root spring. The authors measured root length and alveolar bone height on panoramic radiographs and corrected according to the crown length registration method. The authors measured the amounts of crown movement (CM) and root movement (RM) on the superimposed lateral cephalometric radiographs along the mandibular occlusal plane. The authors used a linear mixed model to determine the risk factors for EARR and ABL. 
Results: All edentulous spaces closed successfully in an average of 31.7 months, and the average CM and RM were 4.97 and 8.64 millimeters, respectively, with an RM:CM ratio of 2.81. The root length decreased significantly by 0.80 mm (5.53%), but EARR of more than 2 mm occurred in only 4.0% of molars. The alveolar bone height was reduced significantly by 0.56 mm, but ABL of more than 2 mm occurred in only 2.0% of molars. Linear mixed model results revealed that EARR and ABL correlated only with RM and age, respectively. Conclusions: Protraction of the mandibular molars into an edentulous area by using miniscrews was effective and safe, especially in younger adults. Practical Implications: Protraction of the mandibular molars can be considered as an alternative to conventional prosthetic treatment in the restoration of an edentulous space, especially in young adults, if patients accept a longer treatment duration. abstract_id: PUBMED:28523324 Anatomical features and prosthetic considerations of edentulous patients with mandibular reconstruction with autograft performed more than 40 years ago. This manuscript presents two cases, both with mandibular reconstruction with autograft harvested from the iliac crest performed more than 40 years ago, but with different bone defect etiologies, i.e., mandibular dysplasia and mandibular fracture; both patients currently have extensive tooth loss in the mandible and need prosthetic rehabilitation. These cases confirm that reconstructive surgery of mandibular bone defects with autograft from the iliac crest has a satisfactory long-term outcome, providing acceptable conditions for prosthetic rehabilitation. Bone resorption in the grafted area seems to be related to the etiology of the bone defect, and is influenced by preservation of dental occlusion and masticatory function.
Reconstructive surgery performed decades ago has sequelae that increase the difficulty of prosthetic treatment of the edentulous patient; these sequelae need to be properly evaluated and included in the treatment plan, in an individualized approach, in order to obtain an acceptable functional outcome. abstract_id: PUBMED:30364892 Prosthetic Treatment Possibilities for Extended Partially Edentulous and Completely Edentulous Patients, Depending on the Clinico-Morphological Characteristics of the Oral Mucosal Substrate. The number of partially and completely edentulous patients requiring removable prosthetic treatment has increased constantly, and denture wearers often have an associated oral status pathology. The purpose of this article is to present the removable prosthetic treatment options that we chose for the extended partially edentulous and completely edentulous patients who required prosthetic rehabilitation treatment. We conducted this study on 52 extended partially edentulous and completely edentulous patients. We recorded in the examination chart the characteristics of the mucosal membrane covering the denture-supporting structure and the peripheral structures, and we performed a statistical analysis using Microsoft Excel. The prosthetic treatment phases were individualized according to the morphological characteristics of the oral mucosal substrate. abstract_id: PUBMED:31802471 Evaluation of trabecular bone microstructure of mandibular condyle in edentulous, unilateral edentulous and fully dentate patients using cone-beam computed tomography. Background: The aim of this study was to compare the trabecular bone microstructure of the mandibular condyle in edentulous, unilateral edentulous (Kennedy Class II), and fully dentate patients. Materials And Methods: The study used the cone-beam computed tomography (CBCT) images of 17 fully dentate (34 condyles), 16 edentulous (32 condyles), and 17 unilateral edentulous patients (34 condyles) aged 19 to 80 years.
The trabecular bone microstructure of the mandibular condyle was evaluated on 8 consecutive cross-sectional images of these patients. In the microstructure analysis, structural model index (SMI), ellipsoid factor (EF), bone volume fraction (BV/TV), trabecular thickness (Tb.Th) and trabecular separation (Tb.Sp) were measured. Results: There was no significant difference between the mean SMI, BV/TV, EF and Tb.Th microstructure values of each group (p = 0.243, p = 0.095, p = 0.962, p = 0.095, respectively). However, there was a significant difference in terms of mean Tb.Sp between the groups (p = 0.021). The trabecular structure in all three groups was more rod-shaped. No correlation was found between age and microstructure values. Conclusions: Considering the in vivo microstructure analysis of CBCT images, it can be said that tooth loss does not have a significant effect on the microstructure parameters of mandibular condyles, excluding Tb.Sp, and does not affect mandibular condyle trabecular endurance. abstract_id: PUBMED:28293321 Bone Height Changes of the Mandibular Edentulous Ridge in Screw Retained Versus Telescopic Restorations for Completely Edentulous Patients. Aim: This study was established to evaluate the amount of bone height changes in the posterior mandibular area of edentulous patients receiving screw-retained prostheses versus removable telescopic implant overdentures. Material And Methods: Every patient received four inter-foraminal implants in the mandibular arch and four anterior implants in the maxillary arch; computer-guided surgical guides were planned for the accurate insertion of the implants with a flapless technique. Panoramic radiographs were made immediately, six months and twelve months after the prostheses' use. Proportional area and vertical measurements were applied to determine changes in the bone height of the posterior mandibular edentulous area.
Results: After twelve months, a statistically non-significant amount of bone resorption was reported for both groups. Conclusion: Within the limitations of this study, both treatment options, the screw-retained prosthesis and the telescopic overdenture, can be used for the rehabilitation of completely edentulous patients. These cases must be followed for a longer period to have a definite answer regarding their efficiency in the long run. abstract_id: PUBMED:34924491 A new mandibular deformation index predicts amount of bone deformation in edentulous patients treated with an implant-supported fixed prosthesis. Purpose: The present study was performed to examine mandibular deformation during mouth opening in edentulous patients treated with an implant-supported fixed prosthesis, using strain gauges, and to identify factors affecting deformation. Methods: Twenty patients with a fully edentulous mandible who received either 4 or ≥6 implants were included. The distal-most implants were placed mesial to the mental foramen (premolar region) in patients with 4 implants and distal to the mental foramen (molar region) in patients with ≥6 implants. Mandibular deformation during mouth opening was measured using strain gauges in two directions: the anteroposterior direction and the lateral direction between the distal-most implants on the left and right sides (arch width). The mandibular anatomy was evaluated using computed tomography. Results: Arch width reduction between the left and right implants during mouth opening ranged from 47.38 to 512.80 µm; the range of deformation was 0.12 to 15.14 µm in the anteroposterior direction. Furthermore, a significant positive correlation was noted between arch width reduction in the premolar region and the ratio between the symphyseal bone height and width (P = 0.0003, r = 0.72). Conclusion: The reduction in arch width was higher in the molar region than in the premolar region during mouth opening.
Moreover, the reduction could be high in the mandibular symphyseal bone because of its greater height and lesser width. The ratio between the symphyseal bone height and width is defined as the mandibular deformation index (MDI) and is used to predict the rate of mandibular bone deformation. Answer: Mandibular edentulous bone height does not appear to significantly affect the success of prosthetic treatments according to the studies provided. Specifically, one study found that mandibular bone height had no effect on patients' ratings of general satisfaction, nor on ratings of ability to chew, stability, comfort, aesthetics, and ability to speak with mandibular 2-implant overdentures (IODs) and conventional dentures (CDs) at 6 months (PUBMED:20708062). This suggests that regardless of the amount of mandibular bone present, edentulous elders will benefit more from mandibular IODs than from CDs. Additionally, another study reported that implant-supported mandibular overdentures can minimize mandibular bone resorption in edentulous patients, with no clinically relevant difference found between the posterior mandible height before implant placement and at follow-up after functional loading with an implant-supported mandibular overdenture (PUBMED:26574862). This indicates that the use of implant-supported overdentures may help maintain mandibular bone height over time. Furthermore, a study on mandibular molar protraction as an alternative treatment for edentulous spaces found that protraction was effective and safe, especially in younger adults, suggesting that mandibular bone height was not a significant factor in the success of this treatment (PUBMED:26514887). 
In the context of mandibular reconstruction, a study highlighted that autografts from the iliac crest provided satisfactory long-term outcomes for mandibular bone defects, offering acceptable conditions for prosthetic rehabilitation, again implying that the initial bone height was not a critical determinant of prosthetic success (PUBMED:28523324). Lastly, a study evaluating the trabecular bone microstructure of the mandibular condyle in edentulous patients found that tooth loss did not have a significant effect on the microstructure parameters of mandible condyles, excluding trabecular separation (Tb.Sp), and did not affect mandibular condyle trabecular endurance (PUBMED:31802471). This suggests that the underlying bone structure, rather than just the height, is also an important consideration in the success of prosthetic treatments.
Instruction: Is quality of life associated with cognitive impairment in schizophrenia? Abstracts: abstract_id: PUBMED:26341538 The quality of life in schizophrenia The World Health Organization defines quality of life as individuals' perception of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards, and concerns. Quality of life (QoL) is a concept that reflects multiple dimensions, both subjective and objective. In patients with schizophrenia, quality of life has been negatively correlated with depressive and anxiety symptoms (results are less conclusive for positive symptoms and cognitive deficits); the remission of positive and negative symptoms has been associated with a better quality of life, but the persistence of depressive symptoms decreases quality of life whether or not patients are in remission; second-generation antipsychotics improve quality of life significantly more than first-generation antipsychotics; and psychotherapies (rehabilitation, case management...) improve quality of life. Several general and disease-specific QoL scales have been developed and successfully tested in patients with schizophrenia. The most appropriate disease-specific scale is the Quality of Life Scale (Heinrichs et al., 1984), since it takes patients' cognitive deficits into account and allows subtle measurement of the patient's subjective experience during an observer-rated assessment. The Quality of Life Scale is a 21-item scale based on a semi-structured interview, which comprises four subscales: interpersonal relations, instrumental role functioning, intra-psychic foundations, and use of common objects and activities. It was initially designed to assess deficit symptoms in schizophrenia.
It is a simple and quite short tool, intended for use as an outcome criterion, a measure of change and an indicator of the efficacy of therapeutic interventions. Convincing psychometric properties have been described: content, construct and nomological validity; inter-rater and test-retest reliability; it is sensitive to change and to treatment and negatively correlated with symptoms (PANSS) and with clinical state (CGI). Two of the recent major antipsychotic efficacy trials, CATIE and CUtLASS, both adopted the Quality of Life Scale as a measure of quality of life. abstract_id: PUBMED:15694230 Is quality of life associated with cognitive impairment in schizophrenia? Background: The subjectively assessed quality of life of schizophrenia patients is mostly lower than that of healthy subjects, and cognitive impairment is an integral feature of schizophrenia. The aims of the present study were to compare quality of life and neurocognitive functioning between patients with schizophrenia and healthy subjects, and to examine the relationships between quality of life and neurocognitive functions among the patients with schizophrenia. Methods: Thirty-eight patients with schizophrenia (15 women and 23 men) and 31 healthy individuals (18 women and 13 men) were included in the study. All participants were administered the World Health Organization Quality of Life-Brief Form (WHOQOL-BREF) to assess their quality of life, and the Digit Span Test (DST) and Controlled Oral Word Association Test (COWAT) to assess cognitive functions. Results: The patients with schizophrenia demonstrated lower scores in the physical (F=25.6, p=0.0001), psychological (F=15.85, p=0.0001) and social (F=37.7, p=0.0001) domains compared to the control group. The patients with schizophrenia showed significantly lower scores on the COWAT compared to healthy subjects (F=4.22, p=0.04).
The social domain scores of the WHOQOL correlated with DST total scores (r=0.45, p=0.007), DST forwards scores (r=0.54, p=0.001) and COWAT total scores (r=0.40, p=0.04) in patients with schizophrenia but not in the control group. Patients with a lower level of cognitive functioning had lower scores on the social domain of the WHOQOL-BREF (z=-2.01, p=0.04). Conclusion: Our results confirm that the cognitive deficits in executive function and working memory appear to have a direct impact on patients' perceived quality of life, especially in the social domain, which can be either a cause or a consequence of the social isolation of patients with schizophrenia. abstract_id: PUBMED:9262045 The influence of neurocognitive deficits and symptoms on quality of life in schizophrenia. The purpose of the present study was to examine the relationship between neurocognitive deficits and self-reported quality of life in order to determine whether neurocognitive impairment interferes with any aspects of quality of life for patients with schizophrenia. Forty-two outpatients with stable chronic schizophrenia were assessed for neurocognitive deficits using a computerized test battery, and all patients completed a version of the Sickness Impact Profile (SIP) to assess their quality of life across a variety of domains. The neurocognitive assessment tests revealed significant deficits compared with normal control subjects, particularly with respect to impaired iconic memory and frontal functioning. Patients reported that their quality of life was compromised. Despite the substantiation of marked neurocognitive deficits and reduced quality of life, correlations between neurocognitive deficits and quality of life were largely nonsignificant or very weak. Symptom expression, however, particularly with regard to general psychopathology on the Positive and Negative Syndrome Scale (PANSS), was significantly associated with quality of life.
These results suggest that neurocognitive deficits in schizophrenia, while often profound, appear to have little direct impact on the patient's perceived quality of life. abstract_id: PUBMED:32415431 A comprehensive model of predictors of quality of life in older adults with schizophrenia: results from the CSA study. Background: Numerous factors are known to influence quality of life of adults with schizophrenia. However, little is known regarding the potential predictors of quality of life in the increasing population of older adults with schizophrenia. The main objective of the present study was to propose a comprehensive model of quality of life in this specific population. Methods: Data were derived from the Cohort of individuals with Schizophrenia Aged 55 years or more (CSA) study, a large (N = 353) multicenter sample of older adults with schizophrenia or schizoaffective disorder recruited from French community mental-health teams. We used structural equation modeling to simultaneously examine the effects of six broad groups of clinical factors previously identified as potential predictors of quality of life in this population, including (1) severity of general psychopathology, (2) severity of depression, (3) severity of cognitive impairment, (4) psychotropic medications, (5) general medical conditions and (6) sociodemographic characteristics. Results: General psychopathology symptoms, and in particular negative and depressive symptoms, cognitive impairment, reduced overall functioning and low education were significantly and independently associated with diminished quality of life (all p < 0.05). A greater number of medical conditions and a greater number of antipsychotics were also independently and negatively associated with quality of life, although these associations did not reach statistical significance in sensitivity analyses, possibly due to limited statistical power.
Conclusion: Several domains are implicated in quality of life among older adults with schizophrenia. Interventions targeting these factors may substantially improve the quality of life of this vulnerable population. abstract_id: PUBMED:26038220 Subjective perceptions of cognitive deficits and their influences on quality of life among patients with schizophrenia. Purpose: Functional outcomes in schizophrenia may be more closely related to social cognition than to neurocognition; however, the extent to which social cognition influences quality of life (QoL) remains unclear. We conducted a cross-sectional survey study of the impact of patients' and clinicians' subjective perceptions of neurocognitive and social cognitive deficits on quality of life. Methods: The study included 253 patients with schizophrenia and their clinicians from public mental health clinics in Bolivia, Chile, and Peru. We utilized the GEOPTE Scale of Social Cognition for Psychosis, the Schizophrenia Quality of Life Questionnaire, and the Positive and Negative Syndrome Scale for schizophrenia. Results: Patients' subjective perceptions of their neurocognitive deficits (B = -1.13; CI -1.56 to -0.70) were significantly associated with QoL, whereas there was no independent association between the clinicians' ratings of the patients' neurocognitive deficits and QoL (B = -0.33; CI -0.98 to 0.31). However, patients' subjective perceptions of their neurocognitive deficits were no longer associated with QoL (B = -0.23; CI -0.71 to 0.24) once their perceptions of social cognitive impairments were accounted for (B = -1.03; CI -1.39 to -0.68). Conclusion: Patients' perceptions of their social cognitive function (but not neurocognitive functioning) have a significant impact on their QoL.
Clinicians' ratings of patients' cognitive deficits were only weakly correlated with patients' subjective perceptions of their own neurocognitive functioning, suggesting a mismatch between clinician and patient assessments of such deficits. Clinicians should therefore pay closer attention to patients' perceptions of their own deficits in order to improve QoL. abstract_id: PUBMED:21921416 Quality of life and its predictors in people with schizophrenia. The author reviewed measurement of quality of life (QOL) of schizophrenia patients and the clinical factors related to their QOL. As schizophrenia patients were thought to be unable to assess their own QOL because of their cognitive impairment, objective QOL measures had been frequently used. However, nowadays, there is general agreement that symptomatically stabilized patients can assess their QOL themselves. Therefore, researchers gradually have become interested in subjective QOL measures. Although most researchers often evaluate schizophrenia patients' QOL using only a subjective or an objective QOL measure, considering the fact that there is a discrepancy between the two types of measures, it is recommended to use both of them as complementary measures. As for clinical factors related to lowered QOL, several studies reported that depressive symptoms were most strongly associated with lowered subjective QOL, negative symptoms were strongly related to lowered objective QOL, and poor life skills were associated with both. Moreover, several studies found that cognitive dysfunctions in some cognitive domains were related to lowered objective QOL, but their effects were much smaller than those of negative symptoms. It is suggested that improving depressive and negative symptoms and life skills may contribute to enhancement of QOL of schizophrenia patients.
abstract_id: PUBMED:35870969 The association of cognitive impairment with quality of life and functional impairment in Ugandan first-episode psychosis patients: a cross-sectional study. Introduction: Cognitive impairment is common in first-episode psychosis patients and often associated with poor quality of life and functional impairment. However, most literature on this association is from high-income countries rather than low-resource countries like Uganda. We aimed to determine the association of cognitive impairment with quality of life and functional impairment in Ugandan first-episode psychosis patients. Methods: At Butabika national psychiatric hospital of Uganda, we enrolled 94 first-episode psychosis patients aged 18-60 years with a confirmed first episode of psychosis and no previous treatment with antipsychotic medication. Neuropsychological assessment was performed using the MATRICS consensus cognitive battery (MCCB). Quality of life and functional impairment were assessed using the brief version of the World Health Organisation Quality of Life scale (WHOQOL-BREF) and the MINI International Neuropsychiatric Inventory (MINI) respectively. Linear regression analyses determined the association of impairment in different cognitive domains with various quality of life and functional impairment domains while controlling for age, gender and level of education. Results: High scores in the reasoning and problem solving cognitive domain were associated with better quality of life in the psychological domain of the WHOQOL-BREF (p = 0.029). For functional impairment, high cognitive scores in the domains of speed of processing (p = 0.018), reasoning and problem solving (p = 0.015), working memory (p = 0.017) and visual learning and memory (p = 0.002) were associated with psychosis "having a greater impact on other members of the family" on the MINI.
Higher scores in the social cognition domain were associated with "less aggressive and disruptive behaviour" (p = 0.003). Conclusion: Cognitive impairment in Ugandan first-episode psychotic patients is associated with both poorer quality of life and functional impairment. Remediation of cognitive function may be a plausible intervention to improve outcomes in Ugandan first-episode psychosis patients. abstract_id: PUBMED:32140339 Cognitive Deterioration and Quality of Life in Patients with Schizophrenia: A Single Institution Experience. The present article aimed to analyze and describe the relationship between cognitive impairment and the perception of quality of life. The study was carried out in the psychiatric hospital within the Ecuadorian "Instituto de Neurociencias," with 50 patients, both men and women, between the ages of 18 and 65 years. An instrument for the screening of cognitive impairment in psychiatric patients, a questionnaire that measures quality of life, and a sociodemographic questionnaire were applied individually. Data collection took between 30 and 40 minutes per person. The results confirmed the relationship between cognitive deterioration and quality of life: greater cognitive impairment was associated with lower perceived quality of life, and vice versa. It was found that the areas of verbal learning, processing speed, and working memory are those that present greater difficulty for patients with this disorder. abstract_id: PUBMED:32074778 Effect of cognitive impairment on social function and quality of life in chronic schizophrenia Objective: To clarify the effect of cognitive impairment on social function and quality of life in chronic schizophrenia, and to provide clinical cognitive strategies for improving the social function and quality of life of patients with schizophrenia.
Methods: A total of 158 patients with chronic schizophrenia were selected from May 2017 to October 2017 in the Psychiatry Department of the Third Affiliated Hospital of Sun Yat-sen University and received psychological assessments, including the MATRICS Consensus Cognitive Battery (MCCB), the Brief Psychiatric Rating Scale (BPRS), the Personal and Social Performance scale (PSP), and the Schizophrenia Quality of Life Scale (SQLS). We further explored the effects of neurocognitive and social cognitive functions on individual and social performance and quality of life in patients with schizophrenia. Results: (1) The SQLS scores in the group with impaired social cognitive function were higher than in those with good social function (101±46 vs 76±40, P=0.002). (2) The digital sequence and continuous performance test scores of the socially functional group were higher than those of the defect group. (3) There was a significant correlation between years of education (R²=0.334, F=25.542), continuous performance (R²=0.316, F=35.647), BPRS (R²=0.280, F=60.386) and social function (P < 0.001). (4) BPRS (R²=0.486, F=228.28), emotional management (MSCEIT) (R²=0.510, F=124.789), education (R²=0.531, F=90.161) and age (R²=0.539, F=69.644) significantly affected the SQLS score of patients with schizophrenia (P < 0.001). Conclusion: The social function and quality of life of patients with schizophrenia are significantly correlated with their years of education and disease severity. Continuous performance in neurocognition significantly affects the social function of patients with schizophrenia, and emotional management in social cognition significantly affects their quality of life. Socially functional schizophrenia patients have higher digital sequence (working memory) and continuous performance (attention/alertness) scores.
abstract_id: PUBMED:9417398 Quality of life of schizophrenic patients and clozapine The authors review the various methods of evaluating quality of life in schizophrenics and note the limitations of that type of method in terms of both form and content. In practical terms, the patients' cognitive disorders may constitute a bias when the questionnaire is administered. Using a 'subjective quality of life profile' by Gerin et al., the authors interviewed 22 patients presenting with 'recalcitrant schizophrenia' and treated with clozapine for more than 3 years. The patients were asked to assess the change in their quality of life by comparison with previous neuroleptic chemotherapies. For all the items investigated, the responses were in favor of clozapine. Answer: Yes, quality of life (QoL) is associated with cognitive impairment in schizophrenia. Studies have found that cognitive deficits, particularly in executive function and working memory, can directly impact patients' perceived quality of life, especially in the social domain. This can contribute to or result from the social isolation experienced by individuals with schizophrenia (PUBMED:15694230). Although some research suggests that neurocognitive deficits may have little direct impact on perceived quality of life (PUBMED:9262045), other studies indicate that cognitive impairment is indeed associated with both poorer quality of life and functional impairment (PUBMED:35870969). Furthermore, subjective perceptions of cognitive deficits, especially social cognitive impairments, have been shown to significantly affect QoL (PUBMED:26038220). Additionally, cognitive deterioration has been observed to correlate with a lower perception of quality of life, with areas such as verbal learning, processing speed, and working memory presenting greater difficulty for patients (PUBMED:32140339).
It is suggested that interventions targeting cognitive deficits may help improve the quality of life for individuals with schizophrenia (PUBMED:32415431).
Instruction: Should the Celiac Artery Be Used as an Anatomical Marker for the Descending Thoracic Aorta During Transesophageal Echocardiography? Abstracts: abstract_id: PUBMED:26096344 Should the Celiac Artery Be Used as an Anatomical Marker for the Descending Thoracic Aorta During Transesophageal Echocardiography? Background: Transesophageal echocardiography (TEE) is a pivotal tool for diagnosis of aortic diseases. However, there are no suitable anatomical markers to describe location of disease in the descending thoracic aorta. In the past, we have used distance from the dental incisors to report location of disease, but this has no anatomical relevance. Our recent data comparing TEE and computed tomography (CT) have shown that the celiac artery is an accurate anatomical marker where disease is described as distance relative to the celiac artery. This study was designed to determine whether utilization of the celiac artery as an anatomical marker with TEE is practically feasible. Methods: A clinical trial was performed at the University of Illinois at Chicago and the Jesse Brown VA Medical Center. A total of 160 patients referred for TEE were consecutively enrolled over a period of 18 months. Results: The celiac artery was visualized in 97% of the patients. Conclusions: The celiac artery is a reliable anatomical marker with TEE and should be routinely used to locate disease in the descending thoracic aorta. abstract_id: PUBMED:21039811 Transesophageal echocardiography of the descending thoracic aorta: establishing an accurate anatomic marker using the celiac artery. Objective: We performed transesophageal echocardiography (TEE) and computed tomography (CT) on patients with aortic pathology to describe a more anatomically coherent marker on TEE studies by using the celiac artery as a reference. 
Background: As there are no anatomic markers for the eight-vertebra-long descending thoracic aorta, aortic disease (AD) is routinely reported as a distance from the dental incisors in TEE studies. This method does not provide an anatomically accurate location of AD relative to recognizable anatomy. Methods: Ten patients referred for contrast CT and TEE for evaluation of embolic events were studied. Twenty-seven focal aortic lesions found on TEE were measured relative to their distance from the celiac artery. The agreement between the two methods of quantification of aortic lesions from the celiac artery was assessed by univariate regression analysis and by the Bland-Altman method. The difference in mean values between the two groups was analyzed by paired t-test. Results: The P value for the two-tailed t-test for the difference between the two modalities was 0.122. The Pearson coefficient for comparison of CT and TEE data was 0.98. The coefficient of determination for TEE versus CT data was 0.96. Conclusions: The TEE celiac artery (CA) reference marker proves to be an accurate method for localizing disease in the descending thoracic aorta relative to CT findings. Reporting of aortic disease relative to this marker is more clinically applicable than the current method of measurement. The CA reference marker should be routinely reported in TEE studies to enhance the diagnosis of aortic disease. abstract_id: PUBMED:14608291 Transesophageal echocardiographic detection of thoracic aorta dissection extending into the celiac artery. Special positioning of a transesophageal echocardiography probe allows visualization of the superior portion of the abdominal aorta including the celiac artery branch. This case report describes a thoracic aorta dissection that extended into the abdomen and involved the celiac artery. abstract_id: PUBMED:9046498 Tortuosity of the descending thoracic aorta simulating dissection on transesophageal echocardiography.
In an 80-year-old patient with syncope, a markedly tortuous descending thoracic aorta produced images on transesophageal echocardiography which were suggestive of an intimal flap caused by dissection. A magnetic resonance aortogram clearly showed that the transesophageal echocardiogram was a false positive. In addition, multiplanar reconstructed images of the magnetic resonance aortogram through the tortuous descending thoracic aorta could reproduce images similar to those seen by transesophageal echocardiography. Because transesophageal echocardiography has become a popular imaging modality for the detection of aortic dissection, it is essential for echocardiographers to be aware of possible pitfalls that may create false-positive findings. abstract_id: PUBMED:22931198 Diagnosis of celiac artery in-stent thrombosis by transesophageal echocardiography. We present a case of a 70-year-old woman with severe peripheral arterial disease presenting with celiac artery in-stent thrombosis diagnosed by transesophageal echocardiography (TEE). Routine assessment of the visceral arteries is not performed in most TEE studies. A review of the literature shows that the celiac and superior mesenteric arteries can be successfully visualized during transgastric views of the abdominal aorta. We propose that two-dimensional and color Doppler echocardiography have several intrinsic advantages over computed tomography, magnetic resonance angiography and catheterization when assessing aortic and visceral arterial pathology. abstract_id: PUBMED:16856550 Usefulness of intraoperative transesophageal echocardiography in stent graft implantation for thoracic descending aorta Background: Stent graft implantation for thoracic descending aorta is a promising alternative to open repair. Transesophageal echocardiography (TEE) is a sensitive imaging modality for aortic disease. We reviewed our experience with TEE in stent graft implantation for thoracic descending aorta.
Method: Five patients underwent stent graft implantation for thoracic descending aorta under general anesthesia. Intraoperative angiography and TEE were used to identify the extent of the aneurysm and the placement of the stent. Results: TEE showed the stent graft configuration and the presence of leakage in all cases. In three cases, additional stent graft placement or bypass was performed. Conclusions: TEE provided useful information that enhanced the accuracy of stent graft positioning, potentially improving outcomes. TEE may facilitate repair by confirming aortic pathology, identifying endograft placement, and assessing the adequacy of aneurysm sac isolation, the presence of leakage, and dynamic intraoperative cardiac performance. abstract_id: PUBMED:1866682 Descending thoracic aorta-splenic artery graft for occlusive disease of the celiac axis. A new procedure is presented for the restoration of normal blood flow to the celiac axis in cases of orifice lesions of the celiac artery. The operation consists of inserting a graft between the descending thoracic aorta end-to-side and the divided splenic artery end-to-end. abstract_id: PUBMED:10962409 Primary endoleakage in endovascular treatment of the thoracic aorta: importance of intraoperative transesophageal echocardiography. Objectives: Endovascular treatment of the thoracic aorta has developed as an efficacious alternative to open surgical repair. However, despite the high primary success rate, perigraft leakage constitutes the major concern in long-term follow-up. Endoleaks are widely reported in both abdominal and thoracic endovascular series and are usually identified by intraoperative angiography. Transesophageal echocardiography is a sensitive imaging technique in the evaluation of aortic diseases, widely used to monitor cardiac surgery. The aim of this study was to evaluate the efficacy of transesophageal echocardiography in leakage detection during endovascular stent procedures of the thoracic aorta.
Methods: Intraoperative transesophageal echocardiography was used in conjunction with angiography in 25 patients subjected to endovascular stent treatment of the descending thoracic aorta. Spiral computed tomographic scanning was performed before discharge and 3, 6, and 12 months after treatment. Results: Information from transesophageal echocardiography was relevant in the selection of the landing zone in 62% of cases. In 8 patients, transesophageal echocardiography with color Doppler sonography showed a perigraft leak, 6 of which were not visible on angiography, suggesting the need for further balloon expansion or graft extension. Postoperative computed tomographic scanning in the 25 patients showed 1 endoleak, which sealed spontaneously. At 3 months, computed tomographic examination confirmed the absence of perigraft leakage in all patients. Conclusions: During implantation of a stent-graft in the descending thoracic aorta, transesophageal echocardiography provides information in addition to that provided by angiography, improving immediate and late procedural results. abstract_id: PUBMED:27762248 Transesophageal echocardiography evaluation of the thoracic aorta. Transesophageal echocardiography (TEE) can be used to identify risk factors such as aortic atherosclerosis before any surgical manipulation involving the aorta and its related structures. TEE has become an important noninvasive tool to diagnose acute thoracic aortic pathologies. TEE evaluation of endoleaks helps early detection and immediate corrective interventions. TEE is an invaluable imaging modality in the management of aortic pathology. TEE has to a large extent improved patient outcomes.
abstract_id: PUBMED:9053950 Thrombus in the thoracic aorta as cause of peripheral embolism diagnosed using transesophageal echocardiography The authors report the cases of two patients, with no risk factors for thromboembolism, in whom a mobile thrombus of the thoracic aorta was diagnosed by transesophageal echocardiography after an episode of systemic embolism. The outcomes of the two cases were very different. In one patient the thrombus was no longer present after anticoagulant treatment, with no recurrent embolic event. The other patient had a new embolic event, and the thrombus persisted despite anticoagulant therapy. This patient underwent surgery, and the thrombus was removed. These cases illustrate the value of transesophageal echocardiography in the detection of an embolic source. Answer: Yes, the celiac artery should be used as an anatomical marker for the descending thoracic aorta during transesophageal echocardiography (TEE). Studies have shown that the celiac artery is a reliable anatomical marker that can be visualized in a high percentage of patients undergoing TEE, with one study reporting visualization in 97% of patients (PUBMED:26096344). The use of the celiac artery as a reference point allows for a more anatomically accurate location of aortic disease (AD) relative to recognizable anatomy, as opposed to the traditional method of reporting AD as a distance from the dental incisors, which lacks anatomical relevance (PUBMED:21039811). The celiac artery as a reference marker has been proven to be an accurate method for localizing disease in the descending thoracic aorta, with a high Pearson coefficient of 0.98 when comparing TEE data to computed tomography (CT) data, indicating a strong agreement between the two modalities (PUBMED:21039811).
This suggests that aortic disease should be routinely reported relative to the celiac artery marker in TEE studies, as this localization is more clinically applicable and enhances the diagnosis of aortic disease (PUBMED:21039811). Furthermore, TEE has been shown to be a sensitive imaging modality for aortic disease, providing useful information for procedures such as stent graft implantation for the thoracic descending aorta (PUBMED:16856550), and it can also be used to diagnose conditions such as celiac artery in-stent thrombosis (PUBMED:22931198). Additionally, TEE is valuable in the evaluation of thoracic aortic pathologies and can help identify risk factors and assist in the management of aortic pathology, thereby improving patient outcomes (PUBMED:27762248). In conclusion, the evidence supports the use of the celiac artery as an anatomical marker during TEE for the descending thoracic aorta, as it provides a more accurate and clinically relevant method for localizing aortic disease.
Instruction: Does carotid intima-media thickness regression predict reduction of cardiovascular events? Abstracts: abstract_id: PUBMED:22801066 Assessment of subclinical atherosclerosis by carotid intima media thickness: technical issues. Carotid intima-media thickness assessed by ultrasonography of the carotid arteries is a safe, inexpensive, feasible and accurate method for detecting early signs of atherosclerosis, and carotid intima-media thickness and change in carotid intima-media thickness over time reflect cardiovascular disease risk. Technical aspects impact the measurement, variability and interpretation of carotid intima-media thickness. These include device aspects, inter- and intra-sonographer variability and the ultrasound protocol used. The mean common carotid intima-media thickness and the mean maximum common carotid intima-media thickness are the most widely used carotid intima-media thickness measurements. Common carotid intima-media thickness values of around 0.5 mm are considered 'normal' in young adults. Values are higher in men than in women, in African-Americans than Caucasians, and increase with age. Carotid intima-media thickness values at or above the 75th percentile of a reference population indicate increased cardiovascular risk. Guidelines differ in their recommendations for the use of carotid intima-media thickness measurements for risk assessment in primary prevention because evidence suggesting that it improves upon conventional risk scores is inconsistent. Carotid intima-media thickness is frequently used in clinical trials as a surrogate endpoint for cardiovascular events on the assumption that regression or slowed progression of carotid intima-media thickness, induced by cardiovascular risk interventions, reflects a reduction in cardiovascular events. However, further data are required to confirm this linear relationship. No international guidelines exist on the use of carotid intima-media thickness as a research tool.
Quality control in acquisition, measurement and interpretation of carotid intima-media thickness are important considerations and the carotid intima-media thickness protocol used should be determined by the research question under investigation. abstract_id: PUBMED:28026904 Weight loss and carotid intima-media thickness-a meta-analysis. Objective: Obesity is a risk factor for clinical cardiovascular disease, putatively via increased burden of atherosclerosis. It remains contentious as to whether weight loss in people with obesity is accompanied by a reduction in intima-media thickness, a noninvasive marker of subclinical atherosclerosis, consistent with a lowering of risk of cardiovascular events. Methods: A systematic literature search was performed to identify all surgical and nonsurgical weight loss interventions that reported intima-media thickness. A meta-analysis was undertaken to obtain pooled estimates for change in intima-media thickness. Results: From the 3,197 articles screened, 9 studies were included in the meta-analysis, with a total of 393 participants who lost an average of 16 kg (95% CI 9.4-22.5) of body weight over an average follow-up of 20 months. The pooled mean change in carotid intima-media thickness was -0.03 mm (95% CI -0.05 to -0.01), which was similar between surgical and nonsurgical interventions. Conclusions: In people with obesity, weight loss was associated with a reduction in carotid intima-media thickness, consistent with a lowering in risk of cardiovascular events. abstract_id: PUBMED:33837500 Carotid intima-media thickness in patients with hyperuricemia: a systematic review and meta-analysis. Objective: Despite the high incidence and mortality of cardiovascular events in hyperuricemia patients, the role of serum uric acid in cardiovascular diseases is still controversial. The aim of this meta-analysis was to explore the difference of carotid intima-media thickness in hyperuricemia and control groups. 
Methods: We performed this meta-analysis by searching the PubMed, Cochrane Library, Embase and Web of Science databases up to July 2020. The 95% confidence intervals and standard mean differences were calculated to analyze the differences in carotid intima-media thickness between hyperuricemia groups and control groups. Sensitivity analysis, subgroup analysis and meta-regression were used to explore the sources of heterogeneity. Publication bias was evaluated by funnel plot and Begg's regression test. We used Stata 14.0 software to complete our analyses. Results: A total of 8 articles were included. The results showed that there was a significant increase in carotid intima-media thickness in the hyperuricemia groups compared with the control groups [SMD = 0.264, 95% CI (0.161-0.366), P < 0.001]. Subgroup analyses showed that age, sample size, blood pressure and body mass index were not the source of heterogeneity. Meta-regression enrolled the method of CIMT measurement, location, age, smoking and diabetes mellitus as categorical variables, but none of these factors was found to be significant in the model. The Begg's test P value (0.174) was greater than 0.05, indicating there was no publication bias. Conclusion: The results showed that carotid intima-media thickness was increased in hyperuricemia patients compared with controls, which indicated that hyperuricemia patients may have a higher risk of cardiovascular diseases. abstract_id: PUBMED:31804033 Intima-media thickness and carotidofemoral plaques for cardiovascular risk stratification Cardiovascular (CV) diseases are still one of the most important killers in developed countries. In the last 40 years, great progress has been achieved in recognizing and treating these diseases and their underlying risk factors. Risk assessment models using traditional risk factors estimate the probability of a cardiovascular event in most cases and drive preventive treatment.
However, these risk assessment models have weaknesses, and it is estimated that about 30% of cardiovascular events are related to undetected risks. Many surrogate parameters have been investigated and have the potential to better predict CV risk beyond classical risk factors. The aim of this article is to assess the value of measuring intima-media thickness and detecting atheroma of the carotid and femoral bifurcations by ultrasound. abstract_id: PUBMED:23021836 Carotid intima-media thickness in the Spanish population: reference ranges and association with cardiovascular risk factors. Introduction And Objectives: Carotid intima-media thickness as measured with ultrasonography is an inexpensive and noninvasive predictor of cardiovascular events. The objectives of this study were to determine the population reference ranges of carotid intima-media thickness for individuals aged 35-84 years in Spain and to analyze the association of carotid intima-media thickness with cardiovascular risk factors (age, smoking, diabetes, pulse pressure, lipid profile, and body mass index). Methods: Population-based cross-sectional study conducted in Gerona (Spain). We described the mean and maximal values of carotid intima-media thickness of the carotid artery and of its 3 segments (common carotid, carotid bulb and internal carotid). We assessed cardiovascular risk factors and analyzed their association with carotid intima-media thickness using adjusted linear regression models. Results: A total of 3161 individuals (54% women) were included, with mean age 58 years. Men showed significantly higher mean common carotid intima-media thickness than did women (0.71 vs 0.67 mm).
The strongest predictors of this measure were age (coefficients for 10-year increase: 0.65 and 0.58 for women and men, respectively), smoking in men (coefficient: 0.26), high-density lipoprotein cholesterol in women (coefficient for 10 mg/dL increase: -0.08) and pulse pressure in both sexes (coefficients for 10 mmHg increase: 0.08 and 0.23 for women and men, respectively). The results were similar for the mean carotid intima-media thickness of all the segments. Conclusions: This population-based study presents the reference ranges for carotid intima-media thickness in the Spanish population. The main determinants of carotid intima-media thickness were age and pulse pressure in both sexes. abstract_id: PUBMED:12817310 Intima media thickness of the carotid arteries: early pointer to arteriosclerosis and therapeutic endpoint An early sign of atherosclerosis is hypertrophy of the arterial wall. Increased intima-media thickness (IMT) is a non-invasive marker of arterial wall alteration, which can easily be assessed in the carotid arteries by high-resolution B-mode ultrasound. There are important differences in B-mode measurements of carotid IMT between laboratories. These may concern IMT image acquisition (in relation to the segment and/or the wall measured) as well as determination of the echo boundary defining the IMT interfaces, and differences in the relationship between anatomic and sonographic structures of the near and far arterial walls. Measurements derived from the near wall reflect the thickness of part of the media and intima, influenced by sonographic artefacts, rather than that of the total intima-media complex. Two main approaches are used for measuring IMT: 1) manual measurement at multiple extracranial carotid sites in near and far walls, and 2) automated computerized measurement, restricted to the far wall of the distal common carotid artery.
Intra- and inter-observer variabilities of manual measurements have been found to range from 0.09 to 0.13 mm and from 0.12 to 0.18 mm (absolute differences), respectively. The best reproducibility of manual IMT measurements is found at the far wall of the common carotid artery, 1 cm from the bulb, at a site with two parallel wall contours and without local thickening. For automated measurements, lower intra- and inter-observer variabilities have been reported. IMT of the common carotid artery is strongly influenced by age. For younger age groups (20 to 30 years), mean IMT values of 0.5 mm have been reported, while IMT values of 0.9 mm have been found for older subjects (60 to 70 years). Carotid IMT correlates with other risk factors such as systolic blood pressure, serum lipids and smoking, and is positively linked to the presence and extent of coronary artery disease in both women and men. The demonstration of a correlation between IMT and cardiovascular events such as stroke and myocardial infarction in clinical studies of atherosclerosis progression and cardiovascular outcomes has led to the FDA accepting 2D ultrasound as a valid technique in clinical studies of atherosclerosis. Regression or slowing of progression of increased carotid IMT by various antihypertensive and lipid-lowering agents has been reported. In addition to the predictive value of carotid IMT measurement for future vascular events, the presence of plaques or stenosis in the carotid artery tree drastically increases the risk of cardiovascular events. Therefore, the authors of this article propose the use of high-resolution duplex sonography in clinical practice for the assessment of IMT of the common carotid artery as well as for the detection of plaques and stenosis. In conclusion, high-resolution duplex sonography seems promising for the detection, quantification and serial investigation of structural alterations of the arterial wall.
The method is sensitive enough to be applied in clinical studies of the progression and regression of early preintrusive atherosclerotic lesions in the extracranial carotid arteries. Moreover, IMT of the common carotid artery can be used as a therapeutic endpoint. abstract_id: PUBMED:21929345 Questioning the predictive role of carotid intima-media thickness. Interest in carotid intima-media thickness (IMT) as a tool to evaluate cardiovascular risk has been driven by studies that demonstrate a relationship between carotid IMT and the incidence of cardiovascular events. However, no study was designed and powered to demonstrate a relationship between changes in carotid IMT during follow-up and cardiovascular events. Therefore, a pooled analysis of existing clinical studies was performed to investigate this relationship. This analysis failed to demonstrate a predictive role of changes in carotid IMT for cardiovascular events. The reasons for the lack of clear evidence of a predictive role of changes in IMT are uncertain but may reflect methodological problems related to intra- and inter-observer variability, as it seems unlikely that progression of carotid atherosclerosis would not predict outcome. A further meta-analysis based on individual patient data has been planned, which may better address this issue. The variability of ultrasound measurements of carotid IMT is likely to be reduced by further development of automatic calculation of this index by MRI. abstract_id: PUBMED:11229312 Measurement of the carotid intima-media thickness. An innovative technique for assessing cardiovascular risk. ASSESSMENT OF CARDIOVASCULAR RISK: Measurement of the intima-media thickness in the carotid artery is optimized by coupling high-resolution ultrasonography with automatic data processing systems, allowing improved precision.
Although the optimal site of measurement remains controversial (common carotid, bifurcation, internal carotid), there appears to be a consensus on the need for bilateral automatic measurement. The intima-media thickness is considered a marker of atheromatous disease and its diffusion. This parameter probably integrates the deleterious effect of different cardiovascular risk factors accumulated over decades. In addition, several prospective observational studies have reported a positive relationship between measurement of the intima-media thickness of the carotid artery and the risk of cardiovascular events (myocardial infarction and stroke). Detection Aid: Although the results from epidemiological observation studies are still too preliminary to evaluate the positive and negative predictive values for the occurrence of clinical events in relation to different levels of thickness, this simple, rapid and noninvasive measurement could be a useful tool for subjects at high cardiovascular risk. Perspectives: Measurement of arterial parameters, determined by high-resolution ultrasonography, will probably soon become an integral part of evaluation strategies for cardiovascular risk. Future comparative studies will provide an assessment of the comparative predictive value of these different parameters (quantitative structural analysis versus quantitative and qualitative analysis of the structure versus structural and functional analysis of the arteries). abstract_id: PUBMED:22696103 Carotid intima-media thickness and the prediction of vascular events. Carotid intima-media thickness (cIMT) has received interest as a predictor of cardiovascular events in recent years. Use of cIMT in a clinical setting is limited by variability in measurement and the lack of evidence for its use in clinical risk prediction.
This review examines the major studies that have assessed the relationship between cIMT and cardiovascular event risk and discusses the current role of IMT in cardiovascular risk prediction. abstract_id: PUBMED:25531661 Intima-media thickness of carotid arteries. Carotid intima-media thickness (CIMT) is a validated predictive marker of increased plaque occurrence and the incidence of major cardiovascular events. However, due to technical issues associated with the measurement of CIMT, a well-trained and certified sonographer is needed to overcome causes of variability due to the patient, device, sonographer, and quantification tool. The recently updated Mannheim consensus defined and described how to differentiate CIMT from plaques. These definitions allow for the better analysis and quantification of early atherosclerosis. Indications for CIMT measurements largely include the detection of coronary heart disease risk among intermediate-risk patients. CIMT is frequently used in clinical trials, and recent technical recommendations have been provided to improve the quality of the procedures. The final choice of a CIMT protocol depends on the purpose of the measurement, the research question at hand, the cost effectiveness, the quality of the data and the added value provided by the additional information. Answer: The relationship between carotid intima-media thickness (CIMT) regression and the reduction of cardiovascular events is not definitively established. While CIMT is a non-invasive marker of subclinical atherosclerosis and is associated with cardiovascular risk factors and events, evidence for its predictive value regarding changes in CIMT and cardiovascular outcomes is inconsistent. 
Some studies and reviews suggest that CIMT is a validated predictive marker of increased plaque occurrence and the incidence of major cardiovascular events (PUBMED:25531661), and that regression or slowing of progression of increased carotid IMT by various interventions has been reported (PUBMED:12817310). However, no study was specifically designed and powered to demonstrate a direct relationship between changes in CIMT during follow-up and cardiovascular events (PUBMED:21929345). A pooled analysis of existing clinical studies failed to demonstrate a predictive role of changes in CIMT for cardiovascular events, which may be due to methodological issues such as intra- and inter-observer variability (PUBMED:21929345). Furthermore, while CIMT is frequently used in clinical trials as a surrogate endpoint for cardiovascular events, further data are required to confirm a linear relationship between CIMT changes and actual cardiovascular outcomes (PUBMED:22801066). The lack of international guidelines on the use of CIMT as a research tool and the importance of quality control in acquisition, measurement, and interpretation of CIMT are also highlighted (PUBMED:22801066). In summary, while there is an association between CIMT and cardiovascular risk, the evidence for using CIMT regression as a predictor for the reduction of cardiovascular events is not conclusive. More research and better methodological approaches may be needed to clarify this relationship (PUBMED:21929345, PUBMED:22801066).