Instruction: Is there any association between inhaled ipratropium and mortality in patients with COPD and asthma? Abstracts: abstract_id: PUBMED:12645834 Is there any association between inhaled ipratropium and mortality in patients with COPD and asthma? Background: To determine whether ipratropium was associated with premature death in patients with asthma and chronic obstructive pulmonary disease (COPD). Methods: A longitudinal cohort of 827 patients with COPD and 273 with asthma who were evaluated for compensation by two specialists in pulmonary medicine. Results: By June 1999, 538 of the patients with COPD and 77 of those with asthma had died. At the consultation, 77% of the COPD patients and 8.1% of the asthmatic patients were treated with inhaled ipratropium. Ipratropium was associated with a mortality risk ratio (RR) of 2.0 (95% confidence interval: 1.5-2.6) for COPD and 3.6 (1.8-7.1) for asthma patients. After adjustment for confounding factors [forced expiratory volume in 1 s (FEV1), smoking habits, asthma medication, and presence of cor pulmonale], the RR for COPD was 1.6 (1.2-2.1) and for asthma 2.4 (1.2-5.0). Conclusions: The increased risk of premature death associated with ipratropium is of concern and necessitates further evaluation, e.g., in a randomised study. abstract_id: PUBMED:26405325 Inhaler Use in Hospitalized Patients with Chronic Obstructive Pulmonary Disease or Asthma: Assessment of Wasted Doses. Background: Hospitalized patients with chronic obstructive pulmonary disease (COPD) or asthma routinely have inhaled medications ordered for acute and maintenance therapy. Treatment may be administered via metered-dose inhaler (MDI) or dry-powder inhaler (DPI). These products must be appropriately labeled to be released home with the patient or discarded before discharge. Objective: To assess the amount and estimated cost of wasted doses of medications via MDI or DPI for hospitalized patients with COPD/asthma. Methods: A retrospective study was conducted at a university-affiliated hospital. Patients admitted between January 2011 and June 2012 with a primary diagnosis of COPD or COPD with asthma and who were ≥40 years of age were included. Information collected included use of albuterol, ipratropium, inhaled corticosteroids, long-acting beta agonist, or tiotropium and whether treatments were given by nebulizer, MDI, MDI plus valved holding chamber (VHC), or DPI. The number of doses dispensed, as well as doses not used, via MDI, MDI + VHC, or DPI were collected from electronic medical records. Costs associated with wasted medications were evaluated. Results: Of 555 patient admissions screened, 478 (mean age, 66 years; 58% women; 74% African American) met study criteria. Of the total MDI or DPI doses dispensed, 87% were wasted, and the associated hospital cost was approximately $86,973. Conclusions: Substantial waste of inhaled medications was found in our study. Practical strategies are needed to reduce wasted inhalers. Further assessment of this problem is needed in other US hospitals. abstract_id: PUBMED:10679537 Lack of association between ipratropium bromide and mortality in elderly patients with chronic obstructive airway disease. Background: Ipratropium is commonly used for the management of elderly patients with obstructive airway disease. However, a recent report suggested that its use might be associated with a significant increase in mortality.
A study was therefore conducted to compare all-cause mortality rates between users and non-users of ipratropium in elderly patients with either asthma or chronic obstructive pulmonary disease (COPD). Methods: A retrospective cohort study was performed using linked data from the Canadian Institute for Health Information, the Ontario Drug Benefit Program, the Ontario Health Insurance Plan, and the Ontario Registered Persons database. A total of 32 393 patients were identified who were aged 65 years or older and who had been discharged from hospital with asthma or COPD between 1 April 1992 and 31 March 1997. All-cause mortality rates were compared between those treated and those not treated with ipratropium following discharge from hospital. Results: In total, 49% of patients received ipratropium within 90 days of discharge. After adjusting for age, sex, comorbidity, use of health services, and other airway medications, there was no significant association in patients with COPD between the use of ipratropium and mortality (relative risk (RR) 1.03; 95% confidence interval (CI) 0.98 to 1.08). In patients with asthma, however, there was a slight increase in the relative risk of mortality associated with the use of ipratropium (RR 1.24; 95% CI 1.11 to 1.39). A dose-response increase in the mortality rate was not observed with increasing use of ipratropium in either COPD or asthma. Conclusions: The use of ipratropium in patients with COPD was not associated with an increase in mortality. However, in asthma there was a small increase in the mortality rate. Since asthmatic patients who received ipratropium had greater use of other airway medications and health services, the difference in mortality rate between users and non-users may be a reflection of unmeasured differences in asthma severity. abstract_id: PUBMED:17085795 Systematic review of clinical outcomes in chronic obstructive pulmonary disease: beta-agonist use compared with anticholinergics and inhaled corticosteroids. Much controversy surrounds the use of beta-agonists in obstructive lung disease. Regular beta2-agonist use in asthma results in tolerance to its effects and an increase in asthma-related deaths. Less is known about clinical outcomes in chronic obstructive pulmonary disease (COPD). This systematic review and meta-analysis evaluates the long-term effect of beta2-agonist use on severe exacerbations requiring hospitalization or trial withdrawal, respiratory deaths, and total mortality in patients with COPD. Results for beta2-agonists are compared with results for anticholinergics and inhaled corticosteroids. Pooled results from randomized controlled trials show that anticholinergics, such as tiotropium and ipratropium, significantly reduce severe exacerbations and respiratory deaths compared with placebo. Conversely, beta2-agonists increase respiratory deaths, probably because of tolerance that develops to their bronchodilator and bronchoprotective effects. Anticholinergics significantly reduce exacerbations and total mortality compared with beta-agonists. The combination of the two bronchodilators is not more effective than anticholinergics alone in improving long-term clinical outcomes. Inhaled corticosteroids significantly reduce severe exacerbations and the decline in lung function over time, without affecting mortality. In conclusion, inhaled anticholinergic bronchodilators and corticosteroids should be used to improve long-term clinical outcomes in patients with COPD.
beta-Agonists increase respiratory deaths in COPD, possibly as a result of poorer disease control. abstract_id: PUBMED:10705266 Inhaled corticosteroids may reduce neutrophilic inflammation in patients with stable chronic obstructive pulmonary disease. Background: Although both inhaled and oral corticosteroids have anti-inflammatory effects causing improvement in clinical symptoms and spirometry in the treatment of asthma, the role of corticosteroids in the management of chronic obstructive pulmonary disease (COPD) is controversial. Objective: To evaluate the effects of inhaled corticosteroids on sputum neutrophilia in clinically stable COPD patients. Methods: In total, 18 patients were enrolled in the study. During 2 months, 9 patients in group A inhaled fluticasone propionate (FP) 500 microg 3 times daily. In group B 9 patients received placebo. All of the patients continued to inhale both salbutamol and ipratropium bromide. In 9 patients, sustained-release theophylline was also administered. Blood samples, spirometric tests, blood gas analyses, and either spontaneous or induced sputum cultures were evaluated on entry into the study, after 2 months of treatment and following the 6-week washout period. Results: After the 2-month FP treatment, no significant changes in the number of peripheral blood neutrophils, blood gas and spirometry data were observed in both groups. In group A, the total cell number and the number of neutrophils decreased from a mean of 3.4 ± 1.3 × 10^6 cells/g and 0.6 ± 0.3 × 10^6 neutrophils/g on entry into the study to 1.9 ± 0.6 × 10^6 cells/g and 0.02 ± 0.01 × 10^6 neutrophils/g after the 8-week treatment with FP, returning to 3.3 ± 1.1 × 10^6 cells/g and 0.5 ± 0.3 × 10^6 neutrophils/g following the washout period. The percentages of neutrophils were 55.6 and 77.9% in groups A and B after 2 months of FP treatment. There was no significant change in group B values during the study. Conclusion: These data suggest that neutrophilic inflammation in sputum may be decreased by inhaled corticosteroids in clinically stable COPD patients. abstract_id: PUBMED:7596005 Periodic treatment regimens with inhaled steroids in asthma or chronic obstructive pulmonary disease. Is it possible? Objective: To determine whether inhaled corticosteroids can be discontinued in the stable phase of asthma or chronic obstructive pulmonary disease (COPD) or if this therapy should be continued. Design: Nonrandomized open uncontrolled 5-year trial. Setting: Prospective study in general practice. Patients: Forty-eight patients with steroid-dependent asthma or COPD who had shown a decline in forced expiratory volume in 1 second (FEV1) of at least 80 mL per year and at least one exacerbation per year during the first 2 years of bronchodilator treatment. Subjects were treated additionally with inhaled steroids for another 2 years and were finally given the option to stop using steroids. Sixteen patients were willing to stop using beclomethasone and were studied for another year. No recruitment bias took place in this consecutive sample in the fifth year of follow-up. Two of 16 patients developed carcinomas and dropped out. Interventions: Two years of bronchodilator treatment alone (400 micrograms of salbutamol or 40 micrograms of ipratropium bromide four times daily), followed by 2 years of additional inhaled corticosteroid treatment (400 micrograms of beclomethasone two times daily), and finally 1 year of bronchodilator treatment alone.
Main Outcome Measures: Decline in lung function (FEV1), change in bronchial hyperresponsiveness, indicated by a provocative concentration of histamine causing a 20% fall in FEV1 (PC20), morning peak expiratory flow rate (PEFR), diurnal PEFR, week-to-week variation of PEFR, bronchial symptoms, and exacerbations. Results: The course of FEV1 during the year in which beclomethasone was discontinued was not significantly different when compared with the 2-year period of beclomethasone treatment. Neither did the course of PC20, morning PEFR, diurnal PEFR, symptom score, and exacerbation rate change. Only the week-to-week variation of the PEFR increased after discontinuing steroids. Conclusions: Discontinuing inhaled steroids is possible in some patients with asthma or COPD after 2 years of regular treatment. This might indicate that for certain groups of patients with mild asthma or COPD, periodic treatment schedules with inhaled steroids are the treatment policy for the future. abstract_id: PUBMED:25746966 Treatment responsiveness of phenotypes of symptomatic airways obstruction in adults. Background: Asthma and chronic obstructive pulmonary disease (COPD) are heterogeneous disorders encompassing different phenotypes of airflow obstruction, which might differ in their response to treatment. Objective: The aim of this study was to determine distinct phenotypes comprising the syndromes of asthma and COPD and the treatment responsiveness of these phenotypes to inhaled β-agonist, antimuscarinic, and corticosteroid therapy. Methods: We undertook a cross-sectional study with 3 phases. In phase 1, 1,264 participants aged 18 to 75 years with self-reported current wheeze and breathlessness were identified from a random population sample of 16,459. In phase 2, 451 participants attended for detailed assessment, including responsiveness to inhaled salbutamol and ipratropium bromide. In phase 3, 168 steroid-naive participants were enrolled in a 12-week trial of inhaled budesonide. Cluster analysis was performed in 389 participants who completed phase 2 with full data. Treatment responsiveness was compared between phenotypes. Results: Cluster analysis identified 5 phenotypes: moderate-to-severe childhood-onset atopic asthma, asthma-COPD overlap, obese-comorbid, mild childhood-onset atopic asthma, and mild intermittent. Bronchodilation after salbutamol was equal to or greater than that after ipratropium for all phenotypes. The moderate-to-severe childhood-onset atopic asthma, asthma-COPD overlap, and obese-comorbid phenotypes had greater efficacy with inhaled corticosteroid treatment than the mild intermittent group. Conclusion: Cluster analysis of adults with symptomatic airflow obstruction identifies 5 disease phenotypes, including asthma-COPD overlap and obese-comorbid phenotypes, and provides evidence that patients with the asthma-COPD overlap syndrome might benefit from inhaled corticosteroid therapy. abstract_id: PUBMED:22619266 A new user cohort study comparing the safety of long-acting inhaled bronchodilators in COPD. Objective: To investigate a possible increased risk observed in tiotropium clinical trials of stroke and other adverse events. Design: New users of long-acting anticholinergic therapy (tiotropium HandiHaler®) were compared with new users of long-acting β-agonist (LABA) monotherapy, and propensity scores were used to control confounding. Setting: UK healthcare system general practitioner electronic medical record database.
Participants: 10 840 patients newly prescribed tiotropium (n=4767) or LABA (n=6073), at least 40 years old, and not having asthma as their only respiratory illness. Primary And Secondary Outcome Measures: Incidence rates of total stroke, myocardial infarction, angina and other adverse events. Results: Tiotropium was associated with increased rates of stroke (HR=1.49, 95% CI 0.91 to 2.45), angina (HR=1.38, 95% CI 0.88 to 2.16) and myocardial infarction (HR=1.26, 95% CI 0.72 to 2.21). Groups had similar rates of chronic obstructive pulmonary disease exacerbation (HR=0.95, 95% CI 0.80 to 1.12) and pneumonia (HR=0.96, 95% CI 0.58 to 1.58). Tiotropium was associated with a lower rate of total mortality (HR=0.70, 95% CI 0.56 to 0.89) and asthma exacerbations (HR=0.46, 95% CI 0.36 to 0.57) than users of LABA. Conclusion: Small increased risks of serious ischaemic cardiovascular events have been reported with inhaled anticholinergic medication from randomised and nonrandomised studies of ipratropium, tiotropium HandiHaler® and tiotropium Respimat®. Additional research is needed to understand the full extent of cardiovascular effects of inhaled anticholinergic medications and the patients who may be most susceptible. abstract_id: PUBMED:1357553 A comparison of bronchodilator therapy with or without inhaled corticosteroid therapy for obstructive airways disease. Dutch Chronic Non-Specific Lung Disease Study Group. Background: The morbidity from obstructive airways disease (asthma and chronic obstructive pulmonary disease) is considerable, and the mortality rate is rising in several countries. It has been hypothesized that long-term improvement in prognosis might result from vigorous bronchodilator or antiinflammatory therapy. Methods: In a multicenter trial we compared three inhalation regimens in which a beta 2-agonist (terbutaline, 2000 micrograms daily) was combined with a corticosteroid (beclomethasone, 800 micrograms daily), an anticholinergic bronchodilator (ipratropium bromide, 160 micrograms daily), or placebo. Patients with airways hyperresponsiveness and obstruction who were 18 to 60 years old were followed for 2 1/2 years. Results: Of the 274 patients enrolled, 56 percent had allergies. The mean forced expiratory volume in one second (FEV1) was 64 percent of the predicted value. The mean PC20 (the concentration of inhaled histamine causing a 20 percent decrease in FEV1, a measure of hyperresponsiveness) was 0.26 mg per milliliter. Withdrawal from the study, due mainly to pulmonary symptoms, was less frequent in the corticosteroid group (12 of 91 patients) than in the anticholinergic-drug group (45 of 92 patients) or the placebo group (44 of 91 patients; P < 0.001). The mean FEV1 (± SE) increased by 10.3 ± 1.3 percent of the predicted value in the corticosteroid group within three months and remained stable thereafter, whereas it did not change in the other two groups (P < 0.001). The PC20 increased by 2.0 doubling concentrations in the corticosteroid group but did not change in the other groups (P < 0.001). In the corticosteroid group, patients who did not smoke, who had allergies, or who were less than 40 years old benefited more from their treatment than did those who smoked, did not have allergies, or were over 40, but all subgroups of the corticosteroid group had improvement as compared with the anticholinergic-drug or placebo group.
Conclusions: The addition of an inhaled corticosteroid--but not an inhaled anticholinergic agent--to maintenance treatment with a beta 2-agonist (terbutaline) substantially reduced morbidity, hyperresponsiveness, and airways obstruction in patients with a spectrum of obstructive airways disease. abstract_id: PUBMED:17594728 Use of inhaled anticholinergic agents in obstructive airway disease. In the last 2 decades, anticholinergic agents have been generally regarded as the first-choice bronchodilator therapy in the routine management of stable chronic obstructive pulmonary disease (COPD) and, to a lesser extent, asthma. Anticholinergics are particularly important bronchodilators in COPD, because the vagal tone appears to be the only reversible component of airflow limitation in COPD. The inhaled anticholinergics approved for clinical use are synthetic quaternary ammonium congeners of atropine, and include ipratropium bromide, oxitropium bromide, and tiotropium bromide. This article reviews the most current evidence for inhaled anticholinergics in obstructive airway disease and summarizes outcomes reported in randomized controlled trials. Answer: The association between inhaled ipratropium and mortality in patients with COPD and asthma has been investigated in several studies, with mixed results. One study found that inhaled ipratropium was associated with an increased risk of premature death in patients with COPD and asthma, with a mortality risk ratio (RR) of 2.0 for COPD and 3.6 for asthma patients. After adjusting for confounding factors, the RR for COPD was 1.6 and for asthma 2.4, indicating a persistent increased risk of premature death associated with ipratropium use in these populations (PUBMED:12645834). Conversely, another study found no significant association between the use of ipratropium and mortality in patients with COPD after adjusting for various factors. However, in patients with asthma, there was a slight increase in the relative risk of mortality associated with the use of ipratropium (PUBMED:10679537). A systematic review and meta-analysis comparing clinical outcomes in COPD found that anticholinergics, such as tiotropium and ipratropium, significantly reduce severe exacerbations and respiratory deaths compared with placebo. In contrast, beta2-agonists increase respiratory deaths, possibly due to tolerance that develops to their effects. Anticholinergics also significantly reduce exacerbations and total mortality compared with beta-agonists (PUBMED:17085795). Another study suggested that inhaled corticosteroids may reduce neutrophilic inflammation in patients with stable COPD, which could have implications for mortality and morbidity, although the study did not directly address mortality outcomes (PUBMED:10705266). Lastly, a new user cohort study comparing the safety of long-acting inhaled bronchodilators in COPD found that tiotropium was associated with increased rates of stroke, angina, and myocardial infarction, but not with an increase in COPD exacerbation or pneumonia. Tiotropium was associated with a lower rate of total mortality and asthma exacerbations than users of long-acting beta-agonists (LABA) (PUBMED:22619266). In summary, there is evidence suggesting an association between inhaled ipratropium and increased mortality in patients with asthma and possibly COPD, although the results are not entirely consistent across studies.
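For context on the statistics quoted throughout this record, the crude risk ratios above (e.g., RR 2.0, 95% CI 1.5-2.6) follow from standard cohort arithmetic. Below is a minimal sketch of how a risk ratio and its Wald confidence interval are computed on the log scale; the counts and the helper name are hypothetical illustrations, not data from any cited study.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio for exposed vs unexposed with a Wald 95% CI on the log scale.

    a deaths among n1 exposed (e.g., ipratropium users); b deaths among
    n2 unexposed. All counts passed below are hypothetical.
    """
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

print(risk_ratio_ci(450, 640, 88, 190))  # -> (RR, lower bound, upper bound)
```

Adjusted estimates, such as the RR of 1.6 reported after controlling for FEV1, smoking, and cor pulmonale, come from regression models rather than from this crude two-by-two ratio.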
Instruction: Role of atopy in chronic rhinosinusitis with nasal polyps: does an atopic condition affect the severity and recurrence of disease? Abstracts: abstract_id: PUBMED:27268094 Role of atopy in chronic rhinosinusitis with nasal polyps: does an atopic condition affect the severity and recurrence of disease? Background: The role of atopy in chronic rhinosinusitis is unclear: it is particularly controversial in chronic rhinosinusitis with nasal polyps. Methods: A prospective study of 210 patients with chronic rhinosinusitis with nasal polyps was performed. Patient demographics, visual analogue scale scores, Lund-Kennedy endoscopy scores, Lund-Mackay computed tomography scores, serum total immunoglobulin E levels, serum eosinophil cationic protein (ECP) levels and Phadiatop test findings were analysed. Results: There were no significant differences in age, sex, visual analogue scale score, Lund-Mackay computed tomography score, total serum immunoglobulin E level, serum ECP level or Phadiatop test results between patients with primary and recurrent chronic rhinosinusitis with nasal polyps. A total of 99 patients (47 per cent) had positive atopy tests. No significant differences in sex, visual analogue scale score, Lund-Kennedy endoscopy score, Lund-Mackay computed tomography score or recurrence rates were found between atopic and non-atopic patients; however, atopic patients were significantly younger than non-atopic patients. Atopy status did not correlate with disease severity. Conclusion: There was no association between atopy status and either disease severity or recurrence in patients with chronic rhinosinusitis with nasal polyps, although atopic patients were younger than non-atopic patients. abstract_id: PUBMED:31425833 Asthma Is the Dominant Factor for Recurrence in Chronic Rhinosinusitis. Background: Factors associated with recurrence of chronic rhinosinusitis (CRS) are still poorly recognized. Objective: To evaluate which risk factors could influence the risk of recurrence among patients undergoing endoscopic sinus surgery in long-term follow-up. Methods: Patients with CRS who underwent endoscopic sinus surgery were followed for an average period of 12 years in a nonconcurrent cohort. After surgery, patients were considered for an additional endoscopic sinus surgery if appropriate medical therapy failed during this period. The presence of nasal polyps, asthma, nonsteroidal anti-inflammatory drug-exacerbated respiratory disease, smoking habits, peripheral blood eosinophilia, and atopy were assessed. The recurrence-free interval between groups (with or without these risk factors) was analyzed by Kaplan-Meier curves, and the indication for a revisional surgery was considered to be the unfavorable event. Results: A total of 201 patients were enrolled in this study. Patients with chronic rhinosinusitis with nasal polyps (CRSwNP) were more likely than patients with chronic rhinosinusitis without nasal polyps (CRSsNP) to need revisional surgery (adjusted hazard ratio, 2.02). Asthma was the only factor that was significantly related to recurrence both in patients with CRSsNP (hazard ratio, 5.54) and in patients with CRSwNP (hazard ratio, 3.27). Although eosinophilia itself was not related to a higher chance of recurrence, its presence influenced the outcome of CRSwNP compared with CRSsNP and the impact of asthma among patients with CRSwNP. Conclusions: Prognosis in patients with CRSwNP was inferior to that in patients with CRSsNP.
Asthma was the only factor that increased the chance of recurrence in patients with either CRSsNP or CRSwNP. abstract_id: PUBMED:28122135 Programmed cell death-1 expression correlates with disease severity and IL-5 in chronic rhinosinusitis with nasal polyps. Background: Programmed cell death-1 (PD-1) is a negative regulator of T-cell responses. Expression of PD-1 and its ligands PD-L1 and PD-L2 in chronic rhinosinusitis with nasal polyps (CRSwNP) is poorly studied. Methods: Expression of PD-1, PD-L1, PD-L2, TGF-β, IL-5, and IL-10 mRNA was measured by real-time quantitative PCR on tissue homogenates of patients with CRSwNP (n = 21) and healthy controls (n = 21) and on primary epithelial cells. Disease severity was scored using the Lund-Mackay scores of maxillofacial computed tomography (CT) scans. Expression of PD-1 and PD-L1/L2 was evaluated at the cellular and tissue levels (n = 6) by flow cytometry and immunohistochemistry. Results: Programmed cell death-1 mRNA expression was increased in tissue homogenates from patients with CRSwNP compared with controls, irrespective of the atopy status. Importantly, expression of PD-1 correlated with the total CT scan scores (r = 0.5, P = 0.02). Additionally, a significant association was found between PD-1 mRNA and expression of IL-5 mRNA in control nasal tissue (r = 0.95, P < 0.0001) and in CRSwNP (r = 0.63, P = 0.002). PD-1 was expressed on different subsets of T cells and CD11b- dendritic cells. Both PD-1 and its ligands were expressed on primary epithelial cells from control nasal tissue and nasal polyp tissue. Conclusions: Higher PD-1 expression was found in CRSwNP than in nasal tissue from controls. This was associated with disease severity and tissue IL-5 expression but unrelated to the patients' atopy status. abstract_id: PUBMED:25295807 Perceived stress and severity of chronic rhinosinusitis in allergic and nonallergic patients. Chronic stress exposure carries greater risk of onset of atopic respiratory disorders such as rhinitis and asthma. The interaction between depression, anxiety, and severity of chronic rhinosinusitis (CRS) has been suggested. We aimed to assess the relationship between psychological stress, severity of CRS, and atopy. Sixty-three consecutive patients referred with CRS were asked to score the severity of rhinosinusitis symptoms on a visual analog scale and to fill in questionnaires on disease-specific quality of life and perceived stress (the 22-item Sino-Nasal Outcome Test [SNOT-22] and the measure of perceived stress [MPS] scale, respectively). Inclusion criteria for the study were a reliable allergy evaluation and a recent computerized tomography (CT) scan of the sinuses. Patients with nasal polyps (NPs), asthma, and previous surgery were excluded. The study group consisted of 14 allergic and 18 nonallergic patients with CRS without NPs (CRSsNPs). Correlation between MPS and SNOT-22 scores in the study group was highly significant (Pearson r = 0.61; p = 0.001). Patients with higher stress scores had significantly stronger postnasal discharge, thick discharge, cough, disturbed sleep, fatigue, and sadness. Postnasal drip was significantly stronger in patients with allergy. The correlation between SNOT-22 and CT scores was insignificant. The correlation between MPS and SNOT-22 scores suggests an interaction between severity of CRS and chronic stress, but not with the extent of the disease on CT in CRSsNPs.
Chronic psychological stress might be one of the factors that modify disease severity and may lead to uncontrolled disease in CRS patients. abstract_id: PUBMED:32761877 Serum eosinophil cationic protein: a prognostic factor for early postoperative recurrence of nasal polyps. Background: The objective of this work was to assess the efficacy of serum eosinophil cationic protein (ECP) concentration in predicting early postoperative recurrence in patients with chronic rhinosinusitis with nasal polyps (CRSwNP). Methods: We prospectively enrolled CRSwNP patients to receive bilateral functional endoscopic sinus surgery (FESS) and followed them for 1 year. Serum ECP level was measured within 1 week before surgery. Demographics and associated medical factors were analyzed with the surgical outcome, and nasal polyp histology was microscopically examined. Results: Overall, 58 patients met the inclusion criteria and underwent FESS. After at least a 1-year follow-up period, 9 patients had postoperative recurrence, with significantly higher serum ECP levels (p = 0.030). Receiver operating characteristic curve analysis showed the optimal cutoff level of serum ECP concentration for predicting the postoperative recurrence of nasal polyps was 21.8 µg/L (p = 0.030). Regardless of atopy status and histology type, logistic regression analysis showed that a higher ECP level was the sole significant factor related to early postoperative recurrence of nasal polyps (odds ratio, 54.8; p = 0.014). Cox proportional hazard regression analysis revealed that the hazard ratio of CRSwNP patients with an ECP level of >21.8 µg/L resulting in early postoperative recurrence was 7.6 (p = 0.011). Conclusion: Serum ECP appears to be a feasible predictor for early postoperative recurrence of nasal polyps. CRSwNP patients with preoperative serum ECP levels of ≥21.8 µg/L had an approximately 55-fold increased risk of early recurrence. CRSwNP patients with higher preoperative serum ECP levels should be closely monitored within the first year after surgery. abstract_id: PUBMED:19401038 Relationships between severity of chronic rhinosinusitis and nasal polyposis, asthma, and atopy. Background: The effect of comorbid conditions such as asthma and atopy on the severity of chronic rhinosinusitis (CRS) and the presence of nasal polyps (NPs) remains an area of investigation. We sought to elucidate the relationship among these entities. Methods: The study population included 106 consecutive patients who were referred to a multidisciplinary, university-based allergy and sinus clinic that underwent computed tomography (CT) scan, skin-prick testing, and had CRS. Data were analyzed to determine Lund-MacKay score (LMS), presence of NPs, asthma status, and sensitivity to seven classes of aeroallergens. Results: Skin tests were positive in 52 cases and negative in 54 cases. Although there was no statistical relationship between LMS and atopic status in the entire group, among the asthmatic subgroup, mean LMS was greater in nonatopic asthmatic patients than in atopic asthmatic patients. Asthmatic patients had a higher LMS than nonasthmatic patients (p < 0.0001). Asthmatic patients were more likely than nonasthmatic patients to have NPs (57.6% versus 25%; p = 0.0015), regardless of atopic status. Mean LMS was higher in NP patients compared with nonpolyp patients (p < 0.0001), independent of atopic status.
Mean LMS was not affected by sensitivity to any particular allergen, with the exception of cockroach-allergic patients who were more likely to have an LMS of >10 (p = 0.0236) and had more severe maxillary sinus involvement (p = 0.0391). Conclusion: These data indicate a strong relationship between CRS severity, as measured by LMS, and chronic airway inflammatory diseases, asthma, and NPs. The association between LMS and atopic status appears weak. The present study suggests that CRS is an inflammatory disease that occurs independently of systemic IgE-mediated pathways. abstract_id: PUBMED:26370180 Fibroblast levels are increased in chronic rhinosinusitis with nasal polyps and are associated with worse subjective disease severity. Background: Fibroblasts are implicated in tissue remodeling and recruitment of inflammatory cells in chronic rhinosinusitis (CRS). Populations of fibroblasts remain unquantified in CRS subtypes. The objectives of this study were to measure fibroblast populations in subtypes of CRS, and to investigate the association between fibroblasts and disease severity. Methods: Patients undergoing endoscopic sinus surgery (ESS) for CRS were prospectively enrolled from January 2011 to December 2014. Control subjects included patients undergoing endoscopic surgery for non-inflammatory conditions such as cerebrospinal fluid leak repair or non-hormone-secreting pituitary tumors. Patients completed 22-item Sino-Nasal Outcome Test (SNOT-22) questionnaires prior to surgery. Blood and tissue biopsies were taken during surgery. Percent of sinonasal fibroblasts was determined via flow cytometry by selecting fibroblast-specific protein (FSP)-positive and Mucin 1 (MUC1)-negative cells. Results: A total of 69 patients were enrolled: control (n = 24), CRS without nasal polyps (CRSsNP) (n = 13), CRS with nasal polyps (CRSwNP) (n = 22), and allergic fungal rhinosinusitis (AFRS) (n = 10). Patients with CRSwNP had significantly more fibroblasts than both control (p < 0.001) and CRSsNP (p < 0.01). Patients with AFRS had the most fibroblasts when compared to control (p < 0.0001), CRSsNP (p < 0.0001), and CRSwNP (p < 0.05). Atopy and asthma were not associated with increased fibroblasts in CRSwNP (p = 0.21, p = 0.26, respectively). Increased fibroblasts correlated with subjective disease severity as measured by SNOT-22 for CRSwNP (p = 0.003) and AFRS (p = 0.048). Conclusion: Sinonasal fibroblasts are increased in CRSwNP and AFRS compared to control and CRSsNP. Increased fibroblasts correlated with worse quality of life in CRSwNP and AFRS. abstract_id: PUBMED:32243094 Comorbidities associated with eosinophilic chronic rhinosinusitis: A systematic review and meta-analysis. Objective: Eosinophilic chronic rhinosinusitis (ECRS) is a histological subtype of CRS that is generally recognised as being more difficult to manage. Patients with ECRS tend to have greater disease severity and poorer treatment outcomes after sinus surgery when compared with non-ECRS patients. The histopathology and biomarker assessments of ECRS are often unavailable prior to surgery and may be impractical and costly to analyse. Thus, the primary objective of this study was to understand clinical comorbidities associated with ECRS. Design/setting: We searched three independent databases for articles that reported clinical CRS comorbidities associated with tissue eosinophilia. 
Data from studies with the same reported comorbidities were pooled, and a forest plot analysis was used to assess potential associations with four different conditions including allergic rhinitis, ASA sensitivity, asthma and atopy. The association between the phenotype of nasal polyps and ECRS was also quantified as a secondary objective. ECRS cut-off levels were as defined by the papers included. Main Outcome/results: Eighteen articles were identified. The presence of nasal polyps (the first numbers in brackets represent odds ratios) (5.85, 95% CI [3.61, 9.49], P < .00001), ASA sensitivity (5.63, 95% CI [3.43, 9.23], P < .00001), allergic rhinitis (1.84, 95% CI [1.27, 2.67], P = .001) and asthma (3.15, 95% CI [2.61, 3.82], P < .00001) were found to be significantly associated with tissue eosinophilia. Atopy, however, was not significantly associated with tissue eosinophilia (1.71, 95% CI [0.59, 4.95], P = .32). Conclusion: Certain clinical disease characteristics such as ASA sensitivity, allergic rhinitis and asthma are more associated with CRS patients with eosinophilia when compared to those without eosinophilia. The phenotype of nasal polyps was also associated with ECRS. It is important for surgeons to recognise these comorbidities to ensure correct diagnoses, management and follow-up are implemented. abstract_id: PUBMED:25688693 Comorbidity has no impact on eosinophil inflammation in the upper airways or on severity of the sinonasal disease in patients with nasal polyps. Objective: The study was designed to determine whether there is an association between comorbidity (atopy, bronchial asthma, and aspirin intolerance) and eosinophil infiltration of the upper airways, severity of the sinonasal disease and rate of revision sinus surgery in patients with nasal polyps. Material And Methods: One hundred and fifty patients were enrolled in the prospective study. Differences in CT score, rate of revision surgery, concentration of eotaxin and eosinophil cationic protein in nasal lavage fluid (NALF) and distribution of eosinophils in NALF and nasal tissue in patients with chronic rhinosinusitis with nasal polyps (CRSwNP), chronic rhinosinusitis without nasal polyps (CRSsNP) and control group were investigated. We focused on the relationship between presence of comorbidity (atopy, bronchial asthma and aspirin intolerance) and severity of the disease, the need of revision surgery and markers of eosinophil inflammation in upper airways in patients with CRSwNP. Results: Patients with CRSwNP had a more severe form of the sinonasal disease, a higher rate of revision FESS and a significantly higher presence of markers of eosinophil inflammation in NALF and nasal tissue than patients with CRSsNP (P < 0.05). Atopic and non-atopic asthma as well as aspirin sensitivity significantly more often coexisted with CRSwNP. Comorbidity did not influence eosinophil infiltration or severity of the disease in patients with CRSwNP. Conclusion: Presence of comorbidity (atopy, bronchial asthma and aspirin intolerance) has no impact on severity of the disease or eosinophil content in the upper airways in patients with CRSwNP. abstract_id: PUBMED:30112918 Prognosis of Chronic Rhinosinusitis With Nasal Polyps Using Preoperative Eosinophil/Basophil Levels and Treatment Compliance. Background Patients with chronic rhinosinusitis with nasal polyps (CRSwNP) have a high risk of disease recurrence and revision surgery.
The ability to predict a polyp recurrence in this patient population is critical in order to provide adequately tailored postoperative management. Objective We aim to explore the role of appropriate postoperative care in the prognosis of CRSwNP patients in relation to preoperative eosinophil and basophil levels. Methods This was a retrospective case series; data were collected for 102 CRSwNP patients over a period of 15 months after surgery. Baseline eosinophil and basophil levels were compared between patients with and without polyp recurrences. The analysis was then stratified based on clinical diagnosis, comorbidities (atopy, asthma, and aspirin allergy), a single versus multiple episodes of sinonasal polyp recurrences, and medication adherence. Results Of the 102 included patients, 65 (63.7%) of the patients experienced no recurrences, 26 (25.5%) experienced a single episode of recurrence, and 11 (10.8%) experienced multiple recurrences. Mean baseline eosinophil count and percentage of total white blood cells were significantly higher in the multiple recurrences group (0.70 × 10^9/L and 10%) compared with the no recurrences group (0.36 × 10^9/L and 5%). Adherence to prescribed medical therapy prior to the first episode of recurrence was significantly lower for the single exacerbations group (42.3%) than the multiple recurrences group (88.9%). Conclusions Patients with multiple recurrences of nasal polyps had significantly higher baseline eosinophil counts and significantly higher medication adherence compared to single exacerbations of nasal polyps. Single exacerbations may not reflect true failures of surgery but rather a failure of postoperative medical care. Basophil levels were inadequate to predict polyp recurrence rates. Answer: The role of atopy in chronic rhinosinusitis with nasal polyps (CRSwNP) is a subject of debate, with various studies providing different insights. According to a prospective study of 210 patients with CRSwNP, there was no significant association between atopy status and either disease severity or recurrence, although atopic patients were found to be significantly younger than non-atopic patients (PUBMED:27268094). This suggests that atopy may not be a critical factor in determining the severity or recurrence of CRSwNP. In contrast, another study found that asthma, which can be related to atopy, was the only factor significantly related to recurrence in patients with CRSwNP, indicating that asthma may be a more dominant factor for recurrence than atopy itself (PUBMED:31425833). Additionally, a systematic review and meta-analysis reported that allergic rhinitis, which is often associated with atopy, was significantly associated with eosinophilic chronic rhinosinusitis (ECRS), a subtype of CRS. However, atopy itself was not significantly associated with tissue eosinophilia in CRS patients (PUBMED:32243094). Other studies have explored different aspects of the relationship between atopy and CRSwNP. One study found that atopy did not correlate with disease severity as measured by PD-1 expression and IL-5 levels in CRSwNP patients (PUBMED:28122135). Another study indicated that comorbidity such as atopy, bronchial asthma, and aspirin intolerance did not impact the severity of the disease or eosinophil content in the upper airways in patients with CRSwNP (PUBMED:25688693). In summary, while atopy may be associated with CRSwNP, the evidence suggests that it does not significantly affect the severity or recurrence of the disease.
Asthma and allergic rhinitis, which can be related to atopy, may have a more substantial impact on the recurrence of CRSwNP, but atopy alone does not appear to be a decisive factor.
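The pooled odds ratios reported in the meta-analysis above (PUBMED:32243094) are obtained by combining study-level estimates on the log scale. The sketch below uses fixed-effect inverse-variance weighting for brevity (the published analysis may have used a random-effects model); the study values and the function name are hypothetical.

```python
import math

def pooled_or(studies, z=1.96):
    """Fixed-effect (inverse-variance) pooling of reported odds ratios.

    studies: list of (odds_ratio, ci_low, ci_high) tuples; the standard
    error of each log-OR is recovered from its reported 95% CI.
    """
    num = den = 0.0
    for or_i, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        w = 1 / se ** 2                      # inverse-variance weight
        num += w * math.log(or_i)
        den += w
    log_or, se_pooled = num / den, math.sqrt(1 / den)
    return (math.exp(log_or),
            math.exp(log_or - z * se_pooled),
            math.exp(log_or + z * se_pooled))

# Two hypothetical studies of asthma vs tissue eosinophilia:
print(pooled_or([(3.2, 2.1, 4.9), (2.9, 2.2, 3.8)]))
```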
Instruction: Does a child's fear of needles decrease through a learning event with needles? Abstracts: abstract_id: PUBMED:25113165 Does a child's fear of needles decrease through a learning event with needles? Background: Most children have a fear of needles. Suitable preparation can decrease the pain and fear of needles in hospitals; however, few have examined how such preparation affects healthy children. This study examined whether learning with needles decreases fear of needles and changes motivation to get vaccinations in school-age children and the possible association between fear of needles and motivation toward vaccinations. Methods: This study included children participating in the "Let's Be Doctors" event, which was held in 4 child centers in Tsukuba city, Ibaraki, Japan. In this event, children learned about injections and how a vaccine works, and injected a vaccine (water) into skin (sponge) using a real syringe and imitation needle. Data were collected just before and after the event by anonymous self-assessment questionnaires that used a 4-point Likert scale to assess fear of needles, motivation to get vaccinations, recommendation of vaccinations, and fear toward doctors among the children. Answers were divided into two categories for statistical analysis. Results: In total, 194 children participated in the event and 191 children answered the questionnaire (response rate 98.5%). We analyzed 180 subjects, comprising 79 boys (43.9%) and 94 girls (52.2%), mean age of 8.1 ± 1.0 years. The number of children reporting a fear of needles decreased from 69 (38.3%) before the event to 51 (28.3%) after the event, and those unwilling to get vaccinations decreased from 48 (26.7%) to 27 (15.0%). Children who reported fear of needles before the event were more unwilling to get vaccinations than those with no fear of needles (36 [52.2%] vs. 12 [10.8%]), while after the event the number of needle-fearing children unwilling to get vaccinations decreased to 19 (27.5%). Conclusions: Children's fear of needles and unwillingness to get vaccinations were decreased after experiencing a learning event with needles. The fear of needles is associated with a negative motivation to get vaccinations in children. abstract_id: PUBMED:19283260 Fear of needles--nature and prevalence in general practice. Background: The extent to which a fear of needles influences health decisions remains largely unexplored. This study investigated the prevalence of fear of needles in a southeast Queensland community, described associated symptoms, and highlighted health care avoidance tendencies of affected individuals. Methods: One hundred and seventy-seven participants attending an outer urban general practice responded to a questionnaire on fear of needles, symptoms associated with needles and its influence on their use of medical care. Results: Twenty-two percent of participants reported a fear of needles. Affected participants were more likely than participants with no fear to report vasovagal symptoms, have had a previous traumatic needle experience (46.2 vs. 16.4%, p<0.001) and avoid medical treatment involving needles (20.5 vs. 2.3%, p<0.001). Discussion: Fear of needles is common and is associated with health care avoidance. Health professionals could better identify and manage patients who have a fear of needles by recognising associated vasovagal symptoms and past traumatic experiences. abstract_id: PUBMED:4025517 The fear of needles in children. 
Encounters with needles are frightening experiences for most young children. This study explores the effect of a single session, cognitively-based intervention program on needle fears in hospitalized children. Results demonstrate significant changes in behavior, heart rate, and pain perception associated with post-intervention injection. Implications for routine pediatric practice are discussed. abstract_id: PUBMED:16894439 Belonephobia--a fear of needles. Belonephobia is an unreasonable and altered response due to the fear of needles. It affects up to 10% of the population and has implications for treatment and follow-up, especially in the paediatric setting (see Case study). A three-step behavioural approach involving recognition and relaxation, control and preparation, and graded exposure, can be effective in overcoming belonephobia. This will assist with non-urgent minor procedures being undertaken. abstract_id: PUBMED:30109720 The fear of needles: A systematic review and meta-analysis. Aims: The aim of this study was to evaluate the prevalence of needle fear and summarize the characteristics of individuals who exhibit this fear. Background: Injections are among the most common medical procedures, yet fear of needles can result in avoidance of preventive measures and treatment. Design: Systematic review and meta-analysis. Data Sources: MEDLINE (1966-2017), Embase (1947-2017), PsycINFO (1967-2017), and CINAHL (1961-2017) were searched, with no restrictions by age, gender, race, language, or country. Review Methods: The prevalence of needle fear was calculated and restricted maximum likelihood random effects models were used for meta-analysis and meta-regression. Results: The search yielded 119 original research articles which are included in this review, of which 35 contained sufficient information for meta-analysis. The majority of children exhibited needle fear, while prevalence estimates for needle fear ranged from 20-50% in adolescents and 20-30% in young adults. In general, needle fear decreased with increasing age. Both needle fear and needle phobia were more prevalent in females than males. Avoidance of influenza vaccination because of needle fear occurred in 16% of adult patients, 27% of hospital employees, 18% of workers at long-term care facilities, and 8% of healthcare workers at hospitals. Needle fear was common when undergoing venipuncture, blood donation, and in those with chronic conditions requiring injection. Conclusions: Fear of needles is common in patients requiring preventive care and in those undergoing treatment. Greater attention should be directed to interventions which alleviate fear in high-risk groups. abstract_id: PUBMED:28780134 Initial validation of a virtual blood draw exposure paradigm for fear of blood and needles. Fear of blood, injections, and needles commonly prevents or delays individuals' receipt of health care, such as vaccines or blood draws. Innovative methods are needed to overcome these fears and reduce anxiety related to activities of this nature. The present study describes initial testing of an arm illusion paradigm that may prove useful during early phases of graded exposure for people with blood and needle fear. Seventy-four undergraduate students aged 18-29 years were tested.
In line with study aims, results indicated that the virtual blood draw paradigm promoted strong perceptions of arm ownership and elicited significant changes in physiological indices (blood pressure, heart rate, electrodermal activity, respiratory rate) in response to key procedure elements (e.g., needle insertion). Further, bivariate correlations indicated that individual differences in self-reported blood and needle fear collected prior to the illusion paradigm were significantly associated with presyncopal symptoms reported following the procedure. In regression analyses, self-reported measures of blood and needle fear explained unique variance in presyncopal symptoms even after controlling for general state anxiety. These findings provide initial support for the virtual blood draw paradigm as a promising tool to help provide graded exposure to medical procedures involving needles and blood draw. abstract_id: PUBMED:23538469 Frequency of fear of needles and impact of a multidisciplinary educational approach towards pregnant women with diabetes. Purpose: To evaluate the frequency of fear of needles and the impact of a multidisciplinary educational program in women with pre-gestational and gestational diabetes taking insulin during pregnancy. Methods: The short Diabetes Fear of Injecting and Self-testing Questionnaire (D-FISQ), composed of two subscales that assess fear of self-injection (FSI) and fear of self-testing (FST), was administered twice during pregnancy to 65 pregnant women with pre-gestational and gestational diabetes: at the first endocrine consult and within the last two weeks of pregnancy or postpartum. An organized multidisciplinary program provided diabetes education during pregnancy. Statistical analysis was carried out by Wilcoxon and McNemar tests and Spearman correlation. A p<0.05 was considered to be significant. Results: Data from the short D-FISQ questionnaire show that 43.1% of pregnant women were afraid of needles in the first evaluation. There was a significant reduction in scores for FSI and FST subscales between the first and second assessments (first FSI 38.5% compared with second 12.7%, p=0.001; first FST 27.7% compared with second FST 14.3%, p=0.012). Conclusions: The fear of needles is common in pregnant women on insulin therapy, and an organized multidisciplinary educational diabetes program applied during pregnancy reduces scores of such fear. abstract_id: PUBMED:24188948 Intra-abdominal insertion of sewing needles: a rare method of child abuse. The insertion of sewing needles into the abdominal cavity is an uncommon form of child abuse. We report a 2-and-a-half-year-old boy with 2 intra-abdominal sewing needles that were discovered during the evaluation of chronic abdominal pain and vomiting. This case report illustrates the wide range with which abusive injury can present in children. Pediatricians need to be alerted to this newly recognized and increasing form of child abuse so that they examine and diagnose their patients appropriately. abstract_id: PUBMED:32152693 Comparisons of venous indwelling needles and intravenous infusion with steel needles in pediatric outpatient care. Objective: To compare the application effects of venous indwelling needles and intravenous infusion with steel needles in pediatric outpatient care and to observe the complications of the two approaches. Methods: A total of 110 children who received intravenous infusions in our hospital from March 2018 to May 2018 were selected as study subjects.
Children were assigned to an observation group (n = 55) or a control group (n = 55) based on a random number table; children in the control group received intravenous infusion with steel needles, while children in the observation group received intravenous infusion with indwelling needles. Their mental states were evaluated by the state-trait anxiety inventory (STAI) and modified Yale preoperative anxiety score (MYPAS) in the two groups. At the same time, the total number of punctures in 3 days of infusion, complications and the parents' satisfaction with nursing service were compared between the two groups. Results: The difference in STAI and MYPAS scores was statistically significant between the two groups (P < 0.05). The difference in the total number of punctures on the first day was not statistically significant (P > 0.05) between the two groups, but the difference in the total number of punctures on the second day and the third day was statistically significant (P < 0.05). Besides this, the incidence of complications in the observation group was less than that in the control group, and the nursing satisfaction in the observation group was higher than that in the control group (P < 0.05). Conclusion: The psychological impact on children is smaller with venous indwelling needles than with steel needles. Moreover, venous indwelling needles avoid repeated punctures; therefore, parents' satisfaction with nursing is improved, and the approach is worth popularizing. abstract_id: PUBMED:21979400 Interferon and the fear of needles: a case report. The treatment of viral hepatitis C infection uses a combination of pegylated interferon and ribavirin. Psychological preparation of the patient is vital to ensure adherence to the treatment. In our center, the nurse prepares this treatment according to an established educative protocol; however, some patients have special needs that require individualized attention. One such situation observed by the nurse is that the patients frequently admit to the fear of needle puncture (the peginterferon treatment is administered subcutaneously) and are unable to inject themselves. We describe a representative case and the care plan to manage the patient's fear so that the patient acquires confidence in his or her ability to self-inject. This facilitates autonomy and coresponsibility for the treatment, and the nurse can develop care approaches to combat the patient's fear of needles. Answer: Yes, a child's fear of needles can decrease through a learning event with needles. A study conducted in Tsukuba city, Ibaraki, Japan, which included children participating in the "Let's Be Doctors" event, found that children's fear of needles and unwillingness to get vaccinations were decreased after experiencing a learning event with needles. The number of children reporting a fear of needles decreased from 38.3% before the event to 28.3% after the event, and those unwilling to get vaccinations decreased from 26.7% to 15.0%. The study concluded that the fear of needles is associated with a negative motivation to get vaccinations in children, and suitable preparation can decrease the pain and fear of needles in hospitals (PUBMED:25113165).
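Several of the studies above compare paired pre/post binary outcomes (fear vs no fear in the same participants before and after an intervention), and PUBMED:23538469 names the McNemar test explicitly. A minimal sketch of that test, with hypothetical discordant-pair counts, follows.

```python
def mcnemar_chi2(b, c):
    """Continuity-corrected McNemar chi-square for paired binary outcomes.

    b: participants fearful before but not after the intervention;
    c: participants not fearful before but fearful after.
    Only these discordant pairs carry information about change.
    """
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical counts: 24 children lost their fear, 6 newly reported it.
print(mcnemar_chi2(24, 6))  # compare with 3.84, the chi-square cutoff (df=1, alpha=0.05)
```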
Instruction: Treatment of Ruptured Anterior Communicating Artery Aneurysms: Equipoise in the Endovascular Era? Abstracts: abstract_id: PUBMED:26308643 Treatment of Ruptured Anterior Communicating Artery Aneurysms: Equipoise in the Endovascular Era? Background: Ruptured anterior communicating artery (ACoA) aneurysms are heterogeneous intracranial aneurysms whose diverse morphological features influence treatment modality. Objective: To compare clinical outcomes and complications of all ruptured ACoA aneurysms treated by clipping or coiling in a modern institutional trial. Methods: All patients with ruptured ACoA aneurysms in the Barrow Ruptured Aneurysm Trial were included. Clinical follow-up at 1 and 3 years was analyzed; charts were reviewed for patient demographics, aneurysm characteristics, and in-hospital complications. Results: This cohort included 130 patients (mean age, 52.5 years). Mean aneurysm size was 5.8 mm. Most aneurysm domes projected anteriorly (n = 52). After randomization and crossover, 91 ACoA aneurysms (70%) were clipped and 39 (30%) were coiled. Twenty-two patients (16.9%) initially randomized to coiling crossed over to clipping after evaluation. No patients crossed over from clipping to coiling. Characteristics precluding aneurysms from coiling included unfavorable dome-to-neck ratio, lesions difficult to access by catheter, and branch vessel involvement. Aneurysm size and dome projection were not significantly associated with treatment group, clinical outcome, or retreatment. No significant difference existed in clinical outcome (modified Rankin Scale scores) between groups at discharge or at 1-year or 3-year follow-up using as-treated and intention-to-treat analyses. Retreatment was performed in 3 clipped patients (2.3%) and 3 coiled patients (2.3%). Conclusion: Ruptured ACoA aneurysms, regardless of size and projection, were safely treated by both treatment modalities in a large-scale randomized clinical trial. Clinical outcomes and stroke rates did not differ significantly in as-treated or intention-to-treat analyses. abstract_id: PUBMED:38169854 The impact of selection bias in the treatment for ruptured anterior communicating artery aneurysms: different results or different patients? Background: Anterior communicating artery is one of the most frequent locations for the development of intracranial aneurysm. The availability and advancement of different treatment modalities allow for case-specific selection but potentially impact our ability to assess equipoise among them. Objective: To investigate and compare clinical and morphological variables among surgical and endovascular treatment groups with ruptured anterior communicating artery aneurysms. Methods: Data from patients from a single university hospital treated for ruptured anterior communicating aneurysms after multidisciplinary discussion in a period from January 2009 to January 2020 were retrospectively reviewed. Demographics, clinical status, aneurysm morphologic features and in-hospital complications were registered for each treatment (endovascular coiling vs. microsurgical clipping). Clinical assessment was made from outpatient evaluation at 1-year follow-up. Results: A total of 119 patients was obtained adding surgical (n = 80) and endovascular (n = 39) treatment groups. No significant differences between groups were detected regarding gender, age at treatment or other risk factors. Global complication rates (p = 0.335, p = 0.225, p = 0.428) and clinical outcome (p = 0.802) were similar between the two groups.
Univariate and multivariate analyses revealed statistically significant differences between endovascular and surgical treatment groups regarding dome orientation (p = 0.011), aneurysm height (p < 0.001) and maximum diameter (p < 0.001), aspect-ratio (p < 0.001), dome-to-neck ratio (p < 0.001) and dome diameter (p = 0.014). Conclusions: Despite similar clinical outcomes and rates of complications, morphological differences highlight the presence of a selection bias and high heterogeneity, which hampers inferential analysis when comparing both techniques. abstract_id: PUBMED:31512721 Treatment of Ruptured Anterior Communicating Artery Aneurysms: Equipoise in the Endovascular Era? N/A abstract_id: PUBMED:35027349 Optimizing Small, Low-Risk, Unruptured Intracranial Aneurysm Treatment Using Game Theory. The incidental diagnosis of unruptured intracranial aneurysms has been increasing in the past several decades. A significant proportion represent small, low-risk, unruptured intracranial aneurysms for which there is equipoise on whether to offer treatment or conservative management. Given this uncertainty, patients may not always be comfortable with their physicians' recommendations. Herein, we use game theory to study the interactions between physicians and patients to determine how conflict and cooperation affect the management of small, low-risk, unruptured intracranial aneurysms. We constructed a game theory model of the interaction between physicians and patients with respect to decision-making for a small, low-risk, unruptured intracranial aneurysm in an asymptomatic patient when there is perceived equipoise between whether to treat or manage conservatively. Assuming that both the physician and patient are rational and that eliciting individual patient preferences is not practical, the physician should play the game based on an ex ante probability of meeting a patient with a certain type of preference. This recommendation means that the expectations of the physician regarding the patient's preferences should guide the decision to offer treatment or conservative management as a first option for a small, asymptomatic, low-risk, unruptured intracranial aneurysm for which there is clinical equipoise. abstract_id: PUBMED:37480480 A shift from open to endovascular repair in the treatment of ruptured middle cerebral artery aneurysms: a single institution experience. Purpose: Middle cerebral artery aneurysms were underrepresented in the two largest trials (BRAT and ISAT) for the treatment of ruptured intracranial aneurysms. Recent institutional series addressing the choice between endovascular or open repair for this subset of aneurysms are few and have not yielded a definitive conclusion. We compare clinical outcomes of patients presenting with acute subarachnoid hemorrhage from ruptured middle cerebral artery aneurysms undergoing either open or endovascular repair. Methods: We conducted a retrospective review of 138 consecutive patients with ruptured middle cerebral artery aneurysms admitted to our institution from January 2008 to March 2019 to compare endovascular and open surgical outcomes. Results: Of the ruptured middle cerebral artery aneurysms, 57 underwent endovascular repair while 81 were treated with open surgery. Over the study period, there was a notable shift in practice toward more frequent endovascular treatment of ruptured MCA aneurysms (31% in 2008 vs. 91% in 2018).
At discharge (49.1% vs 29.6%; p = 0.002) and at 6 months (84.3% vs 58.6%; p = 0.003), the endovascular repair group had a higher proportion of patients with good clinical outcomes (mRS 0-2) than the open surgery group. Long-term follow-up data (endovascular 54.9 ± 37.9 months vs clipping 18.6 ± 13.4 months) showed no difference in rebleeding (1.8% vs 3.7%, p = 0.642) or retreatment (5.3% vs 3.7%, p = 0.691) between the two groups. Conclusion: Our series suggests equipoise in the treatment of ruptured middle cerebral artery aneurysms and demonstrates endovascular repair as a potentially feasible treatment strategy. Future randomized trials could clarify the roles of these treatment modalities. abstract_id: PUBMED:32726425 Microsurgical Treatment of Unruptured Anterior Communicating Artery Aneurysms: Approaches and Outcomes in a Large Contemporary Series and Review of the Literature. Background: With advances in endovascular techniques, the relative roles of microsurgery and endovascular therapy in the management of intracranial aneurysms have become less clear, and data regarding treatment-specific outcomes are increasingly important. Objective: To describe our experience with microsurgery in a large series of unruptured anterior communicating artery aneurysms (ACOMMAAs) and detail our treatment decision-making process based on individual aneurysm morphology. Methods: We retrospectively reviewed unruptured ACOMMAAs treated microsurgically at our center between 1997 and 2018, recording aneurysm size, surgical approach, occlusion rates, neurological outcomes, and complications. Results: We treated 300 unruptured ACOMMAAs in 300 patients, mean age 49 yr, 180 (60.0%) female, including 245 (81.7%) small, 50 (16.7%) large, and 5 (1.7%) giant aneurysms. Most underwent a pterional approach (253, 84.3%), with orbitozygomatic or anterior interhemispheric approaches in 35 (11.7%) and 12 (4%), respectively. Greater than 95% occlusion was achieved in 284 (94.7%), 90% to 95% occlusion in 13 (4.3%), and 3 (1.0%) underwent wrapping. Serious complications occurred in 3 patients (1.0%) resulting in 2 deaths (0.7%), and minor complications in 12 (4.0%). At final follow-up, good neurological outcome was observed in 297 (99.0%) patients. Conclusion: Given clinical equipoise regarding optimal treatment of unruptured ACOMMAAs, this work provides a benchmark for achievable microsurgical outcomes. Future surgical/endovascular studies should separate results of ruptured and unruptured lesions and describe aneurysm morphology to allow for better comparison of outcomes between treatments. We suggest that open microsurgery represents an important option for select, morphologically complex ACOMMAAs, and that a multidisciplinary approach offers optimal patient outcomes. abstract_id: PUBMED:33956233 Current decision support tools fail to agree or predict therapeutic decisions in a single cohort of unruptured intracranial aneurysms. Background: There is limited evidence to direct the management of unruptured intracranial aneurysms. Models extrapolated from existing data have been proposed to guide treatment recommendations. The aim of this study was to assess whether a consensus-based treatment score (UIATS) or rupture rate estimation model (PHASES) can be used to benchmark UK multi-disciplinary team (MDT) practice.
Methods: Prospective data were collected on a consecutive series of all patients with unruptured intracranial aneurysms (UIAs) presenting to a major UK neurovascular centre between 2012 and 2015. The agreement between the UIATS and PHASES scores, and their sensitivity and specificity in predicting the real-world MDT outcome, were calculated and compared. Results: A total of 366 patients (456 aneurysms) were included in the analysis. The agreement between UIATS and MDT recommendation was low (weighted kappa 0.26 [95% CI 0.19, 0.32]); sensitivity and specificity were also low, at 36% and 52%, respectively. Groups that the MDT allocated to treatment, equipoise, or no treatment had significantly different PHASES scores (p = 0.004). There was no significant difference between the two scores when predicting patients for whom the MDT outcome was to recommend aneurysm treatment, but the UIATS score was superior in predicting patients who received an MDT recommendation of 'treatment-equipoise' or 'not-for-treatment' (AUC of 0.73 compared to 0.59 for PHASES). Conclusions: The models studied failed to agree with the consensus view of the multi-disciplinary team in a major neurovascular centre. We conclude that decision support tools such as the UIATS and PHASES scores should not be blindly introduced into individual institutions without prior internal validation, as they may not represent the local reality. abstract_id: PUBMED:38105434 Comparative cost analysis of endovascular and open approaches for elective treatment of middle cerebral artery aneurysms. Background: Intracranial aneurysms of the middle cerebral artery can be treated using several open surgical and endovascular approaches. Given the growing evidence of clinical equipoise between these various treatment strategies, there is a need to assess the costs associated with each. Methods: Cost of aneurysm treatment was divided into two categories for comparison. "Initial cost" comprised the total in-hospital expenses for initial aneurysm treatment and "total cost" comprised initial aneurysm treatment and all expenses relating to readmission due to treatment-related complications, prescribed catheter angiograms for monitoring of treatment stability, and any retreatments needed for a given aneurysm. The open surgical group was subdivided into a pterional approach group and a lateral supraorbital (LSO) approach group for comparison. Results: Median initial cost was $37,152 (IQR $31,318-$44,947) for aneurysms treated with the pterional approach, $29,452 (IQR $27,779-$32,826) for aneurysms treated with the LSO approach, and $19,587 (IQR $14,125-$30,521) for aneurysms treated with endovascular approaches. The median total cost was $39,737 (IQR $33,891-$62,259) for aneurysms treated with the pterional approach, $31,785 (IQR $29,513-$41,099) for aneurysms treated with the LSO approach, and $24,578 (IQR $18,977-$34,547) for aneurysms treated with endovascular approaches. Analysis of variance demonstrated significant variance across groups for both initial and total cost (p = 0.004, p = 0.008, respectively). In our subsequent analysis, initial cost and total cost were higher in the pterional group than the endovascular group (p = 0.003 and p = 0.006, respectively). Conclusions: Endovascular treatment of elective aneurysms has a significantly lower cost than open surgical treatment with the pterional approach, but not with the LSO approach. For aneurysms not amenable to endovascular treatment, a minimally invasive LSO approach carries a lower cost burden than a pterional approach.
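The UIATS/PHASES evaluation above rests on agreement (weighted kappa) and classification metrics (sensitivity, specificity, AUC). As a minimal sketch of how such metrics are computed, assuming scikit-learn is available and using invented binary recommendations rather than the study's real data:

```python
from sklearn.metrics import cohen_kappa_score

# Invented labels: 1 = recommend treatment, 0 = no treatment. The
# study's per-aneurysm UIATS scores and MDT outcomes are not
# reproduced here.
mdt   = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
uiats = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

kappa = cohen_kappa_score(mdt, uiats, weights="linear")

tp = sum(m == 1 and u == 1 for m, u in zip(mdt, uiats))
tn = sum(m == 0 and u == 0 for m, u in zip(mdt, uiats))
fp = sum(m == 0 and u == 1 for m, u in zip(mdt, uiats))
fn = sum(m == 1 and u == 0 for m, u in zip(mdt, uiats))

sensitivity = tp / (tp + fn)  # share of MDT 'treat' decisions the tool catches
specificity = tn / (tn + fp)  # share of MDT 'no treat' decisions the tool catches
print(f"kappa = {kappa:.2f}, sens = {sensitivity:.2f}, spec = {specificity:.2f}")
```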
abstract_id: PUBMED:25465111 The durability of endovascular coiling versus neurosurgical clipping of ruptured cerebral aneurysms: 18 year follow-up of the UK cohort of the International Subarachnoid Aneurysm Trial (ISAT). Background: Previous analyses of the International Subarachnoid Aneurysm Trial (ISAT) cohort have reported on the risks of recurrent subarachnoid haemorrhage and death or dependency for a minimum of 5 years and up to a maximum of 14 years after treatment of a ruptured intracranial aneurysm with either neurosurgical clipping or endovascular coiling. At 1 year there was a 7% absolute and a 24% relative risk reduction of death and dependency in the coiling group compared with the clipping group, but the medium-term results showed the increased need for re-treatment of the target aneurysm in the patients given coiling. We report the long-term follow-up of patients in this UK cohort. Methods: In ISAT, patients were randomly allocated to either neurosurgical clipping or endovascular coiling after a subarachnoid haemorrhage, assuming treatment equipoise, between Sept 12, 1994, and May 1, 2002. We followed up 1644 patients in 22 UK neurosurgical centres for death and clinical outcomes for 10·0-18·5 years. We assessed dependency as self-reported modified Rankin scale score obtained through yearly questionnaires. Data for recurrent aneurysms and rebleeding events were collected from questionnaires and from hospital and general practitioner records. The Office for National Statistics supplied data on deaths. This study is registered, number ISRCTN49866681. Findings: At 10 years, 674 (83%) of 809 patients allocated endovascular coiling and 657 (79%) of 835 patients allocated neurosurgical clipping were alive (odds ratio [OR] 1·35, 95% CI 1·06-1·73). Of 1003 individuals who returned a questionnaire at 10 years, 435 (82%) patients treated with endovascular coiling and 370 (78%) patients treated with neurosurgical clipping were independent (modified Rankin scale score 0-2; OR 1·25; 95% CI 0·92-1·71). Patients in the endovascular treatment group were more likely to be alive and independent at 10 years than were patients in the neurosurgery group (OR 1·34, 95% CI 1·07-1·67). 33 patients had a recurrent subarachnoid haemorrhage more than 1 year after their initial haemorrhage (17 from the target aneurysm). Interpretation: Although rates of increased dependency alone did not differ between groups, the probability of death or dependency was significantly greater in the neurosurgical group than in the endovascular group. Rebleeding was more likely after endovascular coiling than after neurosurgical clipping, but the risk was small and the probability of disability-free survival was significantly greater in the endovascular group than in the neurosurgical group at 10 years. Funding: UK Medical Research Council. abstract_id: PUBMED:33073847 Survival, Dependency, and Health-Related Quality of Life in Patients With Ruptured Intracranial Aneurysm: 10-Year Follow-up of the United Kingdom Cohort of the International Subarachnoid Aneurysm Trial. Background: Previous analyses of the International Subarachnoid Aneurysm Trial (ISAT) cohort have reported on clinical outcomes after treatment of a ruptured intracranial aneurysm with either neurosurgical clipping or endovascular coiling. Objective: To evaluate the long-term quality-adjusted life years (QALYs) gained with endovascular coiling compared with neurosurgical clipping in the UK cohort of ISAT.
Methods: Between September 12, 1994 and May 1, 2002, patients with ruptured intracranial aneurysms for whom treatment equipoise was assumed were randomly allocated to either neurosurgical clipping or endovascular coiling. We followed up 1644 patients in 22 UK neurosurgical centers for a minimum of 10 yr. Health-related quality of life (HRQoL) was collected through yearly questionnaires, measured by utilities calculated from the EQ-5D-3L. We compared HRQoL between the 2 treatment groups over a period of 10 yr. In all, 1-yr, 5-yr, and 10-yr QALYs were estimated by combining utility and survival information. Results: Higher average utility values were found in the endovascular group throughout the follow-up period, with mean differences between groups statistically significant in most years. The 10-yr QALYs were estimated to be 6.68 (95% CI: 6.45-6.90) in the coiling group and 6.32 (95% CI: 6.10-6.55) in the clipping group, a significant mean difference of 0.36 (95% CI: 0.04-0.66). A third of this mean QALY gain was estimated to derive solely from HRQoL differences. Conclusion: HRQoL after treatment of a ruptured intracranial aneurysm was better after endovascular coiling compared to neurosurgical clipping, which contributed significantly to the QALYs gained over a 10-yr period. Answer: The treatment of ruptured anterior communicating artery (ACoA) aneurysms has been a subject of debate, particularly in the context of whether there is equipoise between endovascular and surgical treatment modalities in the endovascular era. Several studies have investigated this issue, comparing clinical outcomes, complications, and other factors between the two treatment approaches. One study included in the Barrow Ruptured Aneurysm Trial analyzed the treatment of ruptured ACoA aneurysms by either clipping or coiling. The study found no significant difference in clinical outcomes or stroke rates between the two groups at discharge or at 1-year or 3-year follow-up, suggesting that both treatment modalities were safe and effective for ruptured ACoA aneurysms (PUBMED:26308643). Another study that retrospectively reviewed data from patients treated for ruptured anterior communicating aneurysms found similar clinical outcomes and rates of complications between surgical and endovascular treatment groups. However, the study noted significant morphological differences between the groups, indicating the presence of a selection bias and high heterogeneity, which complicates direct comparisons between the two techniques (PUBMED:38169854). A shift from open to endovascular repair has also been observed in the treatment of ruptured middle cerebral artery aneurysms, with a single institution experience suggesting equipoise in the treatment of these aneurysms and demonstrating endovascular repair as a potentially feasible treatment strategy (PUBMED:37480480). In the context of unruptured ACoA aneurysms, a large contemporary series highlighted the importance of microsurgery as an option for select, morphologically complex ACoA aneurysms, suggesting that a multidisciplinary approach offers optimal patient outcomes (PUBMED:32726425). The International Subarachnoid Aneurysm Trial (ISAT) provided long-term follow-up data, showing that patients treated with endovascular coiling were more likely to be alive and independent at 10 years than those treated with neurosurgical clipping. However, there was a small increase in the risk of rebleeding after endovascular coiling (PUBMED:25465111).
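The QALY figures in the ISAT follow-up are obtained by combining yearly EQ-5D utilities with survival: each year lived at utility u contributes u QALYs. A minimal, undiscounted sketch of that calculation, with invented utility values (the trial's patient-level utilities are not reproduced here):

```python
# Invented yearly EQ-5D utilities for one patient over 10 years of
# follow-up; ISAT derived real values from yearly questionnaires.
yearly_utility = [0.80, 0.78, 0.76, 0.75, 0.74,
                  0.72, 0.70, 0.68, 0.66, 0.65]

# QALYs = sum of (utility x time); a patient dying mid-year would
# contribute only a fraction of that year's utility.
qalys_10yr = sum(yearly_utility)
print(f"10-year QALYs = {qalys_10yr:.2f}")

# The reported group difference is then just a difference of means:
# 6.68 (coiling) - 6.32 (clipping) = 0.36 QALYs.
print(f"reported difference = {6.68 - 6.32:.2f}")
```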
Instruction: Does a telephone follow-up intervention for patients discharged with acute myocardial infarction have long-term effects on health-related quality of life? Abstracts: abstract_id: PUBMED:19220616 Does a telephone follow-up intervention for patients discharged with acute myocardial infarction have long-term effects on health-related quality of life? A randomised controlled trial. Aims: An earlier combined proactive and reactive telephone follow-up intervention for acute myocardial infarction patients after discharge from hospital showed positive effects after six months. The aim of the present study was to assess whether the intervention has long-term effects up to 18 months after discharge. Design: A prospective randomised controlled trial with 18 months of follow-up. Method: The trial was conducted with 288 patients allocated to a telephone follow-up intervention group (n = 156) or control group (n = 132). The primary endpoint was health-related quality of life using the SF-36. Secondary endpoints included smoking and exercise habits, return to work and rehospitalisation due to chest pain. Results: There were significant improvements over time on most dimensions of health-related quality of life in both the intervention and control groups, reaching US norm population levels on most SF-36 dimensions and summary scores. The intervention group showed no overall significant improvement beyond six months in the physical or mental summary scores, but there was a significant effect for those aged 70 or above. Although there was a promising effect for rehospitalisation due to chest pain, no significant differences were found between the groups on the secondary endpoints after six months. Conclusion: This study demonstrated that despite positive short-term effects at six months, the telephone follow-up intervention had no long-term effects on health-related quality of life or secondary endpoints. However, the potential for improvement beyond six months was less than anticipated, reflecting reduced morbidity among acute myocardial infarction patients. Relevance To Clinical Practice: Telephone follow-up after discharge from hospital is an easily implementable follow-up intervention enabling individualised provision of information and support in a time often experienced as stressful by patients. Our study indicates that six months is an adequate support period. Despite positive results six months after discharge, no significant added long-term effects of telephone follow-up compared to usual care were found in this study. abstract_id: PUBMED:35912146 Effect of PCI Standardized Telephone Follow-Up Service Mode on Out-of-Hospital Complications, Rehospitalization Rate, and Quality of Life of Discharged Patients with Acute Coronary Syndrome after PCI. Objective: To explore the effect of a percutaneous coronary intervention (PCI) standardized telephone follow-up service mode on out-of-hospital complications, rehospitalization rate, and quality of life in acute coronary syndrome (ACS) patients discharged after PCI. Methods: From August 2020 to March 2022, 218 ACS patients who were discharged after PCI were included. The control group received routine nursing care, and the study group received the PCI standardized telephone follow-up service mode. Nursing satisfaction, rehospitalization rate, out-of-hospital complication rate, blood lipid levels, and quality of life scores were compared between the two groups. Results: The nursing satisfaction of the study group (100.00%) was significantly higher than that of the control group (88.07%).
The rehospitalization rate was dramatically lower in the study group (3/109; 2.75%) than in the control group (25/109; 22.94%) (P < 0.05). In addition, compared with the control group, the incidence of out-of-hospital complications (acute myocardial infarction and angina pectoris) was significantly reduced in the study group (P < 0.05). The blood lipid levels of TCHO, TG, LDL-C, and HDL-C were lower in the study group than in the controls. Further, after nursing, the quality of life scores of both groups had decreased, with a higher quality of life score in the study group (P < 0.05). Conclusion: The application of the PCI standardized telephone follow-up service mode in patients with acute coronary syndrome discharged after PCI can reduce out-of-hospital complications and the rehospitalization rate and improve blood lipid levels and quality of life. abstract_id: PUBMED:17568244 Improving outcomes after myocardial infarction: a randomized controlled trial evaluating effects of a telephone follow-up intervention. Background: Providing information is an important part of standard care and treatment for acute myocardial infarction inpatients. Evidence exists indicating that acute myocardial infarction patients experience an information gap in the period immediately after discharge from the hospital. The aim of this study was to assess the short-term effects of a nurse-led telephone follow-up intervention to provide information and support to patients with acute myocardial infarction after their discharge from hospital. Design And Method: A prospective randomized, controlled trial with a 6-month follow-up was conducted. A total of 288 patients were allocated to either an intervention group (n=156) or a control group (n=132). The latter received routine post-discharge care. The primary endpoint measured at 3 and 6 months after discharge was the health-related quality of life using the 36-item Short Form Health Survey. Secondary endpoints included smoking and exercise habits. Results: In both groups, health-related quality of life improved significantly over time on most subscales. A statistically significant difference in favour of the intervention group was found on the 36-item Short Form Health Survey Physical Health Component Summary Scale (P=0.034) after 6 months. No difference was found between the groups on the Mental Health Component Summary Scale. We found a significant difference with respect to frequency of physical activity in favour of the intervention group after 6 months (P=0.004). More participants in the intervention group than the control group had ceased smoking at the 6-month follow-up (P=0.055). Conclusion: A nurse-led systematic telephone follow-up intervention significantly improved the physical dimension of health-related quality of life in patients in the intervention group compared with usual care patients. Participation in this intervention also seemed to promote health behaviour change in patients after acute myocardial infarction. abstract_id: PUBMED:35915405 Long-term outcome of a pragmatic trial of multifaceted intervention (STROKE-CARD care) to reduce cardiovascular risk and improve quality-of-life after ischaemic stroke and transient ischaemic attack: study protocol. Background: Patients with ischaemic stroke or transient ischaemic attack (TIA) are at high risk of incident cardiovascular events and recurrent stroke.
Despite compelling evidence about the efficacy of secondary prevention, a substantial gap exists between risk factor management in real life and that recommended by international guidelines. We conducted the STROKE-CARD trial (NCT02156778), a multifaceted pragmatic disease management program, between 2014 and 2018 with follow-up until 2019. This program successfully reduced cardiovascular risk and improved health-related quality of life and functional outcome in patients with acute ischaemic stroke or TIA within 12 months after the index event. To investigate potential long-term effects of STROKE-CARD care compared to standard care, an extension of follow-up is warranted. Methods: We aim to include all patients from the STROKE-CARD trial (n = 2149) for long-term follow-up between 2019 and 2021, with the study visit scheduled 3-6 years after the stroke/TIA event. The co-primary endpoint is the composite of major recurrent cardiovascular events (nonfatal stroke, nonfatal myocardial infarction, and vascular death) from hospital discharge until the long-term follow-up visit and health-related quality of life measured with the European Quality of Life-5 Dimensions (EQ-5D-3L) at the final visit. Secondary endpoints include overall mortality, long-term functional outcome, and target-level achievement in risk factor management. Discussion: This long-term follow-up will provide evidence on whether the pragmatic post-stroke/TIA intervention program STROKE-CARD is capable of preventing recurrent cardiovascular events and improving quality-of-life in the long run. Trial registration clinicaltrials.gov: NCT04205006 on 19 December 2019. abstract_id: PUBMED:31769891 The effect of education and telephone follow-up intervention based on the Roy Adaptation Model after myocardial infarction: randomised controlled trial. Background: Patients' lifestyle changes after myocardial infarction reduce the risk of infarction. Nursing interventions are important for the initiation and maintenance of lifestyle adaptation. Aim: The aim of this study was to evaluate the effect of an education and telephone follow-up intervention based on the Roy Adaptation Model for improving myocardial infarction patients' self-efficacy, quality of life and lifestyle adaptation. Method: In this parallel, randomised controlled trial, patients were randomly allocated to a control group or an intervention group (n = 33/group). The control group received routine care, while the intervention group received routine care plus a telephone follow-up intervention, which consisted of a predischarge education programme and three telephone follow-up sessions. Data were collected before discharge and in the 12th week after discharge, between April 2016 and August 2017. All outcomes were assessed at baseline and at 12 weeks, and included quality of life, coping adaptation process, self-efficacy and lifestyle changes. The CONSORT checklist was used in the study. Results: In the 12th week after discharge, patients in the intervention group had significant improvements in self-efficacy, quality of life and coping adaptation process compared with the control group. The intervention group also showed more adaptive lifestyle changes in nutrition and physical activity at the 12-week follow-up. Conclusion: This study demonstrated that the education and telephone follow-up intervention based on the Roy Adaptation Model had positive and significant results after 12 weeks compared with usual care.
The findings of this study are important for supporting nursing practice and for helping health professionals who care for individuals with myocardial infarction to develop nursing care. abstract_id: PUBMED:38028486 Factors associated with health-related quality of life in patients undergoing percutaneous coronary intervention: Thai PCI registry. Background: Percutaneous coronary intervention (PCI) has been shown to improve health-related quality of life (HRQoL) in patients with coronary artery disease (CAD). The objectives of this study were to assess the changes in HRQoL and factors influencing these changes in CAD patients after undergoing PCI. Methods: Data from a nationwide PCI registry across 39 hospitals in Thailand were collected in 2018-2019, including baseline characteristics, comorbid diseases, angiographic CAD severity, procedural details, and type of health insurance. HRQoL, as measured by utility scores, was determined in all patients using the Thai version of the EQ-5D-5L at admission, discharge, and 6 and 12 months after discharge. The effects of time after the PCI procedure and various factors on mean utility scores were assessed using a mixed-effect linear regression model. Results: A total of 19,701 patients were included in the analysis; they had a mean age of 64.2 ± 11.7 years and were predominantly (69.1%) male. Following PCI, the mean utility scores increased from 66.6 ± 19.6 at admission to 81.9 ± 13.8 at discharge, and remained stable at 6 and 12 months (86.1 ± 12.3 and 88.0 ± 11.7, respectively). After adjusting for potential confounding variables, several factors were found to be independently associated with improved HRQoL, including angiographic success, male gender, overweight status, dyslipidemia, and radial access. Six other factors were associated with less improvement in HRQoL, including cardiogenic shock/IABP support, old age, CKD, clinical presentation (STEMI and NSTEMI), prior cerebrovascular disease, and heart failure. There were no associations of CAD severity and procedural details with HRQoL. No differences were found related to type of health insurance, except that patients who were uninsured or self-pay tended to have less improvement in HRQoL. Conclusion: HRQoL improved significantly after PCI in these subjects, as observed through 1 year of follow-up. Identifying the factors influencing these improvements may assist clinicians in tailoring patient interventions to optimise quality of life after PCI. abstract_id: PUBMED:19995445 Cardiac rehabilitation in Austria: long term health-related quality of life outcomes. Background: The goal of cardiac rehabilitation programs is not only to prolong life but also to improve physical functioning, symptoms, well-being, and health-related quality of life (HRQL). The aim of this study was to document the long-term effect of a 1-month inpatient cardiac rehabilitation intervention on HRQL in Austria. Methods: Patients (N = 487, 64.7% male, mean age 60.9 ± 12.5 years) after myocardial infarction, with or without percutaneous interventions, coronary artery bypass grafting or valve surgery underwent inpatient cardiac rehabilitation and were included in this long-term observational study (two-year follow-up). HRQL was measured with both the MacNew Heart Disease Quality of Life Instrument [MacNew] and EuroQoL-5D [EQ-5D]. Results: All MacNew scale scores improved significantly (p < 0.001) and exceeded the minimal important difference (0.5 MacNew points) by the end of rehabilitation.
Although all MacNew scale scores deteriorated significantly over the two-year follow-up period (p < 0.001), all MacNew scale scores still remained significantly higher than the pre-rehabilitation values. The mean improvement after two years in the MacNew social scale exceeded the minimal important difference, while MacNew scale scores greater than the minimal important difference were reported by 40-49% of the patients. Two years after rehabilitation the mean improvement in the EQ-5D Visual Analogue Scale score was not significant, with no significant change in the proportion of patients reporting problems at this time. Conclusion: These findings provide a first indication that two years following inpatient cardiac rehabilitation in Austria, the long-term improvements in HRQL are statistically significant and clinically relevant for almost 50% of the patients. Future controlled randomized trials comparing different cardiac rehabilitation programs are needed. abstract_id: PUBMED:27091284 Factors influencing health-related quality of life after primary percutaneous coronary intervention for ST-elevation myocardial infarction. Aims: This study compared health-related quality of life (HRQOL) between patients aged ≥ 70 and <70 years at 4 weeks and 6 months after primary percutaneous coronary intervention (PPCI) and examined predictors of HRQOL. Background: HRQOL is an important patient outcome following PPCI for ST elevation myocardial infarction (STEMI) including pre-hospital field triage. Methods: A comparative cohort study was conducted on STEMI patients undergoing PPCI. HRQOL was measured using the Medical Outcomes Short Form-12 (SF-12) and the Seattle Angina Questionnaire (SAQ) at 4 weeks and 6 months post-PPCI. Results: HRQOL improved significantly from 4 weeks to 6 months in all aspects measured except anginal frequency and mental health. Patients aged ≥ 70 years had poorer physical HRQOL (SF-12) and physical limitations (SAQ), but better mental HRQOL (SF-12), angina frequency and QOL (SAQ) at both time points. Age, length of hospital stay, gender, partnership status and number of stents deployed were independent predictors of HRQOL improvement over time. Conclusion: People ≥ 70 years reported better cardiac-specific quality of life, primarily from angina relief and improved mental function, despite worse physical limitations. HRQOL assessment is an important gauge of health status after PPCI for STEMI. abstract_id: PUBMED:23111536 Efficacy of a long-term secondary prevention programme following inpatient cardiovascular rehabilitation on risk and health-related quality of life in a low-education cohort: a randomized controlled study. Objective: The aim of this study was to evaluate the efficacy of a long-term secondary prevention programme following inpatient cardiovascular rehabilitation on cardiovascular risk and health-related quality of life in a cohort of middle-aged (≤58 years) coronary artery disease (CAD) patients of low educational level compared to usual care. Design And Methods: The study included 600 patients with CAD, with 271 in the intervention group (IG) and 329 in the control group (CG). The average age was nearly 50 years in both groups, nearly 90% were male, and 77% had less than 10 years of school education. No significant differences existed between the groups at baseline.
Both groups had a 3-week comprehensive cardiovascular inpatient rehabilitation programme at the beginning; the intervention consisted of one further rehabilitation session in hospital after 6 months and regular telephone reminders over a period of 36 months. Analyses were conducted on an intention-to-treat basis. To evaluate the individual risk level, we used the PROCAM score, and intima-media thickness (IMT) was measured at the common carotid artery on both sides following international standards. Health-related quality of life was assessed with the EUROQOL and HADS. Results: Patients in the IG showed better 3-year risk profile outcomes. The PROCAM score increased by 3.0 (IG) and by 3.7 (CG) from the beginning to after 3 years (p > 0.05, intention-to-treat). The average IMT increased by 0.04 mm in the CG and was reduced by 0.03 mm in the IG (p = 0.014 for the difference). The IG had a significant improvement in health-related quality of life. Mortality, myocardial infarction, and stroke were not different, although 'other cardiac events' (cardiac surgery or intervention) were significantly lower in the IG than in the CG (p < 0.05). Conclusion: This long-term secondary prevention programme with inpatient rehabilitation at the beginning and telephone reminders for a 3-year period was successful. There were significant differences in health-related quality of life between the IG and CG, despite the relatively positive outcomes in the CG. In this low-education (predominantly male), middle-aged cohort, the positive impact on cardiovascular risk was pronounced in the high-risk subgroup (PROCAM 10-year risk 10-40%). abstract_id: PUBMED:23012159 Effect of a telephone-delivered coronary heart disease secondary prevention program (proactive heart) on quality of life and health behaviours: primary outcomes of a randomised controlled trial. Background: Participation in coronary heart disease secondary prevention programs is low. Innovative programs to meet this treatment gap are required. Purpose: The aim of this study was to describe the effectiveness of a telephone-delivered secondary prevention program for myocardial infarction patients. Methods: Four hundred and thirty adult myocardial infarction patients in Brisbane, Australia were randomised to a 6-month secondary prevention program or usual care. Primary outcomes were health-related quality of life (Short Form-36) and physical activity (Active Australia Survey). Results: Significant intervention effects were observed for health-related quality of life on the mental component summary score (p = 0.02), and the social functioning (p = 0.04) and role-emotional (p = 0.03) subscales, compared with usual care. Intervention participants were also more likely to meet recommended levels of physical activity (p = 0.02), body mass index (p = 0.05), vegetable intake (p = 0.04) and alcohol consumption (p = 0.05). Conclusions: Telephone-delivered secondary prevention programs can significantly improve health outcomes and could meet the treatment gap for myocardial infarction patients. Answer: The evidence from the provided abstracts suggests that telephone follow-up interventions for patients discharged with acute myocardial infarction can have positive effects on health-related quality of life (HRQoL), but the long-term effects are mixed and may depend on various factors.
One study found that a proactive and reactive telephone follow-up intervention showed positive effects on HRQoL after six months, but these effects did not persist up to 18 months after discharge. The intervention group showed no overall significant improvement beyond six months in the physical or mental summary scores of the SF-36, a measure of HRQoL. However, there was a significant effect for those aged 70 or above, suggesting that older patients might benefit more from such interventions in the long term (PUBMED:19220616). Another study reported that a PCI standardized telephone follow-up service mode significantly improved nursing satisfaction, reduced rehospitalization rates, and improved blood lipid levels and quality of life in discharged patients with acute coronary syndrome after PCI (PUBMED:35912146). A randomized controlled trial evaluating a nurse-led telephone follow-up intervention found significant improvements in the physical dimension of HRQoL and health behavior changes after six months, indicating short-term benefits (PUBMED:17568244). A study on the long-term outcomes of a multifaceted intervention (STROKE-CARD care) aimed to investigate potential long-term effects on preventing recurrent cardiovascular events and improving quality of life, but results from this long-term follow-up are not provided in the abstract (PUBMED:35915405). An education and telephone follow-up intervention based on the Roy Adaptation Model showed significant improvements in self-efficacy, quality of life, and lifestyle adaptation after 12 weeks, suggesting positive short-term effects (PUBMED:31769891). A study from the Thai PCI registry indicated that HRQoL improved significantly after PCI and remained stable at 6 and 12 months, with several factors being independently associated with improved HRQoL (PUBMED:38028486). In summary, while some studies demonstrate short-term benefits of telephone follow-up interventions on HRQoL after acute myocardial infarction, the long-term effects are less clear and may vary based on patient age, intervention design, and other individual factors. The evidence suggests that such interventions can be part of a multifaceted approach to improve HRQoL and other health outcomes in the short term, but more research is needed to confirm their long-term efficacy.
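For readers wanting to reproduce the flavor of the SF-36 comparisons cited in this answer: group differences in component summary scores are usually tested with an independent-samples t-test. The sketch below uses SciPy on invented scores; the trials report only group-level statistics, so these values are illustrative.

```python
from scipy.stats import ttest_ind

# Invented SF-36 Physical Component Summary scores at 6 months;
# not taken from the trials above.
intervention = [46.1, 48.3, 44.9, 50.2, 47.5, 45.8, 49.0, 46.7]
control      = [43.2, 44.8, 41.9, 45.1, 42.6, 44.0, 43.5, 42.2]

t_stat, p_value = ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```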
Instruction: Can better mental health services reduce the risk of juvenile justice system involvement? Abstracts: abstract_id: PUBMED:15117712 Can better mental health services reduce the risk of juvenile justice system involvement? Objectives: We evaluated how improved mental health services affect justice involvement among juveniles treated in the public mental health system. Methods: Our analyses were based on administrative and interview data collected in 2 communities participating in the evaluation of a national initiative designed to improve mental health services for children and youths. Results: Results derived from Cox proportional hazard models suggested that better mental health services reduced the risks of initial and subsequent juvenile justice involvement by 31% and 28%, respectively. Effects were somewhat more pronounced for serious offenses. Conclusions: Our findings suggest that improved mental health services reduce the risk of juvenile justice involvement. abstract_id: PUBMED:28966415 Involvement in the Juvenile Justice System for African American Adolescents: Examining Associations with Behavioral Health Problems. While researchers have found that African American youth experience higher levels of juvenile justice involvement at every system level (arrest, sentencing, and incarceration) relative to youth of other ethnic groups, few studies have explored how juvenile justice involvement and the number of contacts might be correlated with a broad range of behavioral health problems. A convenience sample of 638 African American adolescents living in predominantly low-income, urban communities participated in a survey related to juvenile justice involvement. Major findings using logistic regression models indicated that adolescents who reported juvenile justice system involvement versus no involvement were 2.3 times as likely to report mental health problems, substance abuse, and delinquent or youth offending behaviors. Additional findings documented that the higher the number of juvenile justice system contacts, the higher the rates of delinquent behaviors, alcohol and marijuana use, sex while high on drugs, and commercial sex. These findings suggest that identifying and targeting youth who have multiple juvenile justice system contacts, especially those in low-resourced communities, for early intervention services may be beneficial. Future research should examine whether peer network norms might mediate the relationships between juvenile justice involvement and youth problem behaviors. abstract_id: PUBMED:32133867 From Child Welfare to Jail: Mediating Effects of Juvenile Justice Placement and Other System Involvement. This study examines the effects of child welfare, mental health, and drug/alcohol system experiences on jail involvement, as mediated by juvenile justice placement, for Black and White youth/young adults. The sample comprised individuals born between 1985 and 1994 with child welfare involvement in an urban Pennsylvania county (N = 37,079) and an out-of-home placement (OOHP) subsample (n = 8,317). Four path models were estimated (two full samples, two subsamples; separate models for Black and White youth). For all models, juvenile justice placement was positively related to jail involvement. For the full samples, out-of-home child welfare placement was associated with an increase in juvenile justice placement. Within the Black placement sample, child welfare placement experiences had complex relationships with juvenile justice placement.
Providing drug/alcohol services may be a protective factor for Black youth prior to juvenile justice contact. Articulating these relationships helps identify youth most at risk of justice system involvement and better target services, especially mental health and drug/alcohol services. abstract_id: PUBMED:9364112 Risk factors for juvenile justice system referral among children in a public mental health system. The objective of this study is to identify, through the use of secondary data, risk factors for juvenile justice system involvement among children entering a public mental health system. Data-sharing agreements between juvenile justice and mental health systems enabled investigators to examine criminal referrals among 645 children between the ages of 10 and 17 who entered community-based public mental health programs in King County, Washington, over the course of a single year. Univariate and logistic regression analyses were performed. Adjusting for age, gender, and ethnicity, children involved in the public mental health system who had juvenile justice referrals were more likely than children involved in the mental health system without juvenile justice referrals to have parents with a history of incarceration, to abuse drugs and/or alcohol, and to have experienced physical abuse. The study shows that some children who receive public mental health services are at particular risk of having an encounter with the juvenile justice system. Understanding the characteristics and experiences of youth who use multiple service systems has important implications for children's mental health services delivery. abstract_id: PUBMED:12447033 Effects of capitated mental health services on youth contact with the juvenile justice system. Objective: To evaluate the association between Medicaid mental health capitation and youth's involvement with the juvenile justice system. Method: A longitudinal, quasi-experimental pre/post design was used. Using administrative databases, juvenile justice contact, defined as any detention or commitment, was assessed for 13,365 Medicaid-eligible youths, aged 10-17 years, who received public mental health services over a 3-year period (1994-1997) in the state of Colorado. Results: There was no significant effect of financing when secular time was controlled for (interaction financing × time: hazard rate = 0.92, p = .62). The main effects model demonstrates a significant secular trend, with juvenile justice contacts decreasing over time for both types of financing (hazard rate = 0.80, p = .002), and a significant effect of financing, with eventually capitated sites having higher rates of juvenile justice contact compared with sites that remained fee-for-service (hazard rate = 1.24, p = .009). Multivariate analyses controlled for demographics, mental health and substance use diagnoses, and other risk factors for juvenile justice contact. Conclusions: After adjustment for secular trends, capitation was not associated with a reduction in juvenile justice contact. Nonetheless, these findings provide evidence that capitation did not increase the risk of juvenile justice contact. abstract_id: PUBMED:12397896 Juvenile justice mental health services. As the second century of partnership begins, child psychiatry and juvenile justice face continuing challenges in meeting the mental health needs of delinquents.
The modern juvenile justice system is marked by a significantly higher volume of cases, with increasingly complicated multiproblem youths and families with comorbid medical, psychiatric, substance abuse disorders, multiple family and psychosocial adversities, and shrinking community resources and alternatives to confinement. The family court is faced with shrinking financial resources to support court-ordered placement and treatment programs in efforts to treat and rehabilitate youths. The recognition of high rates of mental disorders for incarcerated youth has prompted several recommendations for improvement and calls for reform [56,57]. In their 2000 annual report, the Coalition for Juvenile Justice advocated increased access to mental health services that provide a continuum of care tailored to the specific problems of incarcerated youth [58]. The specific recommendations of the report for mental health providers include the need for wraparound services, improved planning and coordination between agencies, and further research. The Department of Justice, Office of Juvenile Justice and Delinquency Prevention has set three priorities in dealing with the mental health needs of delinquents: further research on the prevalence of mental illness among juvenile offenders, development of mental health screening assessment protocols, and improved mental health services [59]. Other programs have called for earlier detection and diversion of troubled youth from juvenile justice to mental health systems [31,56]. Most recently, many juvenile and family courts have developed innovative programs to address specific problems such as truancy or substance use and diversionary or alternative sentencing programs to deal with first-time or nonviolent delinquents. All youths who come in contact with the juvenile justice system should be screened and, when necessary, assessed for mental health and substance abuse disorders. The screening should occur at the youth's earliest point of contact with the juvenile justice system and should be available at all stages of juvenile justice processing. Whenever possible, youth with serious mental health disorders should be diverted from the juvenile justice system [58]. If delinquent youths do not receive the necessary evaluation, treatment, and rehabilitation, they face the real prospect of further incarceration in adult correctional facilities. Improved screening and treatment require better interagency collaboration, established standards of care, and continuing research on the mental health needs of youth in the juvenile justice system. Better mental health care for youth in the juvenile justice system supports the goal of rehabilitation. Mental health professionals should support these efforts as the appropriate response to meet the challenges of the new century. abstract_id: PUBMED:26615036 Disposition of Charges, Out-of-Home Mental Health Treatment, and Juvenile Justice Recidivism. This study examined whether the disposition of juvenile justice encounters among youth with severe emotional disturbance was associated with the likelihood of recidivism. Court dispositions, such as probation and diversion, as well as Medicaid-funded out-of-home mental health treatment, were compared. Data sources included the Florida Department of Juvenile Justice and Medicaid claims data. Youth receiving probation had the highest recidivism rates. Among youth in out-of-home treatment, those receiving treatment in foster care services had the lowest recidivism rates. 
Youth placed into a diversion program were less likely to be re-arrested for a felony, whereas youth receiving inpatient psychiatric services were less likely to be re-arrested for a misdemeanor. Mental health treatment may reduce the likelihood of youth continuing on increasing criminal trajectories. abstract_id: PUBMED:9364111 Risk of juvenile justice systems referral among children in a public mental health system. This study established the risk of police referral among a cohort of children who were recipients of public mental health services. Investigators used secondary data to calculate the incidence of criminal referral among 645 children, ages 10 to 17, who entered community-based public mental health programs in King County, Washington. Children receiving public mental health services were nearly three times more likely to be referred to the juvenile justice system compared to children of similar age and gender in the general population. Relative risks were particularly high for younger children (10-13 years) and for children of Hispanic, Native American, and Caucasian origin. Understanding the characteristics and experiences of children who use multiple-service systems has important implications for services delivery. In addressing the needs of youth who have both mental illness and criminal involvement, age- and culturally specific interventions and advocacy efforts are warranted. abstract_id: PUBMED:19638602 Examining the meaning attached to mental illness and mental health services among justice system-involved youth and their parents. A large percentage of youth involved in the juvenile justice system experience mental health problems, yet many do not receive mental health care. In this study, we used a process-focused framework of mental health decision making to gain insight into the use of mental health services among these youth. In-depth interviews were conducted with nine youth and nine parents participating in a program servicing youth with mental health problems who have been in detention. Themes related to problem recognition, the decision to seek and participate in services, subjective norms, and juvenile justice system involvement emerged. Most families acknowledged their youth was having problems, but few defined those problems in mental health terms. This did not prevent them from seeking services, although some were not able to access adequate services until the justice system became involved. Participants were aware of negative attitudes about mental illness, and might have limited their social networks to shield themselves. abstract_id: PUBMED:32612337 Juvenile Justice System, Juvenile Mental Health, and the Role of MHPs: Challenges and Opportunities. The juvenile justice system (JJS) and the mental health of the juveniles involved in it are intricately related. Children in conflict with the law (CICL) and children in need of care and protection (CINCP) have a higher prevalence of mental health- and substance use-related problems; similarly, juveniles with mental health problems have a higher chance of coming into contact with the JJS. The Juvenile Justice Act, 2000 (JJ Act), with its latest amendment (2015), emphasizes the developmental well-being, including the psychological well-being, of juveniles coming in contact with the JJS and their social reintegration and rehabilitation.
Mental health professionals (MHPs) can play a significant role in realizing this goal by contributing at all levels: mental health promotion, preventing juveniles from coming into contact with the JJS, treating juveniles in contact with the JJS, and subsequent rehabilitation. Being well-versed in this area would also give MHPs a clinical and legal edge. Although the JJ Act is a child-friendly law, its implementation in the real world is faced with many practical challenges, which in turn limit or undermine the full legal, social, educational, and health benefits to the juveniles. The current viewpoint aims to highlight the important mental health aspects of juveniles involved with the JJS with reference to the JJ Act (Care and Protection of Children Act, 2015) and the potential role that MHPs can play, and discusses important challenges and the road ahead. Answer: Yes, better mental health services can reduce the risk of juvenile justice system involvement. Improved mental health services were found to reduce the risks of initial and subsequent juvenile justice involvement by 31% and 28%, respectively, with effects being somewhat more pronounced for serious offenses (PUBMED:15117712). Additionally, providing drug/alcohol services may be a protective factor for Black youth prior to juvenile justice contact, suggesting that targeted services can help identify and support youth most at risk of justice system involvement (PUBMED:32133867). Furthermore, mental health treatment may reduce the likelihood of youth continuing on increasing criminal trajectories, as indicated by lower recidivism rates among youth receiving out-of-home treatment in foster care services and those placed into diversion programs (PUBMED:26615036). However, it is important to note that while capitated mental health services did not increase the risk of juvenile justice contact, they also did not show a significant reduction in juvenile justice contact when secular trends were controlled for (PUBMED:12447033). This suggests that the relationship between mental health services and juvenile justice involvement may be complex and influenced by various factors, including the type of mental health service financing and delivery. Overall, the evidence suggests that better mental health services, particularly those that are well-targeted and accessible, can play a role in reducing the risk of juvenile justice system involvement (PUBMED:15117712; PUBMED:32133867; PUBMED:26615036).
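The 31% and 28% reductions cited in this answer are the usual reading of Cox model hazard ratios: a hazard ratio (HR) below 1 maps to a risk reduction of 1 - HR. A toy sketch of fitting such a model, assuming the lifelines library and fully simulated data (the cited studies used administrative records that are not reproduced here):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Simulated data: time to first justice contact (months), event flag,
# and a binary indicator for improved mental health services.
df = pd.DataFrame({
    "months":   [6, 12, 9, 24, 18, 30, 15, 36, 8, 27],
    "contact":  [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],
    "improved": [0, 0, 0, 1, 1, 1, 0, 1, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="contact")

hr = cph.hazard_ratios_["improved"]
print(f"HR = {hr:.2f} -> implied risk reduction = {1 - hr:.0%}")
```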
Instruction: Does Pseudomonas aeruginosa colonization influence morbidity and mortality in the intensive care unit patient? Abstracts: abstract_id: PUBMED:16987339 Does Pseudomonas aeruginosa colonization influence morbidity and mortality in the intensive care unit patient? Experience from an outbreak caused by contaminated oral swabs. Background: Contaminated oral swabs caused a nationwide monoclonal Pseudomonas aeruginosa outbreak involving 27 Norwegian hospitals. The aim of the study was to assess the consequences for mortality and morbidity of the introduction of this P. aeruginosa strain to intensive care unit (ICU) patients. Methods: Forty-four out of 96 patients admitted to the general ICU of Akershus University Hospital during the outbreak, ventilated for more than 24 h and with at least one microbiological sample, were included and followed until death or hospital discharge. All isolated P. aeruginosa strains were genotyped. Demographic data, admission diagnosis, Simplified Acute Physiology Score II (SAPS II), Sequential Organ Failure Assessment (SOFA) score, comorbidities, and antibiotics used in the first week were recorded. Results: The outbreak strain was found in 18 patients (41%), of whom seven became infected. Median time to the first positive culture was 4 days. These 18 patients spent a significantly longer time on mechanical ventilation (P = 0.03) and had a significantly higher hospital mortality, 55.5% vs. 19.2% (P = 0.03), than non-colonized patients. The number of patients with severe underlying disease was significantly higher (P = 0.01) and the decline in SOFA score was significantly slower in the pseudomonas group (P = 0.02). Irrespective of colonization status, patients with severe underlying disease had a significantly higher mortality (58%) than those without (16%) (P = 0.009). Conclusion: Use of contaminated oral swabs led to a high rate of early airway colonization. Patients with severe underlying disease were more likely to become colonized, but whether colonization has any influence on hospital mortality requires further study. abstract_id: PUBMED:28096975 A prospective survey of Pseudomonas aeruginosa colonization and infection in the intensive care unit. Background: Pseudomonas aeruginosa (PA) surveillance may improve empiric antimicrobial therapy, since colonizing strains frequently cause infections. This colonization may be 'endogenous' or 'exogenous', and the source determines infection control measures. We prospectively investigated the sources of PA, the clinical impact of PA colonization upon admission and the dynamics of colonization at different body sites throughout the intensive care unit stay. Methods: Intensive care patients were screened on admission and weekly from the pharynx, endotracheal aspirate, rectum and urine. Molecular typing was performed using Enterobacterial Repetitive Intergenic Consensus Polymerase Chain Reaction (ERIC-PCR). Results: Between November 2014 and January 2015, 34 patients were included. Thirteen (38%) were colonized on admission, and were at a higher risk for PA-related clinical infection (Hazard Ratio = 14.6, p = 0.0002). Strains were often patient-specific, site-specific and site-persistent. Sixteen out of 17 (94%) clinical isolates were identical to strains found concurrently or previously on screening cultures from the same patient, and none were unique. Ventilator associated pneumonia-related strains were identical to endotracheal aspirates and pharynx screening (87-75% of cases).
No clinical case was found among patients with repeated negative screening. Conclusion: PA origin in this non-outbreak setting was mainly 'endogenous' and PA strains were generally patient- and site-specific, especially in the gastrointestinal tract. While prediction of the ventilator associated pneumonia-related PA strain by screening was fair, the negative predictive value of screening was very high. abstract_id: PUBMED:16459043 Reservoirs and transmission of Pseudomonas aeruginosa in the intensive care unit. Objective: The authors aimed to study reservoirs and transmission of Pseudomonas aeruginosa in an intensive care unit. Design: A 6-month prospective descriptive study was made on water samples, samples from hands of health care workers, and clinical samples. P. aeruginosa strains were compared by pulsed-field gel electrophoresis. Results: Among the 211 patients hospitalized during the study, 14 (6.6%) were infected by P. aeruginosa. Out of 494 water samples, 80 were contaminated by P. aeruginosa. The regularly disinfected water taps were more rarely contaminated than the others (P < 10^-5). Out of 140 hand samples, one showed contamination from an infected patient. Conclusions: P. aeruginosa cross-transmission was observed during this study. We should follow strict hygienic precautions such as wearing gloves and performing thorough alcoholic rub disinfection. Water taps are often contaminated and require regular disinfection. abstract_id: PUBMED:7907624 Colonization of intensive care unit patients by Pseudomonas aeruginosa. Colonization and infection by Pseudomonas aeruginosa were found at several sites by selective culture in 32 of 66 patients in an intensive care unit. Twenty-four patients (75%) were colonized on admission, and eight patients (25%) acquired P. aeruginosa during hospitalization. Positive rectal cultures were more frequent than at any other site. The most common P. aeruginosa serotypes were 1, 2, 3 and 5, and pyocin types 1, 3, 5 and 10 were predominant. There were no significant differences in the serotypes or pyocin types detected on admission or acquired during hospitalization. The serotypes and pyocin types from the respiratory tract (trachea and nasopharynx) were different from those found in the rectum. Intubated patients were colonized more frequently than those not intubated, and upper respiratory tract colonization correlated strongly with low initial arterial pH values. abstract_id: PUBMED:30639071 Effects of a disinfection device on colonization of sink drains and patients during a prolonged outbreak of multidrug-resistant Pseudomonas aeruginosa in an intensive care unit. Background: Sink drains in intensive care units (ICUs) are frequently colonized with bacteria such as Pseudomonas aeruginosa. Aim: To study the influence of installing disinfecting devices on sink drains on colonization of sinks and patients in an ICU during a prolonged outbreak of multidrug-resistant P. aeruginosa. Methods: From 2010, there was a clonal outbreak of multidrug-resistant P. aeruginosa (MDR-PA). In April 2013, in ICU subunit A, the siphons draining these sinks were replaced by devices applying heat and electromechanical vibration to disinfect the draining fluid. In the other units, siphons were replaced by new polyvinyl chloride plastic siphons (control). In February 2016, the disinfecting devices were also placed at ICU subunit B. Findings: Baseline colonization rate of sinks was 51% in ICU A and 46% in ICU B.
In ICU A, colonization decreased to 5% (P < 0.001) after the intervention, whereas it was 62% in ICU B (control). After installing the disinfection devices in ICU B, colonization rates were 8.0% and 2.4% in ICU A and B, respectively (both P < 0.001 compared with baseline). Colonization in ICU patients decreased from 8.3 to 0 per 1000 admitted patients (P < 0.001) and from 2.7 to 0.5 per 1000 admitted patients (P = 0.1) in ICU A and B, respectively. Conclusion: Colonization with MDR-PA in sink drains in an ICU was effectively managed by installing disinfection devices to the siphons of sinks. Colonization of patients was also significantly reduced, suggesting that sink drains can be a source of clinical outbreaks with P. aeruginosa and that disinfecting devices may help to interrupt these outbreaks. abstract_id: PUBMED:11963376 Risk factors for the acquisition of Pseudomonas aeruginosa in a surgical intensive care unit. Objective: To investigate the risk factors for the acquisition (infection and/or colonization) of Pseudomonas aeruginosa, which is frequently associated with nosocomial infections, in a surgical intensive care unit in Dijon. Method: A retrospective case-control study was performed on 57 cases matched with 114 controls, between December 1996 and February 1999. The statistical method used was a conditional multiple logistic regression model. Results: Three groups of variables were studied (patient characteristics, invasive procedures, and previous administration of antibiotics). The multiple logistic regression analysis confirmed 3 risk factors: duration of sedation, infection with another bacterium, and cranio-encephalic trauma. These factors are commonly involved in nosocomial infections. Conclusion: This study confirms the importance of infection control measures and the prevention of nosocomial infections, especially in cranio-encephalic trauma. The hypothetical relationship between acquisition of Pseudomonas aeruginosa and previous administration of antibiotics was not confirmed. A more powerful study would perhaps clarify this relationship. abstract_id: PUBMED:26832307 Pseudomonas aeruginosa Colonization in the Intensive Care Unit: Prevalence, Risk Factors, and Clinical Outcomes. Objective: To determine the prevalence of Pseudomonas aeruginosa colonization on intensive care unit (ICU) admission, risk factors for P. aeruginosa colonization, and the incidence of subsequent clinical culture with P. aeruginosa among those colonized and not colonized. Methods: We conducted a cohort study of patients admitted to a medical or surgical intensive care unit of a tertiary care hospital. Patients had admission perirectal surveillance cultures performed. Risk factors analyzed included comorbidities at admission, age, sex, antibiotics received during current hospitalization before ICU admission, and type of ICU. Results: Of 1,840 patients, 213 (11.6%) were colonized with P. aeruginosa on ICU admission. Significant risk factors in the multivariable analysis for colonization were age (odds ratio, 1.02 [95% CI, 1.01-1.03]), anemia (1.90 [1.05-3.42]), and neurologic disorder (1.80 [1.27-2.54]). Of the 213 patients colonized with P. aeruginosa on admission, 41 (19.2%) had a subsequent clinical culture positive for P. aeruginosa on ICU admission and 60 (28.2%) had a subsequent clinical culture positive for P. aeruginosa in the current hospitalization (ICU period and post-ICU period). Of these 60 patients, 49 (81.7%) had clinical infections.
Of the 1,627 patients not colonized on admission, only 68 (4.2%) had a subsequent clinical culture positive for P. aeruginosa in the current hospitalization. Patients colonized with P. aeruginosa were more likely to have a subsequent positive clinical culture than patients not colonized (incidence rate ratio, 6.74 [95% CI, 4.91-9.25]). Conclusions: Prediction rules or rapid diagnostic testing will help clinicians more appropriately choose empirical antibiotic therapy for subsequent infections. abstract_id: PUBMED:15567786 Investigation of cross-infection of Pseudomonas aeruginosa in an intensive care unit. Objective: To investigate the factors responsible for Pseudomonas aeruginosa nosocomial cross-infection in the intensive care unit (ICU) and provide effective measures for prevention and management. Methods: The homology of 7 Pseudomonas aeruginosa strains isolated from the patients in the ICU and the environment was examined by biological, serological, drug-resistance and plasmid analysis. Results: The results of the serological and plasmid analyses were consistent, demonstrating that the 7 Pseudomonas aeruginosa strains originated from the solution in the oxygen humidifier, shared a uniform serological and plasmid type, and belonged to the same clone, with cross-infection occurring through contact between the patients or nursing staff and the objects in the ICU. Conclusion: Serological and plasmid analyses are more practical for identifying the sources of Pseudomonas aeruginosa cross-infection, which can be prevented by strict disinfection of the instruments and supervision of the nursing staff in the ICU. abstract_id: PUBMED:10974133 Endemic Pseudomonas aeruginosa infection in a neonatal intensive care unit. Background: Nosocomial infections due to Pseudomonas aeruginosa have been well described, but the environmental reservoir of the organism varies. We conducted an epidemiologic and molecular investigation of endemic P. aeruginosa infection among infants in a neonatal intensive care unit that was associated with carriage of the organisms on the hands of health care workers. Methods: In August 1998, colonization or infection with P. aeruginosa was identified in six infants. Surveillance cultures for P. aeruginosa were obtained from the other 27 infants in the unit, and possible environmental reservoirs were also assessed. The hands of health care workers were inspected and cultured, and risk factors for P. aeruginosa colonization were evaluated. Isolates were analyzed for clonality by pulsed-field gel electrophoresis. Results: Surveillance cultures showed that three additional infants were colonized with P. aeruginosa. Cultures of environmental specimens were negative, but cultures of the hands of 10 of 165 health care workers (6 percent) were positive for P. aeruginosa. Increasing age (P = 0.05) and a history of the use of artificial fingernails or nail wraps (P = 0.03) were both risk factors for colonization of the hands. From January 1997 to August 1998, 49 infants were infected or colonized with P. aeruginosa. Pulsed-field gel electrophoresis demonstrated that 17 of these infants and 1 health care worker who had onychomycosis had the same clone. Infants who were exposed to this health care worker in August 1998 were at greater risk of having this clone than infants who were not exposed to this health care worker (odds ratio, 41.2; 95 percent confidence interval, 1.8 to 940.0; P = 0.006). Conclusions: An increased rate of infection and colonization with P.
aeruginosa among infants in neonatal intensive care units should be investigated by assessing potential reservoirs, including environmental sources as well as patients and health care workers. abstract_id: PUBMED:3132871 Pseudomonas aeruginosa and surgical intensive care units. The prevalence of Pseudomonas aeruginosa in the intensive care unit (ICU) is 19 per cent, although its incidence in blood culture is only 3.5 per cent and remains the same even though the total prevalence of this bacterium increases. A review of the clinical results shows that this germ causes severe complications in only 3 per cent of patients. A study by epidemiological markers reveals the presence of 16 different IATS serotypes in the ICU, the distribution of the serotypes being similar to that observed in ambulatory patients. Serotype 12 of Ps. aeruginosa is multiresistant. In 1984, it represented 22 per cent of the isolated Ps. aeruginosa; its incidence decreased in 1985 and 1986 (14.5%). Ps. aeruginosa seems weakly pathogenic for the surgical patients in the ICU. On the other hand, the incidence of Klebsiella pneumoniae, Serratia marcescens (014 imm. serotype resistant to tobramycin), Enterobacter aerogenes and Enterobacter cloacae increases in blood cultures proportionally to their total prevalence in the ICU. These bacteria are probably responsible for hospital epidemics. Answer: Pseudomonas aeruginosa colonization does appear to influence morbidity and mortality in intensive care unit (ICU) patients. An outbreak caused by contaminated oral swabs led to a high rate of early airways colonization by P. aeruginosa in ICU patients. Those who were colonized spent significantly more time on mechanical ventilation and had a significantly higher hospital mortality compared to non-colonized patients. Additionally, patients with severe underlying disease were more likely to become colonized, and the decline in their SOFA score was significantly slower, indicating a worse clinical course (PUBMED:16987339). A prospective survey also indicated that patients colonized with P. aeruginosa on admission were at a higher risk for PA-related clinical infection. The study found that the origin of P. aeruginosa was mainly 'endogenous' and that the strains were generally patient- and site-specific. The negative predictive value of screening for P. aeruginosa was very high, suggesting that patients not colonized upon admission were less likely to develop infections caused by this pathogen (PUBMED:28096975). Another study found that colonization with multidrug-resistant P. aeruginosa in sink drains in an ICU was effectively managed by installing disinfection devices to the siphons of sinks, which also significantly reduced colonization of patients. This suggests that sink drains can be a source of clinical outbreaks with P. aeruginosa and that disinfecting devices may help to interrupt these outbreaks (PUBMED:30639071). Furthermore, a cohort study found that patients colonized with P. aeruginosa on ICU admission were more likely to have a subsequent positive clinical culture than patients not colonized. This indicates that colonization upon admission is associated with an increased risk of subsequent P. aeruginosa infections (PUBMED:26832307). In summary, Pseudomonas aeruginosa colonization is associated with increased morbidity and mortality in ICU patients, and it is a significant risk factor for subsequent infections.
Measures to prevent colonization and manage outbreaks, such as disinfection of environmental reservoirs and appropriate screening, are important for reducing the impact of this pathogen in the ICU setting.
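As an aside on the arithmetic behind these conclusions: the incidence rate ratio of 6.74 quoted from PUBMED:26832307 can be approximately recovered from the raw counts given in that abstract as a simple ratio of cumulative incidences; the small residual difference would reflect rounding and the exact person-time handling of the original analysis:

\[ \frac{60/213}{68/1627} = \frac{0.282}{0.042} \approx 6.7 \]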
Instruction: Is it worth knowing the characteristics of inpatient paediatric parenteral nutrition? Abstracts: abstract_id: PUBMED:20018577 Is it worth knowing the characteristics of inpatient paediatric parenteral nutrition? Objective: To describe the characteristics of inpatient parenteral nutrition during 2008 in a tertiary level paediatric hospital. To compare this practice with recent standards. Material And Methods: Longitudinal, descriptive, and retrospective study of patients under 16 years receiving parenteral nutrition in 2008. Demographic data as well as data on the nutritional support were recorded. Results: A total of 120 children received parenteral nutrition, of whom 63 were under 2 years, 30 between 3 and 7 years, and 27 older than 7 years. They accounted for 1.6% of all the inpatients during that period. The most frequent indication was gastrointestinal surgery (35.5%), followed by non-surgical gastrointestinal diseases (19.8%). The average composition on the first day of parenteral nutrition was: 63.1 kcal/kg (SD: 21.3); volume, 76.6 ml/kg (SD: 28.4); carbohydrates 9.6 g/kg/d (SD: 3.2); amino acids 2.1 g/kg/d (SD: 0.6), and lipids 1.9 g/kg/d (SD: 1). The mean duration of parenteral support was 10.5 days (SD: 9.8). The internal jugular vein (34.7%) and femoral vein were the most common intravenous access sites. The main reason to discontinue parenteral nutrition was transition to oral/enteral feedings. The complication rate was 27.3%. Conclusions: There is an interest in evaluating standard practice in parenteral nutrition in order to develop improvement strategies. Infants and toddlers are the principal candidates for parenteral nutrition. Standardisation is one way to decrease errors in its preparation and prescription. abstract_id: PUBMED:37818490 Enteral Nutrition Versus Parenteral Nutrition on Outcomes in Acute Pancreatitis: Insights From the Nationwide Inpatient Sample. Introduction: Despite considerable research on the comparison of enteral and parenteral nutrition in patients with acute pancreatitis, there is an ongoing debate about the optimal timing of nutrition initiation, invasiveness of interventions, impact on outcomes, and patient tolerance. Given the gap that still exists in the literature, we investigated the relationship between the mode of nutrition and critical outcomes such as mortality rates, inpatient complications, length of hospitalization, and discharge disposition, using comprehensive national-level data. In addition, we investigated the impact of early enteral nutrition on outcomes in acute pancreatitis. Methods: All adult discharges for acute pancreatitis between 2016 and 2018 were analyzed from the National (Nationwide) Inpatient Sample (NIS). Discharges of minors and those involving mixed nutrition were excluded from the analysis. Enteral nutrition and parenteral nutrition subgroups were identified using International Classification of Diseases, 10th revision (ICD-10) codes. Disease severity was defined using the 2013 revised Atlanta Classification of Acute Pancreatitis, along with the All Patient Refined Diagnosis Related Group (APR-DRG) severity of illness and likelihood of mortality variables. Complications were identified using ICD-10 codes from the secondary diagnoses variables within the NIS dataset. Multivariable logistic regression analyses were employed to assess associations between the mode of nutrition and the outcomes of interest. Results: A total of 379,410 hospitalizations were studied.
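A reader's note on the adjusted odds ratios (aOR) reported in the results that follow; this is standard logistic-regression methodology rather than a detail stated by the abstract itself. Each aOR is the exponentiated model coefficient, and the 95% confidence interval comes from the coefficient's standard error on the log-odds scale:

\[ \mathrm{aOR} = e^{\hat{\beta}}, \qquad 95\%\,\mathrm{CI} = \bigl(e^{\hat{\beta} - 1.96\,\mathrm{SE}(\hat{\beta})},\; e^{\hat{\beta} + 1.96\,\mathrm{SE}(\hat{\beta})}\bigr) \]

An aOR below 1 (as for enteral nutrition below) indicates lower adjusted odds of the outcome relative to the reference group; an aOR above 1 (as for parenteral nutrition) indicates higher adjusted odds.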
About 2,011 (0.53%) received enteral nutrition, while 4,174 (1.1%) received parenteral nutrition. The mean age of the study population was 51.7 years (SD 0.1). A total of 2,280 deaths were recorded in the study. After adjustments, enteral nutrition was associated with significantly lower odds of mortality (adjusted OR (aOR): 0.833; 95% CI: 0.497-0.933; P < 0.001). Parenteral nutrition was linked with significantly greater odds of mortality (aOR: 6.957; 95% CI: 4.730-10.233; P < 0.001). Both enteral nutrition and parenteral nutrition were associated with augmented odds of complications and prolonged hospitalization (P < 0.001) compared to normal oral feeding. Initiation of enteral nutrition within 24 hours of admission did not improve the odds of mortality in this study (aOR: 5.619; 95% CI: 1.900-16.615; P = 0.002). Conclusion: Enteral nutrition demonstrates better outcomes in mortality rates and systemic complications compared to parenteral nutrition in patients unable to maintain normal oral feeding. abstract_id: PUBMED:32204976 Micronutrients in paediatric Intestinal Failure Patients receiving home parenteral nutrition. Background & Aims: Children with intestinal failure (IF) receive parenteral nutrition to ensure adequate growth and development. The aim of this study was to assess micronutrient status in paediatric IF patients receiving home parenteral nutrition (HPN) in comparison to a group of healthy children. Methods: An observational cross-sectional study was performed at Oslo University Hospital and at the Department of Nutrition, University of Oslo from January to September 2017. All children with IF, aged two to 18 years, were invited to participate. A reference group of healthy children was recruited through social media advertisement. Dietary intake was assessed by a four-day food record, and enteral and parenteral provision was recorded. Blood samples were analysed for vitamins, minerals and haematology to assess iron status. Two spot urine samples from each subject were analysed for iodine concentration (UIC) and creatinine. Results: Nineteen children with IF and 50 healthy children were included. The mean age of the participants was 10.0 years. IF patients received a median of 76% of their estimated energy requirements from parenteral nutrition (PN). Recommended intake (RI) for iodine from the diet was reached by 16% of IF patients and 28% of healthy children. In the IF group there was a significant positive correlation between UIC and the percentage of iodine intake from oral diet and enteral nutrition support (r = 0.57, p = 0.03). Although the IF patients had a median parenteral iodine supply of more than twice the ESPGHAN recommendation, the median UIC was 89 μg/L, indicating insufficient iodine status. This may suggest that the ESPGHAN recommendation for iodine in paediatric parenteral nutrition is too low. The healthy children had sufficient iodine status according to the median UIC (133 μg/L). IF patients had significantly lower total provision of iron compared to the healthy children, 4.9 vs. 8.4 mg/day (p = 0.01), with 21% of IF patients and 28% of healthy children reaching the RI for iron. The prevalence of anaemia was higher in IF patients than in the healthy children (40% vs. 10%, p = 0.016). Conclusion: The study indicates an insufficient iodine and iron status among paediatric IF patients. Iodine status was associated with enteral provision, and patients had insufficient status even if they received the ESPGHAN recommendation of iodine. Trial Identification Number: Clinical Trials AEV2017/1.
2016/391/REK sør-øst B. abstract_id: PUBMED:18051997 Consensus on paediatric parenteral nutrition: a document approved by SENPE/SEGHNP/SEFH. In current medical practice, standardization of procedures has become a desirable objective. The Spanish Society of Parenteral and Enteral Nutrition (SENPE) has undertaken a considerable effort to create clinical practice guidelines or documents of consensus and quality gold standards in artificial nutrition. As a result, the SENPE standardization group has put together a document called "Consensus of Paediatric Parenteral Nutrition", supported by the Spanish Society of Gastroenterology, Hepatology and Nutrition (SEGHNP) and by the Spanish Society of Hospital Pharmacies (SEFH). This is a reduced version of our work. The complete version is available at www.nutricionhospitalaria.com and www.senpe.com. This document covers many aspects such as paediatric patient indications, vascular access, and macro- and micronutrient needs, as well as preparation in special situations, components, forms of prescription, possibilities for standardization of the elaboration and prescription, elaboration, administration, monitoring, complications and side effects. Much research has been done, incorporating current publications, books and clinical practice guidelines such as those published by the American Society for Parenteral and Enteral Nutrition (ASPEN) and by the European Society of Paediatric Gastroenterology, Hepatology and Nutrition (ESPGHAN) in collaboration with the European Society for Clinical Nutrition and Metabolism (ESPEN). abstract_id: PUBMED:21892570 Stability study of paediatric extemporaneous parenteral nutrition with lipids. Unlabelled: Stability of extemporaneous parenteral nutrition is a critical aspect of these formulations, with an impact on patient safety and quality of service. In lipid emulsions, physical stability can be assessed by the increase in the number of lipid globules larger than 500 nm, generated by coalescence of small globules over time. Objectives: To determine the mean size of the lipid globules that compose the internal phase of TNA, in order to evaluate its stability and establish the beyond-use date of the parenteral nutrition. To evaluate the distribution profile of the lipid globules in the parenteral nutrition and compare it with that of the lipid emulsion used as raw material. Method: Globule size assessment by dynamic light scattering in a frequently used paediatric extemporaneous parenteral nutrition formula, stored for different periods of time and at different temperatures. Results: The mean globule size of the analyzed parenteral nutrition samples did not exceed the limit recommended in the literature. The mean size and distribution of the lipid globules in the original lipid emulsion did not change significantly after compounding of the parenteral nutrition. Discussion: The data obtained suggest that the extemporaneous parenteral nutrition evaluated could be assigned a longer beyond-use date than the one now in use. This research should be extended to other parenteral nutrition formulas in order to optimize the setting of beyond-use dates. abstract_id: PUBMED:12428302 Survey on parenteral nutrition preparation variability in pediatrics. Unlabelled: The prescription and preparation of paediatric parenteral nutrition in Spain are subject to great variability. Aim: To identify how paediatric parenteral nutrition is prescribed and prepared in Spain.
Material And Methods: During the first quarter of 2001, a telephone survey was carried out among most of the hospitals in which parenteral nutrition is habitually prepared. The survey included questions on who was in charge of the prescription, the use of different solutions, addition of supplements (carnitine, heparin and glutamine), as well as information on the shelf-life of the mixtures. Subsequently, the results of the survey were compared with the following guidance documents: "Enteral and parenteral nutrition in paediatrics", drafted under the auspices of the Spanish Association for Paediatric Gastroenterology, Hepatology and Nutrition (2000), and the "Guidelines for the use of parenteral and enteral nutrition in adult and paediatric patients"/"Nutrition support practice manual" from the American Society for Parenteral and Enteral Nutrition (1998). Results: Of the 48 hospitals surveyed, 12 did not prepare paediatric parenteral nutrition. The number of food bags prepared daily correlated directly with the size of the hospital. In all cases, the paediatricians were responsible for prescription. In 87% of the centres, this prescription was customized (i.e. solutions adapted to each individual patient). All of the hospitals used dextrose as the source of carbohydrates and specific amino acid solutions for paediatric medicine. Lipid emulsions with long-chain triglycerides were used in 65% of cases, and another 19% used physical mixtures of MCT and LCT. Only half of the hospitals routinely used all-in-one mixtures. Inorganic phosphate continued to be used in most cases (78%) versus sodium glycerol phosphate. Vitamins and trace elements were added daily in 65% of the hospitals, with alternate administration in the remainder. In half of the centres, heparin was added to the mixture, and carnitine in 27%. For 40% of the centres responding to the survey, the solution had to be used within 24 hours of its preparation; 11% did not indicate the shelf-life. Conclusions: Although parenteral nutrition is prescribed by the paediatricians on all occasions, the preparation protocols differ significantly between hospitals. Standardization is exceptional. It is noteworthy that all-in-one mixtures are only used in half of the hospitals surveyed. We suggest the creation of a multidisciplinary working party (pharmacists, paediatricians, neonatologists) in order to draw up protocols for the preparation of paediatric parenteral nutrition. abstract_id: PUBMED:32133114 Paediatric parenteral nutrition: current issues. Parenteral nutrition transformed the prognosis for infants and children with intestinal failure. Soon after its introduction into clinical care 50 years ago, parenteral nutrition was also rapidly adopted for use in the preterm infant, where immaturity of gastrointestinal motor function precluded enteral feeding. Preterm infants subsequently became the single largest group of patients to be fed in this way. Although the development of scientific knowledge and the lessons of clinical experience have reduced the risk of complications, some of the problems and difficulties associated with this form of nutritional support remain challenging. These include central venous catheter-related sepsis, thrombosis, liver disease, bone disease and metabolic disturbance. In an initiative to promote best practice, guidelines on parenteral nutrition were first published by the European Society for Paediatric Gastroenterology, Hepatology and Nutrition and collaborating organisations in 2005.
These were constructed following a thorough review of the scientific literature, allowing a series of evidence-based recommendations to be made. The exercise was repeated just over 10 years later, and updated guidelines were published in 2018. This review summarises key elements from the new guideline, with a focus on what has changed since 2005. abstract_id: PUBMED:21170688 Inpatient enteral and parenteral nutrition for patients with diabetes. Both glycemic control and adequate nutrition support impact the clinical outcome of hospitalized patients. Providing nutrition to malnourished patients using the enteral or parenteral route may increase the risk of hyperglycemia, especially in patients with diabetes. Hyperglycemia can be managed through the use of enteral tube feeds with reduced carbohydrate content or by limiting the carbohydrate concentration in parenteral formulas. Judicious use of insulin or other glucose-lowering medications synchronized with appropriate nutrition support allows for optimal inpatient glycemic control. abstract_id: PUBMED:30442387 Do carnitine and extra trace elements change stability of paediatric parenteral nutrition admixtures? Introduction: High concentrations of trace elements (TE), in particular zinc and selenium, along with carnitine, are often added to parenteral admixtures in paediatric patients on long-term Parenteral Nutrition (PN). We aim to evaluate whether lipid droplet diameters of these admixtures maintain the recommended range of 0.4-1.0 μm. Materials And Methods: Stability studies were carried out on six parenteral admixtures with carnitine, trace elements and electrolytes added in different amounts. Each admixture was formulated with five different lipid emulsions with or without fish oil. Analyses were performed at time 0 (t = 0) and 24, 48, 72, and 96 (t = 96) hours after compounding. Droplet diameters were determined by the Light Scattering-Reverse Fourier Optics Technique. Samples, stored at 4 °C, were triple tested for a total of 450 analyses. Regression analyses were performed using panel-data techniques. Results: During the 4 days, lipid droplet diameters were in the expected range of 0.4-1.0 μm regardless of trace element and carnitine amounts in all admixtures apart from those containing fish oil-based emulsions and calcium concentrations equal to 4.5 mmol/L. In these latter admixtures, 12% of droplet diameters were larger than 1.0 μm and 2% exceeded 5.0 μm immediately after compounding. Conclusion: Carnitine and high concentrations of trace elements do not affect PN admixture stability and can be safely infused in long-term home-PN paediatric patients and premature infants. Only high calcium concentrations in the presence of fish oil-based lipid emulsions seem to change PN stability. abstract_id: PUBMED:15858953 How much glutamate is toxic in paediatric parenteral nutrition? Aim: To discuss a hypothesis regarding the impact and possible side effects of glutamate in paediatric parenteral nutrition. Results: Published evidence suggests that the arcuate nucleus, which is a potent site of leptin action, is severely damaged by elevated glutamate levels. Early administration of glutamate (GLU) to the neonatal rat disrupts the hypothalamic signalling cascade of leptin action.
Conclusion: We are concerned that GLU-containing parenteral nutrition may not only increase the risk of hypothalamic damage in neurosurgical patients with an impaired blood-brain barrier, and in patients with periventricular leukomalacia, but may also permanently damage the arcuate nucleus neurones in the very immature infant. This may result in later impairment of feeding regulation, obesity, hyperleptinaemia, and other symptoms that characterize the "thrifty phenotype" and the dysmetabolic syndrome. We strongly suggest reconsidering the recommended daily allowances of amino acids, particularly the use of GLU, in current paediatric parenteral nutrition. Answer: Yes, it is worth knowing the characteristics of inpatient paediatric parenteral nutrition. Understanding these characteristics is essential for evaluating standard practice and developing strategies for improvement (PUBMED:20018577). It is particularly important because infants and toddlers are the principal candidates for parenteral nutrition, and standardization can help decrease errors in its preparation and prescription (PUBMED:20018577). Moreover, the stability of parenteral nutrition is critical for patient safety and service quality, and knowledge of its characteristics can inform beyond-use dates and stability assessments (PUBMED:21892570). Additionally, the mode of nutrition, whether enteral or parenteral, has a significant impact on outcomes in conditions such as acute pancreatitis, with enteral nutrition showing better outcomes in mortality rates and systemic complications compared to parenteral nutrition (PUBMED:37818490). Furthermore, the micronutrient status of paediatric patients receiving home parenteral nutrition is an area of concern, with studies indicating insufficient iodine and iron status among these patients (PUBMED:32204976). Consensus documents and guidelines, such as those from SENPE/SEGHNP/SEFH and ESPGHAN, emphasize the importance of standardizing procedures in paediatric parenteral nutrition to promote best practices and reduce the risk of complications (PUBMED:18051997, PUBMED:32133114). Variability in the preparation of parenteral nutrition can lead to significant differences in clinical practice, and standardization is often the exception rather than the rule (PUBMED:12428302). In summary, knowing the characteristics of inpatient paediatric parenteral nutrition is crucial for optimizing patient care, ensuring adequate growth and development, minimizing the risk of complications, and improving clinical outcomes.
Instruction: Is there a link between premature ovarian failure and serum concentrations of vitamin D, zinc, and copper? Abstracts: abstract_id: PUBMED:22968257 Is there a link between premature ovarian failure and serum concentrations of vitamin D, zinc, and copper? Objective: The risk of primary ovarian insufficiency (POI) increases in association with autoimmune conditions. Adequate intake of vitamin D (vit D) and trace elements is required for the immune system to function efficiently. The aim of this study was to evaluate vit D, zinc, and copper blood levels in women with POI who had given birth to at least one child and in women with normal menstrual cycles. Methods: This was a cross-sectional, case-control study involving 63 participants divided into two groups: the study group, which is composed of 35 women with POI, and the control group, which is composed of 28 women with normal menstrual cycles. Serum concentrations of zinc, vit D, and copper were determined for each participant. Results: Women with POI had significantly higher serum copper levels and copper-to-zinc ratio but significantly lower serum vit D and zinc levels when compared with the healthy control group. Serum follicle-stimulating hormone levels were inversely correlated with zinc and vit D levels and positively correlated with the copper-to-zinc ratio and copper levels. Vit D levels were inversely correlated with follicle-stimulating hormone levels, copper-to-zinc ratio, and copper levels and positively correlated with zinc levels. Conclusions: Most women with POI are deficient in vit D. Zinc, copper, and vit D seem to correlate with hormonal status in the participants. The present study may generate hypotheses for future studies that will investigate the possible mechanisms behind alterations in trace elements and vit D deficiency in women with POI and whether these changes could be used to screen for the risk of developing POI. abstract_id: PUBMED:23812652 Risk factors for diabetes mellitus in women with primary ovarian insufficiency. Primary ovarian insufficiency (POI) is not only a gynecological problem but also has serious effects on women's health, such as changes in hormone levels that can trigger fluctuations in blood sugar level and inflammation status. The present study was designed to determine vitamin D, copper, zinc, metabolic parameters [insulin, homeostasis model assessment-insulin resistance (HOMA-IR)], inflammation parameters such as procalcitonin and high sensitivity C reactive protein (hs-CRP), and lipid profile in POI patients and control subjects with normal menstrual cycles. A total of 43 patients with nondiabetic POI were studied in order to evaluate and compare the findings with those of the control group, which comprised 33 women with normal menstrual cycles. The women with POI had higher levels of serum copper, serum insulin, glucose, LDL-cholesterol, total cholesterol, HOMA-IR, hs-CRP, and procalcitonin, whereas serum vitamin D and zinc levels were lower compared with the healthy control group. Follicle-stimulating hormone (FSH) levels were positively correlated with insulin, glucose, HOMA-IR, hs-CRP, procalcitonin, and copper and negatively correlated with vitamin D and zinc levels. In multivariate statistical analyses with body mass index and FSH as dependent variables, FSH was positively associated with copper and HOMA-IR negatively with vitamin D levels.
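For context, the HOMA-IR index referenced in the abstract above is conventionally computed from fasting measurements; this is the standard Matthews formula, not a detail given in the abstract itself:

\[ \mathrm{HOMA\text{-}IR} = \frac{\text{fasting insulin}\ (\mu\mathrm{U/mL}) \times \text{fasting glucose}\ (\mathrm{mmol/L})}{22.5} \]

With glucose measured in mg/dL, the denominator becomes 405; higher values indicate greater insulin resistance.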
The present study demonstrated that women with POI have traditional risk factors for diabetes mellitus, including lower levels of vitamin D as well as higher levels of copper and HOMA-IR. abstract_id: PUBMED:38311764 Micronutrients intake and genetic variants associated with premature ovarian insufficiency; MASHAD cohort study. Background And Aim: Premature ovarian insufficiency (POI) is defined as menopause before 40 years of age, and its prevalence is reported to be two-fold higher in Iranian women than the average for women globally. POI is associated with several cardio/cerebrovascular complications as well as an increased overall mortality. Genetic factors, and serum levels of minerals and vitamin D, have been reported to be related to the prevalence of POI. We have investigated the association between some POI-related genotypes and the serum levels of some important micronutrients. Methods: One hundred and seventeen women with POI and 183 controls without any renal, hepatic, and thyroid abnormalities were recruited as part of the MASHAD study. Demographic and anthropometric features were recorded and blood samples were collected and processed. DNA was extracted from the buffy coat of blood samples from all participants, and 8 POI-related single nucleotide polymorphisms (SNPs) were determined using ASO-PCR or Tetra ARMS-PCR. Serum minerals and vitamin D concentrations were measured using routine methods. Results: In women with POI, serum copper, phosphate, and calcium were significantly different for those with rs244715, rs16991615, and rs4806660 genotypes, respectively. In our control population, significant differences were also found in serum copper concentrations between different genotypes of rs4806660, rs7246479, rs1046089, and rs2303369. After adjusting for all confounding factors, the women with POI carrying a TC genotype (rs4806660) had a lower risk of having serum copper levels < 80 µg/dL than those carrying a TT genotype. Furthermore, women with POI carrying a GG genotype (rs244715) had a 6-fold higher risk of having serum copper levels > 155 µg/dL than those carrying an AA genotype. Conclusion: The C and G alleles of the rs4806660 and rs244715 polymorphisms, respectively, are independently associated with serum copper in women with POI. Further studies are necessary to investigate the association of serum copper and other micronutrients in women with other POI-related polymorphisms. abstract_id: PUBMED:35379206 Association between serum vitamin A levels and premature ovarian insufficiency: a case-control, cross-sectional survey study. Background: Although vitamin A is known to play an important role in ovarian function, its association with ovarian insufficiency has not been reported yet. Therefore, the aim of the study was to explore the association between serum vitamin A levels and premature ovarian insufficiency (POI). Methods: This cross-sectional survey included women with POI (n = 47) and normo-ovulatory controls (n = 67) who were enrolled between December 2016 and May 2018 in Zhejiang, China. The serum levels of follicle-stimulating hormone (FSH), luteinizing hormone (LH), estradiol (E2), anti-Müllerian hormone (AMH), vitamin A, and total cholesterol (TC) were measured for each participant. The association of TC-adjusted vitamin A levels with the risk of POI was assessed using binary logistic regression analysis.
Results: Serum vitamin A levels appeared to be slightly higher in the POI group than in the control group, but there was no evidence of a statistically significant difference (728.00 ± 176.00 µg/L vs. 503.93 ± 145.64 µg/L, p = 0.13). After adjustment for serum lipid levels, the serum vitamin A/TC ratio was significantly lower in the POI group than in the control group (143.14 ± 35.86 vs. 157.56 ± 35.21 µg/mmol, p = 0.04). Further, the serum vitamin A/TC ratio was significantly and inversely associated with POI risk (unadjusted odds ratio [OR] = 0.988, 95% confidence interval [CI]: 0.977-0.999, p = 0.04). The association remained after adjusting for confounding factors (age, BMI, annual household income, and education) (OR = 0.986, 95% CI: 0.972-0.999, p = 0.04). Conclusions: The serum vitamin A/TC ratio was inversely associated with POI risk. Therefore, the serum vitamin A/TC ratio may serve as a predictive factor for POI, and vitamin A supplementation may help prevent or treat POI. abstract_id: PUBMED:24522025 The role of vitamin D in impaired fertility treatment. Vitamin D is currently in the scope of research in many fields of medicine. Despite that, its influence on health remains uncertain. This paper presents a review of the publications concerning the role of calciferol in reproduction processes and its significance in infertility therapy, covering the topics of polycystic ovary syndrome, endometriosis infertility, myoma infertility, male infertility, premature ovary failure and in vitro fertilization techniques. The results of the latest research articles in those fields have been discussed and summarized. The deficiency of vitamin D, defined as a concentration of 25-hydroxycalciferol <20 ng/ml, is frequently noted in patients of fertility clinics. Serum vitamin D concentration in healthy women is higher compared to PCOS patients. Supplementation with vitamin D should be included in the schemes of PCOS treatment, both due to improved insulin resistance and the results of infertility treatment. The explanation of the mechanism of vitamin D activity in patients with PCOS requires further research. Vitamin D has a direct effect on AMH production, and may thus support longer maintenance of ovarian reserve in patients with higher concentrations. The occurrence of uterine myomas in the group with vitamin D deficiency was evaluated as much higher compared to controls. On the other hand, it is supposed that a high concentration of calciferol may be related to an impaired elimination of endometrial cells passing to the peritoneal cavity via ovarian reflux, causing endometriosis. In male infertility, both low (<20 ng/ml) and high (>50 ng/ml) concentrations of vitamin D in serum negatively affect spermatozoa number per ml of semen, their progressive movement and morphology. Significant differences in the response to ovulation stimulation, or in the number and quality of embryos, depending on vitamin D concentration were not observed in any of the analyzed papers concerning the role of vitamin D in in vitro fertilization (IVF). Better results in patients without calciferol insufficiency are explained by reports of a high concentration of vitamin D and its metabolites in human decidua collected in the 1st trimester of pregnancy, which suggests its contribution to proper implantation and local immunological preference of the embryo.
It is accepted that treatment is required when the vitamin D concentration falls below 20 ng/ml (approximately 50 nmol/l), especially in obese women, those with insulin resistance and small ovarian reserve, and in men with oligo- and asthenozoospermia. abstract_id: PUBMED:34488147 Urinary trace elements in association with premature ovarian insufficiency and reproductive hormones in a Chinese population. Homeostasis disturbance of trace elements has been linked to adverse reproductive consequences, including premature ovarian insufficiency (POI) in women, but limited evidence has been reported so far. This case-control study evaluated the associations between 5 common urinary trace elements [copper (Cu), manganese (Mn), iron (Fe), selenium (Se), and zinc (Zn)] and the odds for POI. Urinary concentrations of these 5 metals and serum levels of POI-related reproductive hormones of 169 cases and 209 healthy controls were measured. The urinary levels of Cu and Se in women with POI were significantly higher than those in the controls. Positive associations were observed between Cu levels and the odds of POI [for the medium tertile: odds ratio (OR) = 3.79, 95% CI: 1.98-7.27, p < 0.001; for the highest tertile: OR = 3.85, 95% CI: 2.00-7.41, p < 0.001]. The highest tertile of urinary Se levels was associated with increasing POI risk (for the highest tertile: OR = 2.54, 95% CI: 1.38-4.70, compared with the lowest tertile, p for trend = 0.001). In POI patients, urinary concentrations of Zn and Fe were negatively associated with serum levels of follicle-stimulating hormone (FSH). Our findings suggested that higher exposure levels of Cu and Se might lead to an increased risk of POI. abstract_id: PUBMED:26035242 The role of vitamin D in reproductive health--a Trojan Horse or the Golden Fleece? In the last decade, vitamin D was in the spotlight in many fields of research. Despite numerous publications, its influence on reproductive health remains ambiguous. This paper presents an up-to-date review of current knowledge concerning the role of cholecalciferol in human reproduction. It covers various infertility issues, such as polycystic ovary syndrome, endometriosis, myoma-induced infertility, male infertility, premature ovary failure and in vitro fertilization techniques. Vitamin D deficiency, defined as a serum concentration of 25-hydroxycalciferol of less than 50 nmol/L, is commonly noted, and not only in fertility clinic patients. It is a global trend that is observed in all age groups. The results of original publications dated up to 2015 have been summarized and discussed in a critical manner. Most experts agree that vitamin D supplementation is a necessity, particularly in women suffering from obesity, insulin resistance or small ovarian reserve, as well as in men with oligo- and asthenozoospermia, if the serum concentration should fall below 50 nmol/L (normal range up to 125 nmol/L). The high concentration of vitamin D and its metabolites in decidua during the 1st trimester suggests its important role in the implantation process and in local immunological protection of the embryo. On the other hand, evidence-based research has so far not proved a significant difference in ovulation stimulation or embryo development depending on vitamin D level.
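A note on the two unit conventions used in these abstracts for 25-hydroxyvitamin D: with a molar mass of roughly 400.6 g/mol, concentrations in ng/mL convert to nmol/L by a factor of about 2.496, which is why the 20 ng/ml and 50 nmol/L deficiency thresholds quoted above coincide:

\[ 20\ \mathrm{ng/mL} \times 2.496 \approx 50\ \mathrm{nmol/L} \]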
In one of the publications, it was also found that vitamin D binding protein (VDBP) has a molecular similarity to anti-sperm antibodies, and another one concluded that both low (<50 nmol/L) and high (>125 nmol/L) concentrations of vitamin D are associated with a decreased number and quality of spermatozoa in semen. Vitamin D is definitely not a Trojan Horse in reproductive health, since there were no adverse effects reported for vitamin D intake of up to 10,000 IU/day, but to proclaim it the Golden Fleece, more evidence is needed. abstract_id: PUBMED:30107585 Vitamin D Status Is Not Associated with Risk of Early Menopause. Background: Early natural menopause, the cessation of ovarian function before age 45 y, is positively associated with cardiovascular disease and other conditions. Dietary vitamin D intake has been inversely associated with early menopause; however, no previous studies have evaluated risk with regard to plasma 25-hydroxyvitamin D [25(OH)D] concentrations. Objective: We prospectively evaluated associations of total and free 25(OH)D and vitamin D-binding protein (VDBP) concentrations and the risk of early menopause in a case-control study nested within the Nurses' Health Study II (NHS2). We also considered associations of 25(OH)D and VDBP with anti-Müllerian hormone (AMH) concentrations. Methods: The NHS2 is a prospective study in 116,430 nurses, aged 25-42 y at baseline (1989). Premenopausal plasma samples were collected between 1996 and 1999, from which total 25(OH)D and VDBP concentrations were measured and free 25(OH)D concentrations were calculated. Cases experienced menopause between blood collection and age 45 y (n = 328) and were matched 1:1 by age and other factors to controls who experienced menopause after age 48 y (n = 328). Conditional logistic regression models were used to estimate ORs and 95% CIs for early menopause according to each biomarker. Generalized linear models were used to estimate AMH geometric means according to each biomarker. Results: After adjusting for smoking and other factors, total and free 25(OH)D were not associated with early menopause. Quartile 4 compared with quartile 1 ORs were 1.04 (95% CI: 0.60, 1.81) for total 25(OH)D and 0.70 (95% CI: 0.41, 1.20) for free 25(OH)D. 25(OH)D was unrelated to AMH concentrations. VDBP was positively associated with early menopause; the OR comparing the highest with the lowest quartile of VDBP was 1.80 (95% CI: 1.09, 2.98). Conclusions: Our findings suggest that total and free 25(OH)D are not importantly related to the risk of early menopause. VDBP may be associated with increased risk, but replication is warranted. abstract_id: PUBMED:33438304 The relationship between vitamin E level and premature ovarian insufficiency. Aim: The aim of this study is to investigate the role of vitamin E in the etiology of premature ovarian insufficiency (POI). Methods: This study included a total of 96 matched cases and controls: a case group composed of 40 women with POI, and a control group composed of 56 women with normal menstrual cycles. Serum levels of vitamin E (also known as α-tocopherol), total cholesterol, follicle stimulating hormone (FSH), luteinizing hormone (LH) and anti-Müllerian hormone (AMH) were determined. The relationship of α-tocopherol with FSH, LH and AMH levels in women was evaluated using regression models.
Results: We found a negative association of α-tocopherol levels with FSH and LH levels, and a positive correlation between α-tocopherol and AMH levels in the whole study population. In this study, α-tocopherol levels were significantly lower in women with POI than in women with normal menstrual cycles. Conclusion: Vitamin E levels showed a statistically significant difference between the POI and control groups (P < 0.001). Therefore, vitamin E is correlated with POI. abstract_id: PUBMED:32469224 Vitamin D3 action within the ovary - an updated review. Vitamin D3 is well-known as a major regulator of calcium and phosphorus homeostasis. A growing body of evidence highlights its crucial role in the regulation of reproductive processes in females. The role of vitamin D3 in the female reproductive tract has been extensively investigated because its receptor is abundant in reproductive organs, including the ovary. Importantly, besides expressing the vitamin D3 receptor, the ovary is an extrarenal site of vitamin D3 metabolism. The influence of vitamin D3 on follicular development and ovarian steroidogenesis has been investigated. Furthermore, vitamin D3 deficiency has also been associated with polycystic ovary syndrome, premature ovarian failure and ovarian cancer. The objective of this review is to summarize our knowledge about the contribution of vitamin D3 to physiological and pathological processes within the ovary. Answer: Yes, there is evidence suggesting a link between premature ovarian failure (POI) and serum concentrations of vitamin D, zinc, and copper. Studies have found that women with POI tend to have significantly higher serum copper levels and copper-to-zinc ratios, but significantly lower serum vitamin D and zinc levels compared to healthy control groups with normal menstrual cycles (PUBMED:22968257; PUBMED:23812652). Additionally, serum follicle-stimulating hormone (FSH) levels were inversely correlated with zinc and vitamin D levels and positively correlated with the copper-to-zinc ratio and copper levels (PUBMED:22968257). Moreover, women with POI have been shown to have traditional risk factors for diabetes mellitus, including lower levels of vitamin D and higher levels of copper and homeostasis model assessment-insulin resistance (HOMA-IR) (PUBMED:23812652). Genetic variants associated with POI have also been linked to serum copper levels, suggesting a genetic component to the observed mineral imbalances in POI (PUBMED:38311764). Furthermore, other studies have explored the role of vitamin D in reproductive health and its significance in infertility therapy, including its effects on conditions such as polycystic ovary syndrome, endometriosis infertility, myoma infertility, male infertility, and in vitro fertilization techniques (PUBMED:24522025; PUBMED:26035242). However, it is important to note that while some studies have found associations between vitamin D status and the risk of early menopause, others have not found a significant relationship (PUBMED:30107585). In summary, the available evidence suggests that there are correlations between serum concentrations of vitamin D, zinc, and copper and the occurrence of POI, although the exact mechanisms and the potential for using these findings in screening or treatment require further investigation.
Instruction: Teacher expectations and perceptions of student attributes: Is there a relationship? Abstracts: abstract_id: PUBMED:30961877 The interpersonal character of teacher expectations: The perceived teacher-student relationship as an antecedent of teachers' track recommendations. Teacher expectations of students have long been recognized as a form of interpersonal expectations. In this study, we aimed to investigate the interpersonal character of teacher expectations by assessing 1) whether teacher expectations and the teacher-student relationship shared similar antecedents in terms of demographic characteristics of students, and 2) whether the dimensions closeness, conflict, and dependency of the teacher-student relationship were predictive of teacher expectations. Analyses were based on a large sample of 9881 students in 614 classes in the final grade of primary education. The results indicated that teacher expectations - as measured by track recommendations - and the teacher-student relationship were not consistent in antecedents. Student performance, parental education, and closeness were positive predictors of track recommendations, whereas they were negatively associated with conflict and dependency. Ethnicity was positively associated with track recommendations, but negatively with closeness. Furthermore, perceived closeness and conflict were not statistically significantly associated with track recommendations. A negative association was found for perceived dependency with teachers' track recommendations, although the latter association appeared stronger for high-performing students. Finally, the results showed that teachers differed in how they weighed the conflict and dependency dimensions when formulating track recommendations, indicating that those teacher perceptions played a stronger role in some of the classes. abstract_id: PUBMED:34219230 Parent and teacher educational expectations and adolescents' academic performance: Mechanisms of influence. The current study investigated how parents' and teachers' educational expectations both directly and indirectly shaped young people's academic outcomes in a nationally representative sample of high school students (Education Longitudinal Study; N = 9654 adolescents). Higher parent and math teacher expectations in 10th grade were associated with better 12th grade math scores and higher grade point averages, math course-taking sequence, and educational attainment two years post-high school. High parent expectations generally magnified the particularly strong positive effects of high math teacher expectations, and there was some evidence of variation in links between adult expectations and outcomes by both student race/ethnicity and socioeconomic status. Parents' educational involvement at school, teacher-student relationships, and school-parent communication mediated the links between adult educational expectations and academic outcomes. abstract_id: PUBMED:34287895 Teacher-Student Incongruence in Perceptions of School Equity: Associations with Student Connectedness in Middle and High Schools. Background: School equity refers to the extent to which students are treated fairly, ensuring that each student receives what they need to be successful. School staff can play a vital role in creating an equitable school climate for adolescents, but little is known about how staff perceive equity in their school and how this in turn may relate to students' perceptions.
This study sought to explore congruence between teacher and student perceptions of school equity and how congruence or incongruence related to students' sense of connectedness to school. Methods: Data for the study came from the Maryland Safe and Supportive Schools Climate Survey, which was administered online to 5523 school staff and 59,218 students across 104 middle and high schools. Results: Multilevel models indicated that, compared with schools where both staff and student ratings of equity were high, connectedness was lower in schools with low staff and low student ratings, as well as in schools with discordant ratings (high staff with low student, or low staff with high student). Conclusions: The findings suggest that staff perceptions of school equity are often higher than students' and that incongruence in perceptions may have a negative impact on students' connection to school.

abstract_id: PUBMED:29961480 How far can the apple fall? Differences in teacher perceptions of minority and immigrant parents and their impact on academic outcomes. While a large body of research has focused on increasing parental involvement in schools, less work has considered teacher perceptions of parental involvement. Teacher perceptions of parents are important because they influence teacher practices and relationships with students, with ensuing consequences for student outcomes. Prior research suggests that teacher perceptions of parents vary by children's family background, but empirical work comparing teacher perceptions of parental involvement across groups and the impact of such perceptions on different student outcomes is lacking. Using nationally representative data from the Education Longitudinal Study of 2002, we find that even after taking into account parents' self-reported involvement in their adolescent children's education, teachers are less likely to perceive that minority immigrant parents are as involved as native-born White parents. Patterns also differ for teachers of English and teachers of math in ways that are consistent with racial and ethnic stereotypes about academic ability. Further, teacher perceptions of parental involvement matter for student GPAs and teacher recommendations.

abstract_id: PUBMED:25847064 The influence of teacher feedback on children's perceptions of student-teacher relationships. Background: Teachers can deliver feedback using person ('you are clever') or process terms ('you worked hard'). Person feedback can lead to negative academic outcomes, but there is little experimental research examining the impact of feedback on children's perceptions of the student-teacher relationship. Aim: We examined the effects of person, process, and no feedback on children's perceptions of their relationship with a (fictional) teacher following success and failure. Samples: Participants were British children (145 aged 9-11 in experiment 1 and 98 aged 7-11 in experiment 2). Method: In experiment 1, participants read three scenarios where they succeeded and received one of two types of praise (person or process) or no praise. Participants then read two scenarios where they failed. In experiment 2, participants read that they had failed in three tasks and received one of two types of criticism (person or process) or no criticism. Participants then read two scenarios where they succeeded. They rated how much they liked the teacher and how much they felt that the teacher liked them. Results: Children felt more positive about the student-teacher relationship following success than failure.
Type of praise did not influence perceptions of the student-teacher relationship following success or failure. However, person criticism led children to view the student-teacher relationship more negatively following failure and maintain this negative view following the first success. Conclusions: Success appears to be important for developing positive student-teacher relationships. In response to failure, teachers could avoid person criticism, which may negatively influence the student-teacher relationship.

abstract_id: PUBMED:26179738 Accurate, inaccurate, or biased teacher expectations: Do Dutch teachers differ in their expectations at the end of primary education? Background: In several tracked educational systems, realizing optimal placements in classes in the first year of secondary education depends on the accuracy of teacher expectations. Aims: The aim of this study was to investigate between-teacher differences in their expectations regarding the academic aptitude of their students. Sample: The sample consisted of 500 teachers (classes) who provided their expectations of 7,550 students in the final grade of Dutch primary education. Methods: We analysed the extent to which teachers differed in their expectations and in what contexts their expectations were biased, using multilevel random slope models. Results: Multilevel analysis showed teacher expectation bias to be related to gender and socio-ethnic background of students. The differences among teachers in expectations for Turkish, Moroccan, and other foreign students with low-educated parents were larger than the average teacher expectation bias for these groups in the sample. When student characteristics were controlled for, we found that the teachers in our sample had higher expectations for students in high-performing classes or classes with only a small proportion of students from low-SES families. Teacher expectation bias for demographic groups, however, was found to be independent of the class population. Conclusions: The results indicate the importance of the teacher and the necessity of using multilevel models when investigating the complex nature of between-teacher differences in expectations of student performance.

abstract_id: PUBMED:33715782 Associations between teacher expectations and racial disproportionality in discipline referrals. We explored the racial discipline gap and sought to understand the extent to which elementary school teachers' (N = 33; Grades K-6) expectations for their students' (N = 496) future outcomes predicted racial discipline disproportionality. We used multilevel models and Ordinary Least Squares (OLS) regression with corrected standard errors to examine associations between teacher expectations of academic and behavioral student outcomes and patterns of office discipline referrals (ODRs) for the year. Results showed disproportionate ODR outcomes for Black students in comparison to White students. Additionally, we found that students held to lower expectations by their teachers received more ODRs than peers rated highly by their teachers, regardless of race. We also found that for every unit teacher expectations increased, the ODR disparity between Black, Latinx, and White students decreased. Lastly, multilevel models exploring teacher expectations of student outcomes explained approximately 21% of the difference in ODRs between Black and White peers.

abstract_id: PUBMED:38322646 Of Promise and Penalties: How Student Racial-Cultural Markers Shape Teacher Perceptions.
Scholars document considerable disparities in teacher perceptions of students, yet absent from this literature is an examination of how race, ethnicity, and immigration status intersect to influence teacher ratings. This study extends previous research by examining variation in teachers' ratings of academic ability across four conventional racial/ethnic groups as well as thirteen racialized subgroups. Using data from the Early Childhood Longitudinal Study-Kindergarten Class of 1998-1999, we find that black first-graders receive lower ratings in language and literacy, a pattern that holds for both black Americans and black immigrants. In contrast, Asian first-graders receive higher ratings in math; however, this is primarily driven by teachers' much higher ratings of East Asian and Southeast Asian immigrants. These subgroup differences remain even after controlling for a host of background and contextual factors, as well as students' tested ability and academic growth in math and reading. Teacher perceptions of student academic behavior explain lower language and literacy ratings for black Americans and higher math ratings for Southeast Asian immigrants that are present net of background and performance, but higher math ratings for East Asian immigrants remain. We conclude by discussing implications of our approach and findings.

abstract_id: PUBMED:34777094 Student Perceptions of Academic Engagement and Student-Teacher Relationships in Problem-Based Learning. Students' academic engagement depends on a variety of factors that are related to personal learner characteristics, the teacher, the teaching methodology, peers, and other features in the learning environment. Components that influence academic engagement can be cognitive, metacognitive, affective, social, task-related, communicative, and foreign language-related. Rather than existing in isolated spheres, the factors contributing to an individual's academic engagement intertwine and overlap. The relationships students cultivate with others are prominent in several of these areas. Positive interpersonal relationships enhance individuals' enthusiasm for learning (Mercer and Dörnyei, 2020), which benefits sustainable learning success and self-confidence. The relationships between students and teachers and the perceptions students have of their teachers seem to be particularly influential on students' engagement in academic undertakings. Problem-based learning (PBL), a teaching approach particularly suitable for tertiary education, involves students in authentic problem-solving processes and fosters students' self-regulation and teamwork. Intensive relationship-building is one of the key characteristics of this student-centered approach (Amerstorfer, 2020). The study reported in this article explores the connection between the academic engagement of 34 students and their perceptions of three instructors in a teacher education program for pre-service English teachers in Austria. An online questionnaire was used to investigate the participants' perceived academic engagement (effort, dedication, learning success) in a university course that implements PBL as its underlying teaching methodology in comparison to conventional teaching approaches. The study further examines how the students perceived the course instructors' caring, credibility, communication style, and feedback, which leads to new information about how PBL shapes student-teacher relationships. Due to Covid-19, the otherwise face-to-face course was taught online.
abstract_id: PUBMED:34603138 Teacher Expectations and Parental Stress During Emergency Distance Learning and Their Relationship to Students' Perception. School closures in spring 2020 caused by the COVID-19 pandemic were an unprecedented and drastic event for students, parents, and teachers. The unplanned adaptation of classroom instruction to emergency distance learning was necessary to ensure continued education. In this new learning environment, teachers formed expectations for student academic achievement gains, which in turn affected the opportunities for students to learn. Parents faced new challenges in supporting their children's learning. According to parenting stress models, such drastic events can be a stress factor for parents, which in turn affects their children's adjustment. This study analyzed the extent to which parents and teachers affected the perceptions of students in compulsory school toward distance learning through processes at home (individual level) and at the class level with data from multiple informants. On an individual level, the relationships between parents' perceived threat of COVID-19 and their stress due to distance learning, and between students' perceived threat of COVID-19 and their perception of distance learning, were examined. Students' learning behavior was accounted for as a variable related to their perception of distance learning. At the class level, the explanatory character of teacher expectations and class-aggregated achievement gains were examined. Data on students in grades 4 to 8, parents, and teachers in Switzerland were collected with standardized online questionnaires after the period of school closures. A subsample of 539 students, 539 parents, and 83 teachers was analyzed. The results of multilevel structural equation modeling suggested that students had a more positive perception of distance learning if they were able to learn more autonomously (i.e., more motivated and concentrated than in regular classroom instruction) and if their parents felt less stressed in the distance learning setting. Parents were more stressed if they perceived COVID-19 as a threat. Students' perception of the COVID-19 threat was related to their parents' perception but did not explain students' learning behavior. At the class level, if teachers expected high academic achievement gains in distance learning, the average academic achievement gains of a class were greater. The greater the achievement gains were, the more positive the collective student perception of distance learning was.

Answer: Yes, there is a relationship between teacher expectations and perceptions of student attributes. Studies have shown that teacher expectations can be influenced by various student demographic characteristics and can impact academic outcomes. For example, teacher expectations, as measured by track recommendations, were found to be influenced by student performance, parental education, and closeness in the teacher-student relationship, with ethnicity also playing a role (PUBMED:30961877). Additionally, higher parent and teacher expectations were associated with better academic performance, and the links between these expectations and outcomes were mediated by factors such as educational involvement, teacher-student relationships, and school-parent communication (PUBMED:34219230).
Furthermore, teacher perceptions of parental involvement varied by children's family background, with minority immigrant parents often being perceived as less involved than native-born White parents, which in turn affected student GPAs and teacher recommendations (PUBMED:29961480). Teachers' feedback styles also influenced children's perceptions of the student-teacher relationship, with person criticism after failure leading to more negative perceptions (PUBMED:25847064). Teacher expectation bias was found to be related to the gender and socio-ethnic background of students, with differences among teachers in expectations for certain demographic groups (PUBMED:26179738). Teacher expectations were also linked to racial disproportionality in discipline referrals, with lower expectations correlating with more referrals, and higher expectations reducing the disparity in referrals among students of different races (PUBMED:33715782). Moreover, teacher perceptions of students' academic ability were found to be shaped by racial-cultural markers, with black students receiving lower ratings in language and literacy, and Asian students, particularly East Asian and Southeast Asian immigrants, receiving higher ratings in math (PUBMED:38322646). The study on problem-based learning highlighted the connection between academic engagement and student perceptions of their instructors, suggesting that teacher-student relationships are influential on students' engagement (PUBMED:34777094). Lastly, during emergency distance learning due to the COVID-19 pandemic, teacher expectations and parental stress were found to affect students' perceptions of the learning experience, with higher teacher expectations and lower parental stress leading to more positive student perceptions (PUBMED:34603138).
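To make the "multilevel models" mentioned above concrete, here is a minimal mixed-effects sketch in the spirit of PUBMED:33715782 (students nested within teachers). All data are simulated placeholders, and the variable names (expectation, odr, teacher) are hypothetical, not taken from the studies.

```python
# Minimal sketch of a two-level model: ODRs predicted by teacher expectations,
# with a random intercept per teacher. Simulated data, illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_teachers, n_students = 30, 15
teacher = np.repeat(np.arange(n_teachers), n_students)
teacher_effect = rng.normal(0, 0.5, n_teachers)[teacher]  # class-level variation
expectation = rng.normal(0, 1, teacher.size)              # teacher rating of student
# Built-in assumption: lower expectations -> more office discipline referrals.
odr = 2.0 - 0.6 * expectation + teacher_effect + rng.normal(0, 1, teacher.size)

df = pd.DataFrame({"odr": odr, "expectation": expectation, "teacher": teacher})
result = smf.mixedlm("odr ~ expectation", df, groups=df["teacher"]).fit()
print(result.summary())  # fixed effect of expectation should be near -0.6
```

A linear mixed model is used here for brevity; referral counts would more naturally be modeled with a multilevel Poisson or negative binomial regression.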
Instruction: Sentinel lymph node biopsy in atypical Spitz nevi: is it useful? Abstracts:

abstract_id: PUBMED:24694641 Atypical Spitz tumours and sentinel lymph node biopsy: a systematic review. Sentinel lymph node biopsy has been proposed as a diagnostic method for estimation of the malignant potential of atypical Spitz tumours. However, although cell deposits are commonly detected in the sentinel lymph nodes of patients with atypical Spitz tumours, their prognosis is substantially better than that of patients with melanoma and positive sentinel lymph node biopsies. We did a systematic review of published reports to assess the role of sentinel lymph node biopsy as a prognostic method in the management of atypical Spitz tumours. The results of our analysis did not show any prognostic benefit of sentinel lymph node biopsy; having a positive sentinel lymph node does not seem to predict a poorer outcome for patients with atypical Spitz tumours. These findings indicate that, especially in the paediatric population, it might be prudent initially to use complete excision with clear margins and careful clinical follow-up in patients with atypical Spitz tumours.

abstract_id: PUBMED:19040933 Sentinel lymph node biopsy in the pediatric population. Background: Sentinel lymph node biopsy (SLNB) has only been recently used for childhood neoplasms. Methods: We reviewed all patients younger than 19 years who underwent SLNB over a 5-year period. Results: Twenty patients were identified (11 male, 9 female). Sentinel lymph node biopsy was performed for 10 sarcomas (5 synovial, 3 rhabdomyosarcoma, 1 epithelioid, 1 other); 9 skin neoplasms (4 melanomas, 3 Spitz nevi, 2 melanocytomas); and 1 acinic cell carcinoma. All patients underwent Technetium 99m sulfur microcolloid injection and 4-quadrant subdermal injection with Lymphazurin 1% (Autosuture, Norwalk, Conn). Six patients required sedation for lymphoscintigraphy. An intraoperative gamma probe was used. Primary lesions were found in lower extremity (n = 8), upper extremity (n = 6), trunk (n = 3), and head and neck (n = 3). The lymphatic basins were inguinal (n = 8), axilla (n = 8), neck (n = 3), and both inguinal and axilla (n = 1). At least one lymph node was identified in each procedure. Of 20 patients, 5 (25%) had metastatic disease (4 skin neoplasms and 1 sarcoma). There were no complications in our series, and all patients are alive with no recurrence at an average follow-up of 2.2 years. Conclusions: Sentinel lymph node biopsy allows for an accurate biopsy in children. However, some younger patients may require sedation, and it may be more challenging to isolate the sentinel node.

abstract_id: PUBMED:15868590 Sentinel lymph node biopsy for melanoma and other melanocytic tumors in adolescents. Background/purpose: Melanoma is rare, accounting for only 1% of all pediatric malignancies. The management of pediatric melanoma is controversial but largely parallels that of an adult occurrence. Sentinel lymph node biopsy (SLNBX) has become a standard of care for adults with melanoma, but the role of this procedure in the staging of pediatric patients remains to be established. The goal of this study was to determine outcomes and complications of children and adolescent patients undergoing SLNBX at the authors' institution. Methods: A retrospective review of patients younger than 21 years (N = 20) undergoing SLNBX for melanoma or other melanocytic skin lesions at the University of Colorado Health Science Center between 1996 and 2003 was conducted.
Results: Sentinel lymph node biopsy was successful in all 20 patients, and 8 patients (40%) were found to have metastases within the sentinel node. As in adults, the sentinel node status correlates with primary tumor depth. No complications occurred in patients undergoing SLNBX, but 4 clinically significant complications (57%) occurred in the 7 patients undergoing a completion lymph node dissection. At 33 months median follow-up, all patients were disease free. Conclusions: Sentinel lymph node biopsy can be successfully and safely performed in pediatric patients for melanoma and atypical nevi. However, the prognostic information and therapeutic implications of SLNBX results for children and adolescents remain unclear. Completion lymph node dissection for microscopic disease is a morbid procedure with uncertain benefit to pediatric or adult patients with a positive SLNBX result. Long-term follow-up data are needed before SLNBX can become a standard of care in pediatric melanoma or as a diagnostic tool to distinguish the atypical Spitz nevus from melanoma.

abstract_id: PUBMED:18724099 Sentinel lymph node biopsy for patients with diagnostically controversial Spitzoid melanocytic tumors? The distinction of a Spitz nevus from melanoma can be very difficult. Pathologists may disagree on whether a Spitzoid melanocytic proliferation is benign or malignant, or acknowledge uncertainty about the diagnosis. As long as melanoma is suspected or strongly considered, a clinical management plan is often adopted as if the patient had melanoma, which may include sentinel lymph node (SLN) biopsy for staging. The findings of the sentinel node may resolve the diagnostic controversy about the primary tumor, but there is also the risk for more diagnostic confusion, uncertainty, and errors. We review the arguments for and against SLN biopsy for patients with diagnostically controversial Spitzoid melanocytic tumors, summarize current experience, and illustrate diagnostic pitfalls. Although SLN biopsy provides prognostic information helpful for clinical trials, we caution against performing the procedure as a diagnostic adjunct.

abstract_id: PUBMED:12653752 Malignant melanoma in an 8-year-old Caribbean girl: diagnostic criteria and utility of sentinel lymph node biopsy. The incidence of malignant melanoma (MM) is continuing to rise, although childhood MM remains rare. We describe an 8-year-old Afro-Caribbean girl who developed a non-pigmented lesion on the tip of her left thumb, which persisted despite treatment in primary care with cryotherapy. At biopsy she was found to have an acquired acral MM. She underwent amputation of the distal phalanx of her thumb, together with positive sentinel lymph node (SLN) biopsy and subsequent axillary lymph node clearance and adjuvant chemotherapy. MMs are very rare in this age and skin-type group, therefore requiring strict diagnostic criteria. These criteria include the distinction from MM mimics, especially Spitz tumours, and an appropriate use of staging techniques such as SLN biopsy to influence management.

abstract_id: PUBMED:25595513 Sentinel lymph node biopsy in paediatric melanoma. A case series. The incidence of melanoma in children is low, and it is particularly rare in children under 10 years old. However, this disease is increasing by a mean of 2% per year. As in adults, the lymph node status is the most important prognostic factor, crucial to performing the selective sentinel lymph node biopsy (SLNB).
We report 3 paediatric patients, aged 3, 4 and 8 years, in whom SLNB was performed for malignant melanoma. Paediatric age makes the scintigraphy scan technically more difficult because of poor patient cooperation; mild sedation was required in some cases, and only planar images could be acquired in others. SPECT/CT was only performed in the oldest patient. In our cases, SLNB was useful for selecting the least invasive surgery in order to reduce morbidity.

abstract_id: PUBMED:23210497 Atypical Spitz tumour with positive sentinel lymph node. A 16-year-old man presented with an atypical Spitz tumour of the left buttock with involvement of the left inguinal sentinel node. After inguinal lymphadenectomy, none of the further 21 left inguinal lymph nodes excised was positive. The patient has been free of disease for 39 months. In a review of published reports, 68 of 147 patients with atypical Spitz tumours who underwent sentinel node mapping had positive nodes. Complete nodal dissection was performed in 66 of these 68 patients. All these patients with a positive sentinel node, including the present patient, were alive and disease-free, with a mean follow-up of 35.8 months. Doubts about the exact nature of atypical Spitz tumours still persist. Although these tumours do not seem to behave like conventional melanomas, since prognosis is highly favourable despite a high incidence of sentinel node deposits, interpretation of this nodal involvement is still debated. A longer follow-up period of surveillance should be useful to definitively resolve this question. A literature review to provide an analysis of current knowledge regarding diagnosis, management, and treatment options was undertaken. Furthermore, current misconceptions leading to under-treatment as well as over-treatment, new trends in diagnosis, and surgical approach are discussed. The real biologic potential of the disease often could not be evaluated by sentinel lymph node status. Although this procedure may have a role, possibly being therapeutic for those patients who have malignant lesions, it produces morbidity and outcomes that are sometimes difficult to interpret in this type of pigmented lesion. Thus, it should be reserved for selected patients.

abstract_id: PUBMED:11324881 Pediatric melanoma: confirming the diagnosis with sentinel node biopsy. Many pediatric melanoma lesions present at a more advanced stage than those in the adult population. Clinical and histological melanoma mimics, including a subset of Spitz nevi, are difficult to discriminate from melanoma. When dealing with a childhood melanoma, the clinician is likely to be faced with a thick lesion, and one in which the actual diagnosis may even be in doubt. There is a paucity of data to guide the physician in the management of melanoma in this age group, particularly with respect to node status and adjuvant therapy. The authors present two cases of pediatric melanoma in which the novel use of sentinel node biopsy helped confirm the diagnosis of melanoma, determined the need for full lymph node dissection, and guided the use of adjuvant interferon therapy.

abstract_id: PUBMED:15991183 Survival in sentinel lymph node-positive pediatric melanoma. Background: Sentinel lymph node (SLN) status is the strongest predictor of survival in adult melanoma. However, the prognostic value of SLN status in children and adolescents with melanoma is unknown.
Methods: Records of 327 patients aged 12 to 86 years undergoing SLN biopsy for melanoma or other melanocytic lesions were reviewed. A literature search identified additional patients younger than 21 years undergoing SLN biopsy for the same indications, and these patients were combined with our series for meta-analysis. Results: Sentinel lymph node metastases were found in 8 (40%) of 20 patients aged 12 to 20 years compared with 55 (18%) of 307 adults (P < .05). Median follow-up was 35 and 17 months for the groups, respectively. Sentinel lymph node-positive pediatric patients did not recur, whereas 14 (25%) adults recurred within this period. Of the 55 adults, 5 (9.1%) have died of disease. Of the combined SLN-positive children and adolescents from the literature (total n = 25), only a single (4%) child recurred at 6 months. The difference in survival for adult and pediatric patients was significant. Conclusion: Pediatric patients have a higher incidence of SLN metastases than adults yet have a lower incidence of recurrence. Sentinel lymph node status does not predict early recurrence in pediatric patients with melanoma or atypical Spitz nevi.

abstract_id: PUBMED:17024556 Sentinel lymph node biopsy for atypical melanocytic lesions with spitzoid features. Introduction: Sentinel lymph node biopsy (SLNB) is routinely used as a staging procedure for melanoma; however, it may also assist in understanding the biology of atypical and controversial spitzoid melanocytic skin lesions. Methods: Five hundred and forty-nine sentinel lymph node excisions were performed over a 5-year period. Fourteen patients with controversial melanocytic lesions were identified and, of these, ten underwent SLNB. The histology of the primary skin lesion and corresponding sentinel lymph nodes were evaluated and correlated with outcome. Results: Thickness of the primary melanocytic lesion ranged from 1.22 to 4 mm. Fifty percent of patients were less than 17 years of age. Ten patients underwent SLNB and three cases (30%) displayed metastatic disease in the SLNB specimen. All three patients were under 17 years of age and all underwent completion axillary dissection. One completion axillary dissection had an additional node with metastasis on routine H&E and immunohistochemical staining. No capsular invasion was seen. All three cases with metastatic disease received adjuvant systemic therapy and remain disease free at 29, 49 and 57 months follow-up. All patients with a negative SLNB remain disease free at mean follow-up of 28.1 months (range: 13-40 months). Conclusion: Our results confirm that some of these spitzoid lesions metastasize to regional lymph nodes and SLNB is a valuable adjunct tool in staging these lesions. However, molecular studies and a prolonged follow-up are needed to determine whether these lesions, especially those occurring in children, are comparable to stage-matched overt melanoma in adults.

Answer: Sentinel lymph node biopsy (SLNB) has been proposed as a diagnostic method for estimating the malignant potential of atypical Spitz tumours. However, the utility of SLNB in this context is controversial. A systematic review did not show any prognostic benefit of SLNB for atypical Spitz tumours, as having a positive sentinel lymph node does not seem to predict a poorer outcome for these patients (PUBMED:24694641). This suggests that, particularly in the pediatric population, an initial approach of complete excision with clear margins and careful clinical follow-up might be prudent (PUBMED:24694641).
Moreover, while SLNB can be successfully and safely performed in pediatric patients for melanoma and atypical nevi, the prognostic information and therapeutic implications of SLNB results for children and adolescents remain unclear (PUBMED:15868590). The procedure may be more challenging in younger patients, who may require sedation, and it may be difficult to isolate the sentinel node (PUBMED:19040933). Additionally, the real biologic potential of atypical Spitz tumours often cannot be evaluated by sentinel lymph node status, and the procedure may produce morbidity and outcomes that are sometimes difficult to interpret (PUBMED:23210497). In cases of diagnostically controversial Spitzoid melanocytic tumors, SLNB may resolve the diagnostic controversy about the primary tumor, but there is also a risk for more diagnostic confusion, uncertainty, and errors (PUBMED:18724099). Furthermore, while pediatric patients have a higher incidence of SLN metastases than adults, SLN status does not predict early recurrence in pediatric patients with melanoma or atypical Spitz nevi (PUBMED:15991183). In conclusion, the current evidence suggests that SLNB may not be universally useful for atypical Spitz nevi, especially considering the lack of clear prognostic benefit and the potential for diagnostic confusion and morbidity associated with the procedure. It may be reserved for selected patients, and a longer follow-up period of surveillance might be useful to definitively resolve the question of its utility (PUBMED:23210497).
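As a back-of-the-envelope check on the pooled figures quoted above (68 of 147 atypical Spitz tumour patients with a positive sentinel node, PUBMED:23210497), the positivity rate and a Wilson confidence interval can be computed directly; this is an illustration, not an analysis reported by the authors.

```python
# Pooled SLN positivity rate in atypical Spitz tumours, with a 95% Wilson CI.
from statsmodels.stats.proportion import proportion_confint

positive, total = 68, 147
rate = positive / total
low, high = proportion_confint(positive, total, alpha=0.05, method="wilson")
print(f"SLN positivity: {rate:.1%} (95% CI {low:.1%}-{high:.1%})")
# Despite this high positivity rate, the reviewed patients remained disease-free,
# which is why SLN status is a poor prognostic marker in this setting.
```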
Instruction: Ferumoxtran-10 enhancement in orthotopic xenograft models of human brain tumors: an indirect marker of tumor proliferation? Abstracts:

abstract_id: PUBMED:17443290 Ferumoxtran-10 enhancement in orthotopic xenograft models of human brain tumors: an indirect marker of tumor proliferation? Purpose: Ferumoxtran-10 belongs to the Ultra Small Particles of Iron Oxide (USPIO) class of contrast agents and induces delayed tumor enhancement in brain tumors, reflecting the trapping of iron oxide particles by the macrophages and activated microglia. The aim of the study was to compare Ferumoxtran-10 contrast enhancement in four human high-grade glioma xenograft models (TCG2, TCG3, TCG4, and U87) with different growth profiles. Materials And Methods: Fragments of human malignant glioma were orthotopically xenografted into the brain of four groups of nude mice. All mice underwent an MRI examination 24 h after intravenous administration of Ferumoxtran-10 (axial T1-weighted SE MR images). The contrast enhancement observed in the different tumor types was measured and was correlated with in vivo tumor growth and with histological parameters, such as proliferative tumor cell fraction, apoptosis, vascular density, and Perls' staining score. Results: A good relationship was observed: (a) between tumor-to-background contrast and proliferative index, (b) between tumor-to-background contrast and tumor growth, and (c) between tumor-to-background contrast and Perls' staining score. The registered MR enhancement contrasts were not influenced by apoptotic index or vascular density in these experimental xenografts. Conclusions: Tumor contrast enhancement 24 h after intravenous Ferumoxtran-10 administration seems to be well correlated with tumor proliferative index and tumor growth and could be used as an indirect marker of tumor proliferation.

abstract_id: PUBMED:15377941 Macrophage imaging in central nervous system and in carotid atherosclerotic plaque using ultrasmall superparamagnetic iron oxide in magnetic resonance imaging. The long blood circulating time and the progressive macrophage uptake in inflammatory tissues of ultrasmall superparamagnetic iron oxide (USPIO) particles are 2 properties of major importance for magnetic resonance imaging (MRI) pathologic tissue characterization. This article reviews the proof of principle of applications such as imaging of carotid atherosclerotic plaque, stroke, brain tumor characterization, or multiple sclerosis. In the human carotid artery, USPIO accumulation in activated macrophages induced a focal drop in signal intensity compared with preinfusion MRI. The USPIO signal alterations observed in ischemic areas of stroke patients are probably related to the visualization of inflammatory macrophage recruitment into human brain infarction, since animal experiments in such models demonstrated the internalization of USPIO into the macrophages localized in these areas. In brain tumors, USPIO particles which do not pass the ruptured blood-brain barrier at early times postinjection can be used to assess tumoral microvascular heterogeneity. Twenty-four hours after injection, when the cellular phase of USPIO takes place, the USPIO tumoral contrast enhancement was higher in high-grade than in low-grade tumors. Several experimental studies and a pilot multiple sclerosis clinical trial in 10 patients have shown that USPIO contrast agents can reveal the presence of inflammatory multiple sclerosis lesions.
The enhancement with USPIO does not completely overlap with the gadolinium chelate enhancement. While the proof of concept that USPIO can visualize macrophage infiltrations has been confirmed in animals and patients in several applications (carotid atherosclerotic lesions, stroke, brain tumors and multiple sclerosis), larger prospective clinical studies are needed to demonstrate the clinical benefit of using USPIO as an MRI in vivo surrogate marker for brain inflammatory diseases.

abstract_id: PUBMED:18506183 Characterisation of tumour vasculature in mouse brain by USPIO contrast-enhanced MRI. To enhance the success rate of antiangiogenic therapies in the clinic, it is crucial to identify parameters for tumour angiogenesis that can predict response to these therapies. In brain tumours, one such parameter is vascular leakage, which is a response to tumour-derived vascular endothelial growth factor-A and can be measured by Gadolinium-DTPA (Gd-DTPA)-enhanced magnetic resonance imaging (MRI). However, as vascular permeability and angiogenesis are not strictly coupled, tumour blood volume may be another potentially important parameter. In this study, contrast-enhanced MR imaging was performed in three orthotopic mouse models for human brain tumours (angiogenic melanoma metastases and E34 and U87 human glioma xenografts) using both Gd-DTPA to detect vascular leakage and ultrasmall iron oxide particles (USPIO) to measure blood volume. Pixel-by-pixel maps of the enhancement in the transverse relaxation rates (ΔR2 and ΔR2*) after injection of USPIO provided an index proportional to the blood volume of the microvasculature and macrovasculature, respectively, for each tumour. The melanoma metastases were characterised by a blood volume and vessel leakage higher than both glioma xenografts. The U87 glioblastoma xenografts displayed higher permeability and blood volume in the rim than in the core. The E34 glioma xenografts were characterised by a relatively high blood volume, accompanied by only a moderate blood-brain barrier disruption. Delineation of the tumour was best assessed on post-USPIO gradient-echo images. These findings suggest that contrast-enhanced MR imaging using USPIOs and, in particular, ΔR2 and ΔR2* quantitation, provides important additional information about tumour vasculature.

abstract_id: PUBMED:24421639 Superparamagnetic iron oxide nanoparticles conjugated with epidermal growth factor (SPION-EGF) for targeting brain tumors. Superparamagnetic iron oxide nanoparticles (SPIONs) conjugated with recombinant human epidermal growth factor (SPION-EGF) were studied as a potential agent for magnetic resonance imaging contrast enhancement of malignant brain tumors. Synthesized conjugates were characterized by transmission electron microscopy, dynamic light scattering, and nuclear magnetic resonance relaxometry. The interaction of SPION-EGF conjugates with cells was analyzed in a C6 glioma cell culture. The distribution of the nanoparticles and their accumulation in tumors were assessed by magnetic resonance imaging in an orthotopic model of C6 gliomas. SPION-EGF nanosuspensions had the properties of a negative contrast agent with high coefficients of relaxation efficiency. In vitro studies of SPION-EGF nanoparticles showed high intracellular incorporation and the absence of a toxic influence on C6 cell viability and proliferation.
Intravenous administration of SPION-EGF conjugates in animals provided receptor-mediated targeted delivery across the blood-brain barrier and tumor retention of the nanoparticles; this was more efficient than with unconjugated SPIONs. The accumulation of conjugates in the glioma was revealed as hypointense zones on T2-weighted images with a twofold reduction in T2 relaxation time in comparison to unconjugated SPIONs (P<0.001). SPION-EGF conjugates provide targeted delivery and efficient magnetic resonance contrast enhancement of EGFR-overexpressing C6 gliomas.

abstract_id: PUBMED:26635474 Specific detection of CD133-positive tumor cells with iron oxide nanoparticles labeling using noninvasive molecular magnetic resonance imaging. Background: The use of ultrasmall superparamagnetic iron oxide (USPIO) nanoparticles to visualize cells has been applied clinically, showing the potential for monitoring cells in vivo with magnetic resonance imaging (MRI). USPIO conjugated with anti-CD133 antibodies (USPIO-CD133 Ab) that recognize the CD133 molecule, a cancer stem cell marker in a variety of cancers, was studied as a novel and potent agent for MRI contrast enhancement of tumor cells. Materials And Methods: Anti-CD133 antibodies were used to conjugate with USPIO via interaction of streptavidin and biotin for in vivo labeling of CD133-positive cells in xenografted tumors and N-ethyl-N-nitrosourea (ENU)-induced brain tumors. The specific binding of USPIO-CD133 Ab to CD133-positive tumor cells was subsequently detected by Prussian blue staining and MRI with T2-weighted, gradient echo and multiple echo recombined gradient echo images. In addition, the cellular toxicity of USPIO-CD133 Ab was determined by analyzing cell proliferation, apoptosis, and reactive oxygen species production. Results: USPIO-CD133 Ab specifically recognizes and labels CD133-positive cells in vitro, as validated using Prussian blue staining and MRI. The assays of cell proliferation, apoptosis, and reactive oxygen species production showed no significant differences in tumor cells with or without labeling of USPIO-CD133 Ab. In vivo imaging of CD133-positive cells was demonstrated by intravenous injection of USPIO-CD133 Ab in mice with HT29 xenografted tumors. The MRI of HT29 xenografts showed several clusters of hypointense regions that correlated with CD133 expression and Prussian blue staining for iron. In rat brain tumors induced by transplacental ENU mutagenesis, several clusters of hypointense zones were observed in CD133-expressing brain tumors by MRI and intravenously administered USPIO-CD133 Ab. Conclusion: Combination of USPIO-CD133 Ab and MRI is valuable in recognizing CD133-expressing tumor cells in vitro, extracellularly labeling for cell tracking and detecting CD133-expressing tumors in xenografted tumors as well as ENU-induced rat brain tumors.
A multiparametric MRI approach, incorporating native measurements and both Gd-DTPA (Magnevist) and ultrasmall superparamagnetic iron oxide (P904)-enhanced imaging, was used in combination with histogram and unsupervised cluster analysis using a k-means algorithm to examine the spatial distribution of vascular parameters, water diffusion characteristics and invasion in intracranially propagated rat RG2 gliomas and human MDA-MB-231 LM2-4 breast adenocarcinomas in mice. Both tumour models presented with higher ΔR1 (the change in longitudinal relaxation rate R1 induced by Gd-DTPA), fractional blood volume (fBV) and apparent diffusion coefficient than uninvolved regions of the brain. MDA-MB-231 LM2-4 tumours were less densely cellular than RG2 tumours and exhibited substantial local invasion, associated with oedema, whereas invasion in RG2 tumours was minimal. These additional features were reflected in the more heterogeneous appearance of MDA-MB-231 LM2-4 tumours on T2-weighted images and maps of functional MRI parameters. Unsupervised cluster analysis separated subregions with distinct functional properties; areas with a low fBV and relatively impermeable blood vessels (low ΔR1) were predominantly located at the tumour margins, regions of MDA-MB-231 LM2-4 tumours with relatively high levels of water diffusion and low vascular permeability and/or fBV corresponded to histologically identified regions of invasion and oedema, and areas of mismatch between vascular permeability and blood volume were identified. We demonstrate that dual contrast MRI and evaluation of tissue diffusion properties, coupled with cluster analysis, allows for the assessment of heterogeneity within invasive brain tumours and the designation of functionally diverse subregions that may provide more informative predictive biomarkers.

abstract_id: PUBMED:15488022 Imaging of iron oxide nanoparticles by MR and light microscopy in patients with malignant brain tumours. Objective: Ferumoxtran-10 (Combidex), a dextran-coated iron oxide nanoparticle, provides enhancement of intracranial tumours by magnetic resonance (MR) for more than 24 h and can be imaged histologically by iron staining. Our goal was to compare ferumoxtran imaging and histochemistry vs. gadolinium enhancement in malignant brain tumours on preoperative and postoperative MR. Methods: Seven patients with primary and metastatic malignant tumours underwent MR imaging with gadolinium and ferumoxtran both pre- and postoperatively. Normalized signal intensities on the ferumoxtran-enhanced scans were determined in representative regions of interest. Resected tissue from six ferumoxtran patients and from three patients who did not receive ferumoxtran was assessed for localization of iron in tumour and reactive brain. Results: All malignant tumours (all of which enhanced by gadolinium MR) showed ferumoxtran accumulation with T1 and T2 signal changes, even using a 0.15 T intraoperative MR unit in one patient. Iron staining was predominantly in reactive cells (reactive astrocytes and macrophages) and not tumour cells. In five of the seven patients, including two patients who showed additional lesions, areas enhancing with ferumoxtran but not with gadolinium were observed. Comparison of the pre- and postoperative MR revealed residual ferumoxtran-enhancing areas in four of seven cases. Conclusion: In malignant tumours, ferumoxtran may show areas of enhancement, even with a 0.15 T intraoperative MR, that do not enhance with gadolinium.
Ferumoxtran-enhancing lesions have persistent increased T1 signal intensity for 2-5 days, which may provide advantages over gadolinium for postoperative imaging. Histochemistry for iron shows uptake of ferumoxtran in reactive cells (astrocytes and macrophages) rather than tumour cells.

abstract_id: PUBMED:23661610 Cellular imaging and texture analysis distinguish differences in cellular dynamics in mouse brain tumors. Purpose: The heterogeneous tumor cell population and dynamic microenvironment within a tumor lead to regional variations in cell proliferation, migration, and differentiation. In this work, MRI and optical projection tomography were used to examine and compare the redistribution of a cellular label in two mouse glioma models. Methods: GL261 and 4C8 glioma cells labeled with iron oxide particles or with a fluorescent probe were injected into the brains of syngeneic mice and allowed to develop into ∼10-mm³ tumors. Texture analysis was used to quantitatively describe and compare the label distribution patterns in the two tumor types. Results: The label remained predominantly in the tumor core in GL261 tumors but became more randomly distributed throughout the tumor volume in 4C8 tumors. Histologically, GL261 tumors displayed a more invasive, aggressive phenotype, although the distribution of mitotic cells in the two tumors was similar. Conclusion: The redistribution of a cellular label during tumor growth is characteristic of a tumor model. The label distribution map reflects more than simple differences in cell proliferation and is likely influenced by differences in the tumor microenvironment.

abstract_id: PUBMED:27723161 Difference in the intratumoral distributions of extracellular-fluid and intravascular MR contrast agents in glioblastoma growth. Contrast enhancement by an extracellular-fluid contrast agent (CA) (Gd-DOTA) depends primarily on the blood-brain-barrier permeability (bp), and transverse-relaxation change caused by intravascular T2 CA (superparamagnetic iron oxide nanoparticles, SPIONs) is closely associated with the blood volume (BV). Pharmacokinetic (PK) vascular characterization based on single-CA-using dynamic contrast-enhanced MRI (DCE-MRI) has shown significant measurement variation according to the molecular size of the CA. Based on this recognition, this study used a dual injection of Gd-DOTA and SPIONs for tracing the changes of bp and BV in C6 glioma growth (Days 1 and 7 after the tumor volume reached 2 mL). bp was quantified according to the non-PK parameters of Gd-DOTA-using DCE-MRI (wash-in rate, maximum enhancement ratio and initial area under the enhancement curve (IAUC)). BV was estimated by SPION-induced ΔR2* and ΔR2. With validated measurement reliability of all the parameters (coefficients of variation ≤10%), dual-contrast MRI demonstrated a different region-oriented distribution between Gd-DOTA and SPIONs within a tumor as follows: (a) the BV increased stepwise from the tumor center to the periphery; (b) the tumor periphery maintained the augmented BV to support continuous tumor expansion from Day 1 to Day 7; (c) the internal tumor area underwent significant vascular shrinkage (i.e. decreased ΔR2* and ΔR2) as the tumor increased in size; (d) the tumor center showed greater bp-indicating parameters, i.e.
wash-in rate, maximum enhancement ratio and IAUC, than the periphery on both Days 1 and 7 and (e) the tumor center showed a greater increase of bp than the tumor periphery in tumor growth, which is suggested to support tumor viability when the blood supply is insufficient. In the MRI-histologic correlation, a prominent BV increase in the tumor periphery seen in MRI was verified with increased fluorescein isothiocyanate-dextran signals and up-regulated immunoreactivity of CD31-VEGFR. In conclusion, the spatiotemporal alterations of BV and bp in glioblastoma growth, i.e. augmented BV in the tumor periphery and increased bp in the center, can be sufficiently evaluated by MRI with dual injection of extracellular-fluid Gd chelates and intravascular SPION.

abstract_id: PUBMED:15891164 Single-dose contrast agent for intraoperative MR imaging of intrinsic brain tumors by using ferumoxtran-10. Background And Purpose: Intraoperative MR imaging (IMRI) has advantages over conventional framed and frameless techniques. IMRI, however, also has some drawbacks, especially related to interpretation of gadolinium-enhanced intraoperative imaging resulting from surgically induced blood-brain barrier injury, vascular changes, and hemorrhage. Ultra-small superparamagnetic iron particles like ferumoxtran-10 have a long plasma half-life and are trapped by reactive cells within the tumor. These trapped particles provide a method to demonstrate enhancing lesions without the artifact of repeat gadolinium administration in the face of blood-brain barrier and vascular injury. Methods: We present a review of the literature and the cases of two patients who underwent surgery in which IMRI with ferumoxtran-10 was used. Results: Ultra-small superparamagnetic iron particles represent a method to demonstrate enhancing intrinsic brain tumors without the drawbacks of intraoperative gadolinium enhancement. These lesions appear even on low-field strength IMRI. Ferumoxtran-10, administered preoperatively, provides a stable imaging marker, even after surgical manipulation of the brain. Conclusion: Ferumoxtran-10 provides a way to lessen artifactual enhancement during IMRI related to the administration of gadolinium.

Answer: Yes, Ferumoxtran-10 enhancement in orthotopic xenograft models of human brain tumors can be used as an indirect marker of tumor proliferation. A study comparing Ferumoxtran-10 contrast enhancement in four human high-grade glioma xenograft models with different growth profiles found a good relationship between tumor-to-background contrast and proliferative index, as well as between tumor-to-background contrast and tumor growth. The contrast enhancement observed 24 hours after intravenous administration of Ferumoxtran-10 was well correlated with the tumor proliferative index and tumor growth, suggesting its potential use as an indirect marker of tumor proliferation (PUBMED:17443290).
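A hedged sketch of the two quantities the answer links: a simple tumor-to-background contrast computed from ROI means, and its rank correlation with a proliferative index. The numbers are synthetic placeholders (one value per hypothetical xenograft model), not data from PUBMED:17443290.

```python
# Illustrative only: synthetic ROI values, not measurements from the study.
import numpy as np
from scipy import stats

def tumor_to_background_contrast(tumor_roi, background_roi):
    """(S_tumor - S_background) / S_background from mean ROI signal intensities."""
    s_t, s_b = np.mean(tumor_roi), np.mean(background_roi)
    return (s_t - s_b) / s_b

rng = np.random.default_rng(2)
print(tumor_to_background_contrast(rng.normal(130, 5, 50), rng.normal(100, 5, 50)))

# Hypothetical per-model proliferative index (%) and measured contrast.
ki67 = np.array([12.0, 25.0, 34.0, 48.0])
contrast = 0.01 * ki67 + rng.normal(0, 0.03, 4)  # built-in positive trend
rho, p = stats.spearmanr(ki67, contrast)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```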
Instruction: Is lead dust within nuclear medicine departments a hazard to pediatric patients? Abstracts:

abstract_id: PUBMED:19692455 Is lead dust within nuclear medicine departments a hazard to pediatric patients? Unlabelled: Because of the penetrating ability of the radiation used in nuclear medicine, metallic lead is widely used as radiation shielding. However, this shielding may present an insidious health hazard because of the dust that is readily removed from the surfaces of lead objects. The lead dust may become airborne, contaminate floors and other nearby surfaces, and be inadvertently inhaled or ingested by patients. We determined if the quantity of lead dust encountered within nuclear medicine departments exceeded Environmental Protection Agency (EPA) standards. Methods: For lead dust quantification, professional lead test kits were used to sample fifteen 1-ft² sections of different surfaces within the department. Four samples were collected once per week from each site. The samples were then submitted to a National Lead Laboratory-accredited program for a total lead measurement. Lead contamination (μg/ft²) for each of the 60 samples was compared with the EPA standards for lead dust. Results: Lead contamination was present at 6 of the 15 sites, and of 60 samples, 18 exceeded the EPA standard of 50 μg/ft². Conclusion: Lead contamination is present within nuclear medicine departments, and corrective measures should be considered when dealing with pediatric patients. A larger series needs to be conducted to confirm these findings.

abstract_id: PUBMED:15238335 The influence of exterior dust and soil lead on interior dust lead levels in housing that had undergone lead-based paint hazard control. To aid in understanding the contribution of exterior dust/soil lead to postintervention interior dust lead, a subset of housing from the HUD Lead-Based Paint Hazard Control Grant Program Evaluation was selected for study. Housing from 12 state and local governments was included. Exterior entry and street dust samples were obtained by a vacuum method, and soil samples were building perimeter core composites. Interior dust wipe lead data (μg/ft²) and paint lead data (mg/cm²) were also available for each of the dwelling units and included in the modeling. Results from 541 dwelling units revealed a wide range of exterior dust and soil lead levels, within and between grantees. Minimum and maximum geometric mean lead levels, by grantee, were 126 and 14,400 μg/ft² for exterior entry dust; 325 and 4610 μg/ft² for street dust; and, for soil concentration, 383 and 2640 ppm. Geometric mean exterior entry dust lead concentration (1641 ppm) was almost four times as high as street dust lead concentration (431 ppm), suggesting that lead dust near housing was often a source of street dust lead. Geometric mean exterior entry dust lead loading was more than four times as high as window trough dust lead loading and more than an order of magnitude higher than interior entry dust lead loading. Statistical modeling revealed pathways from exterior entry dust lead loading to loadings on interior entryway floors, other interior floors, and windowsills. Paint lead was found to influence exterior entry dust lead. Results of this study show that housing where soil lead hazard control activities had been performed had lower postintervention exterior entry, interior entry floor, windowsills, and other floor dust loading levels. Soil was not present for almost half the buildings.
Statistical analysis revealed that exterior strategy influenced soil lead concentration, and soil lead concentration influenced street dust lead loading. This study represents one of the few where an impact of soil treatments on dust lead levels within the housing has been documented and may represent the first where an impact on exterior entry dust lead has been found. The inclusion of measures to mitigate the role of exterior sources in lead hazard control programs needs consideration.

abstract_id: PUBMED:12146704 Exposure of persons accompanying patients in nuclear medicine departments. A patient who receives a radiopharmaceutical remains radioactive for hours or even days and so represents a potential hazard to other individuals. Thus, in nuclear medicine departments, apart from the personnel who are occupationally exposed to ionizing radiation, protection rules must be observed for members of the public, in particular persons accompanying patients. The purpose of this work was to evaluate the absorbed dose levels to which persons accompanying patients are exposed. Integral dose studies were carried out in three nuclear medicine departments in Brazil. Dose levels are presented, and issues are discussed with regard to the presence of members of the public in waiting areas of such departments.

abstract_id: PUBMED:12382785 Investigation of maximum doses absorbed by people accompanying patients in nuclear medicine departments. In nuclear medicine, radioisotopes are bound to several compounds (called radiopharmaceuticals) for use in diagnostic and therapeutic applications. These unsealed sources are administered in various forms to patients, who remain radioactive for hours or days, and represent a source of potential radiation exposure for others. Thus, in nuclear medicine departments, radiation protection of workers and members of the public must take this exposure into account. In this context, the aim of this work was the investigation of maximum dose levels to which people accompanying patients are exposed. For monitoring, thermoluminescence dosemeters were given to 380 people in three nuclear medicine departments in Brazil. Exposure results are given, and issues regarding length of stay of members of the public in those departments are discussed.

abstract_id: PUBMED:10811573 A pilot study examining changes in dust lead loading on walls and ceilings after lead hazard control interventions. The U.S. Department of Housing and Urban Development (HUD) guidelines on lead hazard control instruct contractors to clean floors, windows, walls, ceilings, and other horizontal surfaces to remove lead-contaminated dust and debris after lead interventions are conducted. This dust removal activity adds costs to each project. The need to clean floors and windows is well documented in the HUD guidelines. However, there is substantially less documentation to support the recommendation to clean walls and ceilings. We examined whether it is necessary to clean walls and ceilings after lead hazard control (LHC) interventions by comparing dust lead loadings measured on these surfaces before an LHC intervention to dust lead loadings after the intervention. Twenty-two dwelling units undergoing substantial LHC measures consistent with the HUD guidelines were enrolled in the study. There was a significant increase in dust lead loading on walls and ceilings between the pre- and postintervention measurements. The change in wall dust lead loading was substantial and created potentially harmful lead exposures.
Although statistically significant, the change in ceiling dust lead loading was minimal and the postintervention dust lead loadings were far below the existing federal floor dust lead clearance standard. These results strongly support the recommendations in the HUD guidelines to clean walls after LHC interventions and do not provide sufficient justification to alter the current recommendation to clean ceilings after lead work. abstract_id: PUBMED:32462398 Impact of the COVID-19 pandemic in nuclear medicine departments: preliminary report of the first international survey. Purpose: The coronavirus disease 2019 (COVID-19) pandemic is challenging the availability of hospital resources worldwide. The Young Group of the Italian Association of Nuclear Medicine (AIMN) developed the first international survey to evaluate the impact of COVID-19 in nuclear medicine (NM). The aim of this study was to perform a preliminary report of the ongoing survey. Methods: A questionnaire of thirty questions was prepared for all NM professionals addressing three main issues: (1) new scheduling praxes for NM diagnostic and therapeutic procedures, (2) assistance of patients with diagnosed or suspected COVID-19, and (3) prevention of COVID-19 spreading in the departments. An invitation to the survey was sent to the corresponding authors of NM scientific papers indexed in SCOPUS in 2019. Personal data were analysed per individual responder. Organisation data were evaluated per single department. Results: Two hundred and ninety-six individual responders from 220 departments were evaluated. Most of the responders were from Europe (199/296, 67%). Nearly all departments had already changed their scheduling praxes due to the pandemic (213/220, 97%). In most departments, scheduled diagnostic and therapeutic procedures were allowed but quantitatively reduced (112/220, 51%). A significant reduction of diagnostic and therapeutic procedures (more than 20%) affected 198/220 (90%) and 158/220 (72%) departments, respectively. Incidental COVID-19 signs in NM exams occurred in 106/220 departments (48%). Few departments were closed or shifted to assist patients with COVID-19 (36/220, 16%). Most of the responders thought that the pandemic would not permanently change the work of NM departments in the future (189/296, 64%). Conclusions: According to this preliminary report of the first international survey, COVID-19 heavily impacted NM departments and professionals. New praxes for NM procedures, assistance, and prevention of COVID-19 have been applied during the pandemic. abstract_id: PUBMED:15673092 Residential dust lead loading immediately after intervention in the HUD lead hazard control grant program. At the conclusion of most lead hazard control interventions in federally assisted housing built before 1978, a certified clearance examiner must verify that the lead hazard control work was completed as specified and that the area is safe for residents, a process referred to as clearance. This study explores the experience of 14 grantees participating in the Evaluation of the HUD Lead-Based Paint Hazard Control Grant Program in passing clearance. The study also considers how preintervention lead levels (interior dust and paint), building condition/characteristics, and the scope of work influenced initial clearance dust lead loadings and clearance rates.
At the initial clearance inspection, 80% of the 2682 dwellings achieved grantee-specific clearance standards on windowsills, window troughs (500 µg/ft² and 800 µg/ft², respectively), and floors (80, 100, or 200 µg/ft² depending on state/local regulations at the dates of clearance in the mid-1990s), with individual grantee success rates ranging from 63 to 100%. Dwellings that failed initial clearance required an average of 1.13 retests to clear. The high level of success at clearance demonstrates that following methods for work site containment, lead hazard control, and cleaning similar to those recommended in the HUD Guidelines for the Evaluation and Control of Lead-Based Paint in Housing is effective. The most common lead hazard control intervention was window abatement accompanied by the repair or abatement of all other deteriorated lead-based paint (56% of dwellings). An additional 5% of dwellings were fully abated, and 29% had lower-intensity interventions. Interventions including window replacement are recommended to reduce dust lead loading on windowsills and troughs at clearance, but lower-level interventions such as full paint stabilization are just as good at reducing floor dust lead loadings. Whatever lead hazard control activities are selected, the surfaces of interest should be in good condition at clearance.
Conclusion: Both diagnostic and therapeutic nuclear medicine procedures declined precipitously, with countries worldwide being affected by the pandemic to a similar degree. Countries that were in the postpeak phase of the pandemic when they responded to the survey, such as South Korea and Singapore, reported a less pronounced impact on nuclear medicine services; however, the overall results of the survey showed that nuclear medicine services worldwide had been significantly impacted. In relation to staff health, 15% of respondents experienced COVID-19 infections within their own departments. abstract_id: PUBMED:24184749 Source contributions of lead in residential floor dust and within-home variability of dust lead loading. Evidence of the impact of exposure to low levels of lead on children's health is increasing. Residential floor dust is the assumed origin of lead exposure in young children. In this study, we estimate the contribution of different lead sources to household interior floor dust contamination. We also estimate the within-home variability of interior floor dust lead loadings. A multilevel model was developed based on data collected in a French survey in 2008-2009 (484 housing units, 1834 rooms). Missing data were handled by multiple imputation using chained equations. The intra-home correlation between interior floor log dust lead loadings was approximately 0.6. Dust lead from the landing of an apartment, mostly originating outside the building, was the major contributor to interior floor dust lead. Secondary contributors included the lead-based paint on exterior railings, track-in of the exterior soil of the children's play area into the dwelling, smoking inside the home, demolition of nearby old buildings, and sites of pollution in the vicinity. Interior lead-based paint contaminated interior floor dust only in old and non-renovated dwellings. To reduce interior floor dust lead levels in the general population of dwellings, common areas should be maintained, and track-in from the outside should be limited as much as possible. abstract_id: PUBMED:25191111 Select the optimized effective dose to reduce nuclear radiations in pediatric nuclear medicine. Many techniques and research models for calculating and reducing the radiation dose in pediatric nuclear medicine procedures have been developed and reported in recent years. However, most of those models either utilized simple shapes to represent the organs or used more realistic models to estimate the dose applied to pediatric patients. The former are too simple to provide accurate estimates, and the latter involve intensive, complex calculations. In this study, a simple but practical model is developed to enable physicians to easily and quickly calculate and select the average optimal effective dose for a pediatric patient's given age and body size. This model is built based on a research result reported by Frederic Fahey et al., and it can be easily implemented in most common pediatric nuclear medicine procedures. This is the first study to use a fuzzy inference system to calculate the optimal effective dose in pediatric nuclear medicine. Answer: Yes, lead dust within nuclear medicine departments can be a hazard to pediatric patients.
A study found that lead contamination was present at 6 of the 15 sites tested within a nuclear medicine department, and out of 60 samples, 18 exceeded the EPA standard of 50 micrograms per square foot for lead dust. This indicates that lead dust levels encountered within some nuclear medicine departments can exceed safety standards set by the EPA, which could pose a health risk to pediatric patients who may inadvertently inhale or ingest contaminated dust (PUBMED:19692455). Therefore, it is important for nuclear medicine departments to consider corrective measures to address lead contamination, especially when dealing with pediatric patients.
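As a side note on the arithmetic behind this answer: the 18-of-60 exceedance figure is a simple proportion, and a confidence interval around it conveys how uncertain such a small sample is. Below is a minimal Python sketch, assuming only the counts reported in PUBMED:19692455; the Wilson score interval is our own addition, not a figure from the abstract.

    import math

    def wilson_ci(k, n, z=1.96):
        # Wilson score interval for a binomial proportion k/n.
        p = k / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    lo, hi = wilson_ci(18, 60)
    print(f"exceedance = {18/60:.0%}, 95% CI {lo:.0%}-{hi:.0%}")  # 30%, roughly 20%-43%

Even the lower bound of that interval sits well above zero, which supports reading the contamination above the EPA standard as a real finding rather than sampling noise, while the interval's width reinforces the abstract's call for a larger series.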
Instruction: Does warming up improve surgical outcome in total hip arthroplasty? Abstracts: abstract_id: PUBMED:26289764 Does warming up improve surgical outcome in total hip arthroplasty? Purpose: Warming up prior to surgery has been shown to enhance surgeons' performance during laparoscopic procedures. It is unknown whether or not warming up prior to total hip arthroplasty (THA) will have a similar effect on surgical outcome. Methods: We retrospectively evaluated the effect of warming up on the intraoperative outcome of 82 patients who underwent a primary THA with identical implants. The patients were divided equally into two groups. Group 1 (first case of the day) served as the "warm-up procedure," while Group 2 (second case of the day) was the "post-warm-up procedure." Results: We found no statistically significant difference when we compared cup abduction and anteversion angles, femoral offset, postoperative LLD, stem alignment, hip center of rotation, and femoral stem canal fill between the two groups (p > 0.05). Conclusion: Warming up prior to performing surgery does not make a difference for primary THA when performed by an experienced surgeon. However, these results may not reflect its effect on procedures that require fine motor skills or are performed by an orthopedic trainee or a less experienced surgeon. abstract_id: PUBMED:37500931 Analysis of the effects of intraoperative warming devices on surgical site infection in elective hip arthroplasty using a large nationwide database. Introduction: The aim of our study was to analyze the association of usage and type of warming device with the risk of surgical site infection (SSI) in patients who underwent hip arthroplasty, and to analyze the factors that increase the risk of SSI if the warming device is not used. Materials And Methods: This retrospective cross-sectional study identified subjects from the data of the "Evaluation of the Appropriate Use of Prophylactic Antibiotics". Included patients were defined as those who underwent elective unilateral hip hemiarthroplasty or total hip arthroplasty (THA). Patients were classified into three groups: no intraoperative warming device, forced-air warming devices, and devices using conduction. Multiple logistic regression analysis was conducted to estimate adjusted odds ratios (aORs) and 95% confidence intervals (CIs) to assess the association between warming devices and SSI. Results: A total of 3945 patients met the inclusion criteria. Compared to those who received an intraoperative warming device, the odds of developing SSI were 1.9 times higher in those who did not receive intraoperative warming devices (aOR 1.9; 95% CI 1.1-3.6). The risk of SSI was 2.2 times higher with forced-air warming devices compared to devices using conduction, but this difference was not statistically significant (aOR 2.2; 95% CI 0.7-6.8). The risk of SSI increased in males (aOR 2.8; 95% CI 1.1-7.2), in patients under 70 years of age (aOR 4.4; 95% CI 1.6-10.4), in patients with a Charlson comorbidity index of 2 or higher (aOR 3.3; 95% CI 1.3-8.7), and in patients who underwent THA (aOR 3.8; 95% CI 1.7-8.3) when intraoperative warming devices were not used. Conclusions: The use of intraoperative active warming devices is highly recommended to prevent SSI during elective hip arthroplasty. In particular, male patients younger than 70 years, those with a high CCI, and those undergoing THA are at significantly increased risk of SSI if intraoperative active warming devices are not used.
Intraoperative warming devices using conduction are likely superior to forced-air warming devices, but further studies are needed to confirm this. abstract_id: PUBMED:31323107 Relationship Between Functional Anatomy of the Hip and Surgical Approaches in Total Hip Arthroplasty. Multiple surgical approaches have been used successfully for total hip arthroplasty. Minimally invasive surgery, defined by the length of the incision, has been associated with less blood loss and shorter length of stay compared with conventional total hip arthroplasty. Differences in early functional outcomes, as well as the risk of early and long-term complications, have been observed between different anatomic surgical approaches. However, no single surgical approach has been established that achieves both the fastest functional recovery and the lowest rate of complications after total hip arthroplasty. [Orthopedics. 2019; 42(4):e356-e363.]. abstract_id: PUBMED:29310681 Comparison of outcome measures and complication rates following three different approaches for primary total hip arthroplasty: a pragmatic randomised controlled trial. Background: Total hip arthroplasty is one of the most commonly performed surgical procedures worldwide. There are a number of surgical approaches for total hip arthroplasty, and no high-level evidence supports one approach over the others. Each approach has its unique benefits and drawbacks. This trial aims to directly compare the three most common surgical approaches for total hip arthroplasty. Methods/design: This is a single-centre study conducted at Western Health, Melbourne, Australia, a large metropolitan centre. It is a pragmatic, parallel three-arm, randomised controlled trial. Sample size will be 243 participants (81 in each group). Randomisation will be secure, web-based and managed by an independent statistician. Patients and the research team will be blinded pre-operatively, but not post-operatively. Intervention will be either the direct anterior, lateral or posterior approach for total hip arthroplasty, and the three arms will be directly compared. Participants will be aged over 18 years, able to provide informed consent and recruited from our outpatients. Patients who are having revision surgery or have indications for hip replacement other than osteoarthritis (i.e., fracture, malignancy, developmental dysplasia) will be excluded from the trial. The Oxford Hip Score will be determined for patients pre-operatively and at 6 weeks and 6, 12 and 24 months post-operatively. The Oxford Hip Score at 24 months will be the primary outcome measure. Secondary outcome measures will be dislocation, infection, intraoperative and peri-prosthetic fracture rate, length of hospital stay and pain level, reported using a visual analogue scale. Discussion: Many studies have evaluated approaches for total hip arthroplasty, and arthroplasty registries worldwide are now collecting these data. However, no study to date has compared these three common approaches directly in a randomised fashion. No trial has used patient-reported outcome measures to evaluate success. This pragmatic study aims to identify differences in patient perception of total hip arthroplasty depending on surgical approach. Trial Registration: Australian New Zealand Clinical Trials Registry, ACTRN12617000272392. Registered on 22 February 2017. abstract_id: PUBMED:25553602 Selecting the surgical approach for revision total hip arthroplasty. Selecting the approach for revision total hip arthroplasty is a crucial step in pre-operative planning.
Whether the surgical objectives can be reached via a conventional approach or require a specific approach must be determined. The best approach depends on multiple factors including the reason for revision, the patient's characteristics, implants requiring removal, previous approach, soft tissue and bone lesions, and the surgeon's level of experience. These factors are discussed herein, as well as the potential and limitations of conventional approaches and the indications for specific approaches. abstract_id: PUBMED:27480604 Long-term outcome of total hip arthroplasty in patients with haemophilia. Introduction: Besides the target joints (elbow, knee and ankle), the hip is one of the commonly affected joints in haemophilic arthropathy. Hip arthroplasty is the therapy of choice after failure of conservative treatment. There are only limited data on long-term results after primary total hip arthroplasty (THA). Aim: The aim of this retrospective study was to analyse clinical outcome and complication rate after total hip replacement in patients with severe haemophilic arthropathy. Methods: Forty-three patients with haemophilia (PWH), one patient with von Willebrand disease, and one patient with a factor VII deficiency, undergoing 49 total hip arthroplasties, were evaluated in this retrospective study. Harris hip score (HHS), range of motion (ROM), pain status (visual analogue scale, VAS), complication rate and patient satisfaction were assessed at a mean follow-up of 11.5 years (range: 3-32). Results: HHS, ROM and VAS improved significantly, combined with high patient satisfaction. In total, three (6.1%) periprosthetic infections and five (10.2%) aseptic implant loosenings occurred after THA, leading to revision arthroplasty. In two cases (4.1%), a pseudotumour, and in one case (2.0%), a periarticular ossification had to be resected after THA. Conclusion: Total hip replacement in PWH leads to a significant increase in function, a reduction in pain and high satisfaction. Due to the relatively high complication rate (infections and aseptic loosening) compared to patients without haemophilia, an individual assessment of the risk-benefit ratio from a surgical and haemostaseological point of view is needed. abstract_id: PUBMED:34774689 Satisfaction With the Process vs Outcome of Care in Total Hip and Knee Arthroplasty. Background: Patient satisfaction, with both process of care and outcome of care, is critical for measuring the quality and value of elective procedures such as arthroplasty. The purpose of this study is to evaluate the correlation between early postoperative satisfaction with the process of care and 2-year satisfaction with the outcome of care after total hip arthroplasty (THA) and total knee arthroplasty (TKA). Methods: We retrospectively analyzed data from institutional arthroplasty registries. Satisfaction with the outcome of care was measured on a scale of 0-100 by a previously validated questionnaire administered 2 years postoperatively. Patient satisfaction with the process of care was measured by the Press Ganey (PG) inpatient survey, also scored 0-100. We examined the correlation between these 2 measures of satisfaction in patients who underwent primary THA or TKA. Results: In total, 721 TKA patients and 760 THA patients underwent surgery and completed both the PG survey and the 2-year satisfaction questionnaire. The mean age was 65.1 years, the mean body mass index was 28.8, and 56% were female. The mean PG survey score for the entire cohort was 95.6. The mean 2-year satisfaction score was 90.3.
The Spearman correlation coefficient between the PG survey and the 2-year satisfaction survey was 0.23 for TKA patients (P < .001) and 0.13 for THA patients (P < .001). Conclusion: We found a weak correlation between the measurement of satisfaction with the process of care surrounding hip and knee arthroplasty using the PG survey and the measurement of satisfaction with the outcome of care after arthroplasty using a validated 2-year satisfaction instrument. abstract_id: PUBMED:32776138 Outcome of total hip and total knee arthroplasty and vitamin D homeostasis. Background: Low levels of vitamin D are common in patients undergoing total hip arthroplasty (THA) and total knee arthroplasty (TKA), but it is still unclear whether the outcome of such surgery is related to vitamin D levels. Sources Of Data: Publicly available electronic databases were systematically reviewed to update knowledge about the role of vitamin D in THA and TKA patients, and to appraise the correlation between vitamin D levels before surgery and post-surgical outcomes. Eight studies matched the inclusion and exclusion criteria and were included in our review: six employed prospective observational designs, and two reported retrospective database interrogations. Areas Of Agreement: Patients undergoing THA and TKA have a high prevalence of hypovitaminosis D. Five of eight studies (62.5%) reported a statistically significant association between low vitamin D status and worse outcomes after THA and TKA. Areas Of Controversy: Different explanations have been proposed for vitamin D insufficiency, but some authors have questioned the correctness of the methods used to measure the vitamin D levels. Growing Points: A trial focusing on the role of vitamin D supplementation on safety and outcomes of patients undergoing THA and TKA with low vitamin D status is ongoing. Areas Timely For Developing Research: Further studies should explore the possible causal relationship between vitamin D status and outcomes of THA and TKA, and verify the laboratory methods used to analyse it. abstract_id: PUBMED:38467458 Direct anterior approach complications for total hip arthroplasty. The direct anterior approach (DAA) for total hip arthroplasty has been popularized in the last decade as a minimally invasive approach used by many surgeons, including the authors, to preserve the integrity of muscle groups and their insertions and the dynamic hip stability, resulting in less surgical trauma, a faster recovery and less postoperative pain. This surgical approach is not without a variety of complications and pitfalls. This review aims to identify any potential drawbacks and challenges associated with the DAA in THA and guide surgeons on minimizing and avoiding them. abstract_id: PUBMED:38053213 Editorial: factors influencing the outcome of total hip and knee arthroplasty. Background: Total hip and knee arthroplasty for end-stage arthritis are among the most cost-effective interventions available in all of medicine. Success of hip and knee arthroplasties is not universal: approximately one in ten patients is not satisfied with their arthroplasty, and 10 to 34% of patients have an unfavourable long-term pain outcome. The aim of this edition of Arthroplasty was to identify factors associated with: (1) poor joint-specific outcome, (2) poor health-related quality-of-life outcome and (3) dissatisfaction following total hip and knee arthroplasty.
Methods: The scope was deliberately broad to identify factors (known and unknown) that influence the outcome of total hip and knee arthroplasty, and of 40 submissions, eight were selected for this special edition after peer review. Results: Many of the included studies reported subjective patient-reported outcome measures as their key outcomes, but other, objective measures such as muscle mass, the timed Up-and-Go test, kneeling ability and postoperative pain are also featured. Some studies involved a focus on enhanced recovery after surgery (ERAS) principles and how to optimise pre-rehabilitation and reduce length of hospital stay, readmission and reoperation. The effect of metal allergy and drain usage on outcomes is also explored. A variety of methodologies have been used, including one randomised controlled trial, some machine learning, and three systematic reviews investigating the effect of metal allergy on outcomes, associations with postoperative pain, and the effect of patella resurfacing. Conclusion: This special edition has advanced our knowledge of factors influencing the outcome of hip and knee arthroplasty but has also identified several areas of research that need to be addressed to improve the outcomes of our patients. Answer: The available abstracts do not provide a consensus on whether warming up improves surgical outcomes in total hip arthroplasty (THA). One study specifically investigated the effect of warming up prior to THA and found no statistically significant difference in various intraoperative outcomes between the first case of the day (considered the "warm-up procedure") and the second case of the day (the "post-warm-up procedure") when performed by an experienced surgeon. This suggests that warming up does not make a difference for primary THA in terms of the surgical precision metrics evaluated, such as cup abduction and anteversion angles, femoral offset, postoperative leg length discrepancy (LLD), stem alignment, hip center of rotation, and femoral stem canal fill. However, the study also notes that these results may not reflect the effect of warming up on procedures that require fine motor skills or those performed by orthopedic trainees or less experienced surgeons (PUBMED:26289764). Other abstracts provided information on various factors that can influence the outcome of THA, such as the use of intraoperative warming devices to prevent surgical site infection (SSI) (PUBMED:37500931), the relationship between functional anatomy and surgical approaches (PUBMED:31323107), and the comparison of outcome measures and complication rates following different approaches for THA (PUBMED:29310681). However, none of these studies directly addressed the impact of a surgeon's warm-up routine on the surgical outcome of THA. In conclusion, based on the abstract provided (PUBMED:26289764), warming up prior to performing THA does not appear to improve surgical outcomes when the procedure is performed by an experienced surgeon. Further research may be needed to determine if warming up has benefits for less experienced surgeons or in procedures that require more fine motor skills.
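A note on reading the adjusted odds ratios quoted above (aOR 1.9, 95% CI 1.1-3.6, and so on): these come from multiple logistic regression and cannot be reproduced from the abstracts alone. As a hedged illustration of what the unadjusted version of such a measure looks like, here is a short Python sketch; the 2x2 counts are entirely hypothetical and chosen only so the crude odds ratio lands near 1.9.

    import math

    def odds_ratio(a, b, c, d, z=1.96):
        # a/b: cases/non-cases without a warming device; c/d: cases/non-cases with one.
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lo, hi = (math.exp(math.log(or_) + s * z * se_log) for s in (-1, 1))
        return or_, lo, hi

    # Hypothetical counts -- not taken from PUBMED:37500931.
    or_, lo, hi = odds_ratio(40, 460, 60, 1340)
    print(f"crude OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")

An adjusted OR additionally conditions on covariates such as age, sex and comorbidity, which is why the published 1.9 cannot be recovered from any single table like this.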
Instruction: Randomized, controlled trial on tracheal colonization of ventilated infants: can gravity prevent ventilator-associated pneumonia? Abstracts: abstract_id: PUBMED:18829800 Randomized, controlled trial on tracheal colonization of ventilated infants: can gravity prevent ventilator-associated pneumonia? Objective: The goal was to test the hypothesis that intubated infants positioned on their sides would be less likely to contract bacterial colonization in their tracheae, compared with those positioned supine. Methods: We conducted a prospective, randomized, controlled trial with 60 intubated infants; 30 infants were positioned supine (supine group), and 30 infants were maintained in the lateral position (lateral group). Tracheal aspirates were cultured and bacterial colony counts were recorded after 48 hours and after 5 days of mechanical ventilation. Results: After 2 days, the numbers of positive tracheal cultures in the supine group (67%) and in the lateral group (47%) showed no statistical difference. After 5 days of mechanical ventilation, tracheal cultures differed significantly between groups. Cultures were positive for 26 infants (87%) in the supine group and 9 infants (30%) in the lateral group. Compared with the lateral group, more infants in the supine group experienced increased colony counts or had new organisms in their tracheal aspirates over time (21 vs 8 infants). The most common organisms isolated from tracheal aspirates in both groups were Gram-negative rods. Conclusions: Respiratory contamination is very common among ventilated infants. Therefore, judicious use of mechanical ventilation cannot be overemphasized. Gravitational force can ameliorate the onset of respiratory colonization. The mechanism and clinical applicability of such observations need to be explored further. abstract_id: PUBMED:32506858 Comparative efficacy of honey 12.5% and chlorhexidine 0.2% mouthwashes on the oropharyngeal bacterial colonization in mechanically-ventilated patients: a randomized controlled trial. Objective: To compare the efficacy of honey mouthwash 12.5% and chlorhexidine solution 0.2% in reducing the rate of oropharyngeal bacterial colonization in mechanically-ventilated patients. Methods: This study was a randomized, single-blind, phase III controlled clinical trial. Sixty patients newly admitted to internal and trauma Intensive Care Units of the two educational hospitals of Sanandaj city affiliated with Kurdistan University of Medical Sciences were selected by convenience sampling and allocated to two groups of 30 patients using a random blocks design. In each group, the mouthwash was applied twice a day for four consecutive days. Swab samples were taken from the mouth and throat of all patients three times (pre-intervention, and two and four days after the intervention), and then the samples were transferred onto blood agar and eosin methylene blue (EMB) culture plates and investigated for bacterial growth and colonization after 24-48 h. Results: The findings showed that oropharyngeal colonization was not significantly different between the two groups pre-intervention, two days, and four days after the intervention (P > 0.05). Rinsing with honey mouthwash 12.5% led to the inhibition of Staphylococcus aureus and Pseudomonas aeruginosa on the fourth day of the intervention in all samples. Conclusion: None of the studied solutions contributed to the reduction of oropharyngeal bacterial colonization.
It seems that the growth inhibition of Staphylococcus aureus and Pseudomonas aeruginosa by the honey 12.5% mouthwash in mechanically-ventilated patients needs further investigation. abstract_id: PUBMED:25584431 Randomized intubation with polyurethane or conical cuffs to prevent pneumonia in ventilated patients. Rationale: The occurrence of ventilator-associated pneumonia (VAP) is linked to the aspiration of contaminated pharyngeal secretions around the endotracheal tube. Tubes with cuffs made of polyurethane rather than polyvinyl chloride or with a conical rather than a cylindrical shape increase tracheal sealing. Objectives: To test whether using polyurethane and/or conical cuffs reduces tracheal colonization and VAP in patients with acute respiratory failure. Methods: We conducted a multicenter, prospective, open-label, randomized study in four parallel groups in four intensive care units between 2010 and 2012. A cohort of 621 patients with expected ventilation longer than 2 days was included at intubation with a cuff composed of cylindrical polyvinyl chloride (n = 148), cylindrical polyurethane (n = 143), conical polyvinyl chloride (n = 150), or conical polyurethane (n = 162). We used Kaplan-Meier estimates and log-rank tests to compare times to events. Measurements And Main Results: After excluding 17 patients who secondarily refused participation or had met an exclusion criterion, 604 were included in the intention-to-treat analysis. Cumulative tracheal colonization greater than 10³ cfu/ml at Day 2 was as follows (median [interquartile range]): cylindrical polyvinyl chloride, 0.66 (0.58-0.74); cylindrical polyurethane, 0.61 (0.53-0.70); conical polyvinyl chloride, 0.67 (0.60-0.76); and conical polyurethane, 0.62 (0.55-0.70) (P = 0.55). VAP developed in 77 patients (14.4%), and postextubational stridor developed in 28 patients (6.4%) (P = 0.20 and 0.28 between groups, respectively). Conclusions: Among patients requiring mechanical ventilation, polyurethane and/or conically shaped cuffs were not superior to conventional cuffs in preventing tracheal colonization and VAP. Clinical trial registered with clinicaltrials.gov (NCT01114022). abstract_id: PUBMED:17943823 Closed tracheal suction systems versus open tracheal suction systems for mechanically ventilated adult patients. Background: Ventilator-associated pneumonia is a common complication in ventilated patients. Endotracheal suctioning is a procedure that may constitute a risk factor for ventilator-associated pneumonia. It can be performed with an open system or with a closed system. In view of the advantages suggested for the closed system, a systematic review comparing both techniques was warranted. Objectives: To compare the closed tracheal suction system and the open tracheal suction system in adults receiving mechanical ventilation for more than 24 hours. Search Strategy: We searched CENTRAL (The Cochrane Library 2006, Issue 1), MEDLINE, CINAHL, EMBASE and LILACS from their inception to July 2006. We handsearched the bibliographies of relevant identified studies, and contacted authors and manufacturers. Selection Criteria: The review included randomized controlled trials comparing closed and open tracheal suction systems in adult patients who were ventilated for more than 24 hours. Data Collection And Analysis: We included the relevant trials fitting the selection criteria.
We assessed methodological quality using the method of randomization, concealment of allocation, blinding of outcome assessment and completeness of follow-up. Effect measures used for pooled analyses were relative risk (RR) for dichotomous data and weighted mean differences (WMD) for continuous data. We assessed heterogeneity prior to meta-analysis. Main Results: Of the 51 potentially eligible references, the review included 16 trials (1684 patients), many with methodological weaknesses. The two tracheal suction systems showed no differences in risk of ventilator-associated pneumonia (11 trials; RR 0.88; 95% CI 0.70 to 1.12), mortality (five trials; RR 1.02; 95% CI 0.84 to 1.23) or length of stay in intensive care units (two trials; WMD 0.44; 95% CI -0.92 to 1.80). The closed tracheal suction system produced higher bacterial colonization rates (five trials; RR 1.49; 95% CI 1.09 to 2.03). Authors' Conclusions: Results from 16 trials showed that suctioning with either closed or open tracheal suction systems did not have an effect on the risk of ventilator-associated pneumonia or mortality. More studies of high methodological quality are required, particularly to clarify the benefits and hazards of the closed tracheal suction system for different modes of ventilation and in different types of patients. abstract_id: PUBMED:26523433 Randomized Pilot Trial of Two Modified Endotracheal Tubes To Prevent Ventilator-associated Pneumonia. Rationale: Ventilator-associated pneumonia (VAP) is a prevalent and costly nosocomial infection related to instrumentation of the airway with an endotracheal tube (ETT), enabling microaspiration of contaminated secretions. Modification of the ETT design to reduce microaspiration and/or biofilm formation may play an important role in VAP prevention. However, there is insufficient evidence to provide strong recommendations regarding the use of modified ETTs, and safety concerns remain unaddressed. Objectives: We performed a pilot randomized controlled trial comparing two modified ETTs designed specifically to prevent VAP, with the standard ETT, to test the feasibility of and inform planning for a large, pivotal, randomized trial. Methods: This study was conducted with institutional review board approval under exception from informed consent. We randomized, in a blinded fashion, patients undergoing emergency endotracheal intubation both out of hospital and in hospital to receive one of three different ETT types: (1) a polyurethane-cuffed tube (PUC-ETT), (2) a polyurethane-cuffed tube equipped with a port for continuous aspiration of subglottic secretions (PUC-CASS-ETT), or (3) a standard polyvinylchloride-cuffed tube (PVC-ETT). In addition to investigating feasibility and safety, the study coprimary end points were tracheal bacterial colonization reaching >10⁶ cfu per milliliter and the incidence of invasively diagnosed VAP. Measurements And Main Results: A total of 102 subjects were randomized and met the eligibility criteria. Randomization procedures performed well and the integrity of blinding at randomization was maintained. The majority of intubations occurred in the hospital setting (n = 77), and the remainder occurred out of hospital (n = 25). Compared with the PVC-ETT, there were no significant differences in tracheal colonization for PUC-ETT (odds ratio [OR], 0.98; 95% confidence interval [CI], 0.31-3.09) or for PUC-CASS-ETT (OR, 1.26; 95% CI, 0.42-3.76).
There were no differences in the risk of invasively diagnosed VAP (OR, 1.14; 95% CI, 0.21-6.08 for PUC-ETT; OR, 1.47; 95% CI, 0.30-7.10 for PUC-CASS-ETT), or of clinically diagnosed VAP by either clinical signs or chest radiograph criteria. We did not observe unexpected or serious adverse events related to the devices. Conclusions: A randomized trial of ETTs inserted during emergency intubation for the prevention of VAP is feasible and did not appear to carry heightened safety concerns. These preliminary data did not suggest different patterns of tracheal colonization or occurrence of VAP among the study groups. Clinical trial registered with www.clinicaltrials.gov (NCT01744483). abstract_id: PUBMED:25425442 Impact of closed versus open tracheal suctioning systems for mechanically ventilated adults: a systematic review and meta-analysis. Purpose: The evidence on whether closed tracheal suctioning systems (CTSS) reduce the incidence of ventilator-associated pneumonia (VAP) compared with open tracheal suctioning systems (OTSS) is inconclusive. We conducted a systematic review and meta-analysis of randomized controlled trials that compared CTSS and OTSS. Methods: PubMed, the Cochrane Central Register of Controlled Trials, the Web of Science, Google Scholar, and a clinical trial registry from inception to October 2014 were searched without language restrictions. Randomized controlled trials of CTSS and OTSS that compared VAP in mechanically ventilated adult patients were included. The primary outcome was the incidence of VAP. Secondary outcomes were mortality and length of mechanical ventilation. Data were pooled using a random-effects model. Results: Sixteen trials with 1,929 participants were included. Compared with OTSS, CTSS was associated with a reduced incidence of VAP (RR 0.69; 95% CI 0.54-0.87; Q = 26.14; I² = 46.4%). Compared with OTSS, CTSS was not associated with reduction of mortality (RR 0.96; 95% CI 0.83-1.12; Q = 2.27; I² = 0.0%) or reduced length of mechanical ventilation (WMD -0.45 days; 95% CI -1.25 to 0.36; Q = 6.37; I² = 5.8%). Trial sequential analysis suggested a lack of firm evidence for a 20% RR reduction in the incidence of VAP. The limitations of this review included underreporting and low quality of the included trials, as well as variations in study procedures and characteristics. Conclusions: Based on current, albeit limited evidence, it is unlikely that CTSS is inferior to OTSS regarding VAP prevention; however, further trials at low risk of bias are needed to confirm or refute this finding. abstract_id: PUBMED:28158685 Low Efficacy of Antibiotics Against Staphylococcus aureus Airway Colonization in Ventilated Patients. Background: Airway colonization by Staphylococcus aureus predisposes to the development of ventilator-associated tracheobronchitis (VAT) and ventilator-associated pneumonia (VAP). Despite extensive antibiotic treatment of intensive care unit patients, limited data are available on the efficacy of antibiotics against bacterial airway colonization and/or the prevention of infections. Therefore, microbiologic responses to antibiotic treatment were evaluated in ventilated patients. Methods: Results of semiquantitative analyses of S. aureus burden in serial endotracheal-aspirate (ETA) samples and VAT/VAP diagnosis were correlated with antibiotic treatment. Minimum inhibitory concentrations of relevant antibiotics were evaluated using serially collected isolates. Results: Forty-eight mechanically ventilated patients who were S.
aureus positive by ETA samples and treated with relevant antibiotics for at least 2 consecutive days were included in the study. Vancomycin failed to reduce methicillin-resistant S. aureus (MRSA) or methicillin-susceptible S. aureus (MSSA) burden in the airways. Oxacillin was ineffective for MSSA colonization in approximately 30% of the patients, and responders were typically coadministered additional antibiotics. Despite antibiotic exposure, 15 of the 39 patients (approximately 38%) colonized only by S. aureus and treated with an appropriate antibiotic for at least 2 days still progressed to VAP. Importantly, no change in antibiotic susceptibility of S. aureus isolates was observed during treatment. Staphylococcus aureus colonization levels inversely correlated with the presence of normal respiratory flora. Conclusions: Antibiotic treatment is ineffective in reducing S. aureus colonization in the lower airways and preventing VAT or VAP. Staphylococcus aureus is in competition for colonization with the normal respiratory flora. To improve patient outcomes, alternatives to antibiotics are urgently needed. abstract_id: PUBMED:34609548 Effect of oral chlorhexidine de-adoption and implementation of an oral care bundle on mortality for mechanically ventilated patients in the intensive care unit (CHORAL): a multi-center stepped wedge cluster-randomized controlled trial. Purpose: Oral chlorhexidine is used widely for mechanically ventilated patients to prevent pneumonia, but recent studies show an association with excess mortality. We examined whether de-adoption of chlorhexidine and parallel implementation of a standardized oral care bundle reduces intensive care unit (ICU) mortality in mechanically ventilated patients. Methods: A stepped wedge cluster-randomized controlled trial with concurrent process evaluation in 6 ICUs in Toronto, Canada. Clusters were randomized to de-adopt chlorhexidine and implement a standardized oral care bundle at 2-month intervals. The primary outcome was ICU mortality. Secondary outcomes were time to infection-related ventilator-associated complications (IVACs), oral procedural pain and oral health dysfunction. An exploratory post hoc analysis examined time to extubation in survivors. Results: A total of 3260 patients were enrolled; 1560 control and 1700 intervention. ICU mortality for the intervention and control periods was 399 (23.5%) and 330 (21.2%), respectively (adjusted odds ratio [aOR], 1.13; 95% confidence interval [CI] 0.82 to 1.54; P = 0.46). Time to IVACs (adjusted hazard ratio [aHR], 1.06; 95% CI 0.44 to 2.57; P = 0.90), time to extubation among survivors (aHR 1.03; 95% CI 0.85 to 1.23; P = 0.79) and oral procedural pain (aOR, 0.62; 95% CI 0.34 to 1.10; P = 0.10) were similar between control and intervention periods. However, oral health dysfunction scores (-0.96; 95% CI -1.75 to -0.17; P = 0.02) improved in the intervention period. Conclusion: Among mechanically ventilated ICU patients, no benefit was observed for de-adoption of chlorhexidine and implementation of an oral care bundle on ICU mortality, IVACs, oral procedural pain, or time to extubation. The intervention may improve oral health. abstract_id: PUBMED:28303301 Impact of tapered-cuff tracheal tube on microaspiration of gastric contents in intubated critically ill patients: a multicenter cluster-randomized cross-over controlled trial.
Purpose: Studies on the impact of tapered-cuff tracheal tubes on rates of microaspiration and ventilator-associated pneumonia (VAP) in intubated patients have reported conflicting results. The aim of this study was to determine the influence of this shape of tracheal cuff on abundant microaspiration of gastric contents in critically ill patients. Methods: All patients intubated in the intensive care unit (ICU) and requiring mechanical ventilation for at least 48 h were eligible for this multicenter cluster-randomized controlled cross-over open-label study. The primary outcome was abundant microaspiration of gastric contents, defined by the presence of pepsin at a significant level in >30% of tracheal aspirates. Quantitative measurement of pepsin and salivary amylase was performed in all tracheal aspirates during the 48 h following enrollment. Results: A total of 326 patients were enrolled in the ten participating ICUs (162 in the PVC tapered-cuff group and 164 in the standard-cuff group). Patient characteristics were similar in the two study groups. The proportion of patients with abundant microaspiration of gastric contents was 53.5% in the tapered-cuff group and 51.0% in the standard-cuff group (odds ratio 1.14, 95% CI 0.72-1.82). While abundant microaspiration of oropharyngeal secretions was not significantly different (77.4 vs 68.6%, p = 0.095), the proportion of patients with tracheobronchial colonization was significantly lower (29.6 vs 43.3%, p = 0.01) in the tapered-cuff than in the standard-cuff group. No significant difference between the two groups was found for other secondary outcomes, including ventilator-associated events and VAP. Conclusions: This trial showed no significant impact of tapered-cuff tracheal tubes on abundant microaspiration of gastric contents. Trial Registration: ClinicalTrials.gov, number NCT01948635. abstract_id: PUBMED:17320243 Humidification policies for mechanically ventilated intensive care patients and prevention of ventilator-associated pneumonia: a systematic review of randomized controlled trials. The Dutch Working Party on Infection Prevention (WIP) aimed to determine whether certain humidification policies are better than others in terms of prevention of ventilator-associated pneumonia (VAP) in mechanically ventilated intensive care unit (ICU) patients. Publications were retrieved by a systematic search of Medline and the Cochrane Library up to February 2006. All (quasi-) randomized trials and systematic reviews/meta-analyses comparing humidification methods in ventilated ICU patients were selected. Two reviewers independently assessed trial quality and extracted data. If the data were incomplete, clarification was sought from original authors and used to calculate the relative risk of VAP. Data for VAP were combined in the analysis, where appropriate, using a random-effects model. Ten trials were included in the review. In general, the quality of the trials and the way they were reported were unsatisfactory. The results did not show any benefit from specific humidification techniques in terms of reducing VAP. WIP does not recommend either passive or active humidifiers specifically to prevent VAP, nor any particular type of passive humidifier. Regarding active humidification, WIP recommends using heated wire circuits. This is due to the theoretical consideration that removing condensate reduces colonization and the subsequent risk of spread throughout an ICU.
Answer: The randomized, controlled trial described in abstract PUBMED:18829800 aimed to test the hypothesis that positioning intubated infants on their sides, as opposed to supine, would reduce the likelihood of bacterial colonization in their tracheae, which could potentially prevent ventilator-associated pneumonia (VAP). The study involved 60 intubated infants, with 30 positioned supine and 30 maintained in the lateral position. Tracheal aspirates were cultured after 48 hours and after 5 days of mechanical ventilation. The results showed that after 2 days, there was no statistical difference in the number of positive tracheal cultures between the supine group (67%) and the lateral group (47%). However, after 5 days of mechanical ventilation, there was a significant difference between the groups. The supine group had a higher rate of positive cultures (87%) compared to the lateral group (30%). Additionally, more infants in the supine group experienced increased colony counts or had new organisms in their tracheal aspirates over time compared to the lateral group (21 vs 8 infants). The most common organisms isolated from tracheal aspirates in both groups were Gram-negative rods. The study concluded that respiratory contamination is very common among ventilated infants, emphasizing the importance of judicious use of mechanical ventilation. The findings suggest that gravitational force can ameliorate the onset of respiratory colonization, indicating that positioning infants on their sides may have a beneficial effect in reducing tracheal colonization and potentially preventing VAP. However, the mechanism and clinical applicability of these observations require further exploration.
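For readers who want to connect the percentages in this answer back to an effect size: the day-5 counts reported in the abstract (26 of 30 supine vs 9 of 30 lateral) imply a relative risk that the abstract does not state explicitly. Below is a minimal Python sketch using the standard log-normal approximation for the confidence interval; the RR and CI are our own calculation, not figures from PUBMED:18829800.

    import math

    def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
        # Risk ratio of group A vs group B with a log-normal 95% CI.
        rr = (events_a / n_a) / (events_b / n_b)
        se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
        lo, hi = (math.exp(math.log(rr) + s * z * se_log) for s in (-1, 1))
        return rr, lo, hi

    rr, lo, hi = relative_risk(26, 30, 9, 30)
    print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # about 2.89 (1.64-5.08)

In other words, supine positioning was associated with roughly a threefold higher risk of a positive tracheal culture by day 5, which is why the answer characterizes the day-5 difference as significant.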
Instruction: Is elective hernia repair worthwhile in old patients? Abstracts: abstract_id: PUBMED:10365833 Is elective hernia repair worthwhile in old patients? Objective: To find out if elective herniorrhaphy in patients aged 75 and over is worthwhile. Design: Retrospective study. Setting: District hospital, Sweden. Subjects: 146 consecutive patients aged 75 years or more, who had their hernias repaired during the period 1992-95. Main Outcome Measures: Patient satisfaction measured by a five-point analogue scale. Clinical and personal details, morbidity, mortality, and surgical variables were obtained from case records. Results: Community social service was not required by 114 (78%) of the patients and 15 (22%) had no preoperative complaints. Our patients rated their satisfaction with their choice to have an operation, as well as its effect on their preoperative symptoms, as 4.9. Emergency operations (p = 0.02), femoral hernias (p = 0.01) and direct inguinal hernias (direct:indirect ratio 0.81) were more common in this age group. Femoral and direct inguinal hernias tended to recur more often than usual. Emergency operation, dementia, and diabetes were associated with a reduced short-term survival. Conclusion: Elective hernia repair in an elderly population is highly appreciated by the patients, and worthwhile. If coexisting disease and domestic arrangements are controlled, the patients' need for hospital care can be minimised. Mesh is recommended in femoral and direct inguinal hernias, which were associated with an increased reoperation frequency. A more vigilant protocol of indications for hernia surgery in the aged may minimise the need for both emergency and unnecessary operations. abstract_id: PUBMED:28342133 Is old age a contraindication to elective ventral hernia repair? Background: Ventral hernia repair (VHR) is a frequent problem in the expanding aging population. However, advanced age is often viewed as a contraindication to elective hernia surgery. We aimed to analyze outcomes of VHR in a large cohort of elderly patients. We hypothesized that elective VHR is safe and effective even in patients over 70 years old. Methods: We conducted a retrospective review of consecutive patients over the age of 70 who underwent VHR at a tertiary care hospital. Main outcome measures included postoperative complications and recurrence rate. Results: Between 2006 and 2015, 263 elderly patients who underwent elective VHR were included. Major comorbidities included diabetes, COPD, and smoking history. The majority of the patients underwent open repairs. Surgical site events occurred in 54 patients (21%). Postoperative complications included 17 venous thromboembolism events and 2 myocardial infarctions; 41 patients required postoperative critical care, and there was 1 death. Readmission within 90 days postoperatively occurred in 34 patients (13%). At a mean follow-up of 25.6 months, 17 patients in the open group and 6 patients in the laparoscopic group had a recurrence. Conclusion: We demonstrated that VHR can be performed reasonably safely and effectively even in this potentially risky cohort. The use of laparoscopy might be associated with further reduction in morbidity. Overall, age should not be a contraindication to elective VHR, even in patients over 70 years old. abstract_id: PUBMED:31285652 Emergent Repair of Paraesophageal Hernias and the Argument for Elective Repair.
Background And Objectives: A feared complication of large paraesophageal hernias is incarceration necessitating emergent repair. According to previous studies, patients who require an emergent operation are subject to increased morbidity compared with patients undergoing elective operations. In this study, we detail patients who underwent hernia repair emergently and compare their outcomes with those of elective patients. Methods: A retrospective analysis was performed of paraesophageal hernia repair operations performed between 2010 and 2016. Patients were divided into 2 groups: patients with hernias that were repaired electively and patients with hernias that were repaired emergently. Perioperative complications and follow-up data regarding morbidity, mortality, and recurrence were also recorded. A propensity analysis was used to compare the emergent and elective groups. Results: Thirty patients had hernias repaired emergently, and 199 patients underwent elective procedures. Patients undergoing emergent repair were more likely to have a type IV hernia, have a partial gastrectomy or gastrostomy tube insertion as part of their procedure, have a postoperative complication, and have a longer hospital stay. However, the propensity analysis demonstrated that when characteristics of the emergent and elective groups were matched, differences in these factors were no longer significant. Having an emergent operation did not increase a patient's risk for recurrence. Conclusion: Patients who had their hernias repaired emergently experienced complications at rates similar to those of elective patients with advanced age or comorbid conditions, as demonstrated by the propensity analysis. The authors therefore recommend evaluation of all paraesophageal hernias for elective repair, especially in younger patients who are otherwise good operative candidates. abstract_id: PUBMED:32577212 Incarcerated femoral hernia treated with elective totally extraperitoneal repair: a case report. Incarcerated groin hernia is a common surgical emergency. However, reports of incarcerated femoral hernia treated with elective totally extraperitoneal repair are extremely rare. A 62-year-old woman visited our hospital with lower abdominal pain and bulging from a right groin lesion. The patient was diagnosed by computed tomography as having a right incarcerated femoral hernia containing greater omentum. As there were no clear findings of intestinal obstruction or peritonitis, elective surgery was performed. Intraoperatively, the hernia sac had herniated into the right femoral canal. We were able to release the hernia sac using laparoscopic forceps. After reduction of the hernia sac, polypropylene mesh was placed in the preperitoneal space and fixed to Cooper's ligament. The patient's postoperative course was uneventful, and she was discharged 3 days after surgery. We consider elective totally extraperitoneal repair for incarcerated femoral hernia to be an effective procedure for selected patients who have been diagnosed accurately. abstract_id: PUBMED:38446277 Outcomes of elective and emergency surgical repair of incisional hernia: a comparative observational study. Purpose: Incisional hernia (IH) is a common complication following abdominal surgery. Surgical repair of IH is associated with the alleviation of symptoms and improvement of quality of life. Operative intervention can pose a significant burden to the patient and healthcare facilities. This study aims to describe and compare outcomes of elective and emergency surgical repair of IH.
Methods: This study is a single-centre comparative retrospective study including patients who had repair of IH. Patients were divided into Group I (Emergency) and Group II (Elective), and a comparison was conducted between them. Results: Two hundred sixty-two patients were identified with a mean age of 61.8 ± 14.2 years, of which 152 (58%) were females. The mean BMI was 31.6 ± 7.2 kg/m2. More than 58% had at least one comorbidity. 169 (64.5%) patients had an elective repair, and 93 (35.5%) had an emergency repair. Patients undergoing emergency repair were significantly older and had a higher BMI, p = 0.031 and p = 0.002, respectively. The significant complication rate (Clavien-Dindo III and IV) was 9.54%. 30- and 90-day mortality rates were 2.3% (n = 6) and 2.68% (n = 7), respectively. In the emergency group, the overall complications, 30-day and 90-day mortality rates were significantly higher than in the elective group, p ≤ 0.001, 0.002 and 0.001, respectively. Overall, 42 (16.1%) developed wound complications, 25 (9.6%) experienced a recurrence, and 41 (15.71%) were readmitted within 90 days, without significant differences between the two groups. Conclusion: Patients who underwent emergency repair were significantly older and had a higher BMI than the elective cases. Emergency IH repair is associated with higher complication rates and mortality than elective repair. abstract_id: PUBMED:35641700 Mortality after emergency versus elective groin hernia repair: a systematic review and meta-analysis. Background: Emergency groin hernia repair is associated with increased mortality risk, but the actual risk is unknown. Therefore, this review aimed to investigate 30- and 90-day postoperative mortality in adult patients who had undergone emergency or elective groin hernia repair. Methods: This review was reported following PRISMA 2020 guidelines, and a protocol (CRD42021244412) was registered to PROSPERO. A systematic search was conducted in PubMed, EMBASE, and Cochrane CENTRAL in April 2021. Studies were included if they reported 30- or 90-day mortality following an emergency or elective groin hernia repair. Meta-analyses were conducted when possible, and subgroup analyses were made for bowel resection, sex, and hernia type. According to the study design, the risk of bias was assessed using either the Newcastle-Ottawa Scale or the Cochrane Risk of Bias tool. Results: Thirty-seven studies with 30,740 patients receiving emergency repair and 457,253 receiving elective repair were included. The 30-day mortality ranged from 0-11.8% to 0-1.7% following emergency and elective repair, respectively. The risk of 30-day mortality following emergency repair was estimated to be 26-fold higher than after elective repair (RR = 26.0, 95% CI 21.6-31.4, I2 = 0%). A subgroup meta-analysis on bowel resection in emergency repair estimated 30-day mortality to be 7.9% (95% CI 6.5-9.3%, I2 = 6.4%). Subgroup analyses on sex and hernia type showed no differences regarding the mortality risk in elective surgery. However, femoral hernia and female sex significantly increased the risk of mortality in emergency surgery, both given by a risk ratio of 1.7. Conclusion: The overall mortality after emergency groin hernia repair is 26-fold higher than after elective repair, but the increased risk is attributable mostly to female and femoral hernias. Trial Registration: PROSPERO protocol (CRD42021244412).
abstract_id: PUBMED:35082406 Elective incisional hernia repair: lower risk of postoperative wound infection with laparoscopic versus open repair. Introduction: The superiority of laparoscopic repair over open repair of incisional hernias (IHs) in the elective setting is still controversial. Our study aimed to compare the postoperative outcomes of laparoscopic and open elective IH repair in an Asian population. Methods: This retrospective study was conducted in an acute general hospital in Singapore between 2010 and 2015. Inclusion criteria were IH repair in an elective setting, IHs with a diameter of 3-15 cm, and location at the ventral abdominal wall. We excluded patients who underwent emergency repair, had recurrent hernias or had loss of abdominal wall domain (i.e. hernia sac containing more than 30% of abdominal contents or any solid organs). Postoperative outcomes within a year such as recurrence, pain, infection, haematoma and seroma formation were compared between the two groups. Results: There were 174 eligible patients. The majority were elderly Chinese women who were overweight. The mean operation time for open repair was 116 minutes (116 ± 60.6 minutes) and 139 minutes (136 ± 64.1 minutes) for laparoscopic repair (P = 0.079). Open repair was performed in 49.4% of patients, while 50.6% underwent laparoscopic repair. Within a year after repair, postoperative wound infection occurred in 15.1% of the patients in the open repair group compared to 1.1% in the laparoscopic group (P = 0.0007). Postoperative pain, recurrence and haematoma/seroma formation were comparable. Conclusion: Elective laparoscopic IH repair has comparable outcomes with open repair and may offer the advantage of reduced postoperative wound infection rates. abstract_id: PUBMED:25982375 Inpatient outcomes after elective versus nonelective ventral hernia repair. Background: Patients who present emergently with hernia-related concerns may experience increased morbidity with repair when compared with those repaired electively. We sought to characterize the outcomes of patients who undergo elective and nonelective ventral hernia (VH) repair using a large population-based data set. Materials And Methods: The Nationwide Inpatient Sample was queried for primary International Classification of Diseases, Ninth Revision codes associated with VH repair (years 2008-2011). Outcomes were in-hospital mortality and the occurrence of a preidentified complication. Multivariable analysis was performed to determine the risk factors for complications and mortality after both elective and nonelective VH repair. Results: We identified 74,151 VH repairs performed during the study interval. Of these procedures, 67.3% were elective and 21.6% were performed laparoscopically. Nonelective repair was associated with a significantly higher rate of morbidity (22.5% versus 18.8%, P < 0.01) and mortality (1.8% versus 0.52%, P < 0.01) than elective repair. Elective repairs were more likely to occur in younger patients and Caucasians, and were more likely to be performed laparoscopically. Logistic modeling revealed that female gender, Caucasian race, elective case status, and laparoscopic approach were independently associated with a lower probability of complications and mortality. Minority status and Medicaid payer status were associated with increased probability of nonelective admission. Conclusions: Patients undergoing elective VH repair in the United States tend to be younger, Caucasian, and more likely to have a laparoscopic repair.
Nonelective VH repair is associated with a substantial increase in morbidity and mortality. We recommend that patients consider elective repair of VHs because of the increased morbidity and mortality associated with nonelective repair. abstract_id: PUBMED:27165663 The effectiveness of i.v. cefuroxime prophylaxis of surgical site infection after elective inguinal hernia repair with mesh: A retrospective observational study. Purpose: The efficacy of routine antibiotic prophylaxis for prevention of surgical site infection (SSI) after elective inguinal hernia repair with a mesh patch remains uncertain. The authors of a recent Cochrane meta-analysis based on 17 randomized trials were unable to draw a definitive conclusion on this subject. The purpose of this study was to determine the effectiveness of prophylactic antibiotics for prevention of SSI after elective inguinal hernia repair with mesh and the risk factors for SSI. Methods: All low-risk patients who underwent elective inguinal hernia repair with mesh at our institution between 2010 and 2015 were enrolled in this study, with the exception of patients with recurrent hernias or immunosuppressive diseases. Patients received a single intravenous (i.v.) injection of cefuroxime (1.5 g) within 2 h prior to surgery, at the discretion of the surgeon. SSI was defined using criteria of the Centers for Disease Control and Prevention. The variables which could influence the rate of SSI were analyzed by multivariate analysis to determine the independent risk factors for SSI. Results: Among the 605 patients who underwent elective inguinal hernia repair with mesh during the study period, 553 were eligible for enrolment in the study. Of these, 331 received a single dose of cefuroxime preoperatively. The overall SSI rate was 5.4%; 9.4% of those patients who did not receive preoperative antibiotic prophylaxis developed SSI versus 2.8% of those who did receive prophylaxis (P = 0.001). All infections were superficial. Factors independently associated with SSI were advanced age, smoking and preoperative stay. Conclusions: The incidence of SSI among low-risk patients who did and did not receive preoperative antibiotic prophylaxis after elective inguinal hernia repair with mesh differed significantly, particularly among patients of advanced age, smokers and patients with a prolonged preoperative stay in the hospital. abstract_id: PUBMED:32592152 Return to outdoor walking, car driving, and sexual activity following elective inguinal hernia repair: surgeons' perspective versus patients' reality. Purpose: To determine the time to resumption of outdoor walking, car driving, sports, and sexual activity following elective inguinal hernia repair, and to reveal if there are differences between surgeons' recommendations and patients' real return times. Methods: A questionnaire including questions about recommendations to hernia patients for times to resume outdoor walking ability without assistance, driving and sexual intercourse after an elective inguinal hernia repair was sent to surgeons. Also, a short questionnaire was sent to patients who had undergone elective inguinal hernia repair to determine the exact times for resuming these physical activities. Results: Surgeons' thoughts and recommendations to their patients varied significantly. The range of recommendations was from the same day to 20 days for outdoor walking, and from the same day to 3 months for both driving and sexual intercourse.
Patients' actual resumption times for postoperative activities were 1-14 days for outdoor walking, 1 day to 3 months for driving, and 1 day to 2 months for sexual intercourse. When the answers from the two questionnaires were compared, it was observed that the mean times for resumption of outdoor walking and sexual intercourse were significantly longer in practice than those recommended by the surgeons. Patients ≥ 60 years were able to walk outside, drive, and participate in sexual activity earlier than the younger patients. Bilateral and recurrent hernia repairs caused slower resumption of different activities in comparison to primary hernias. Conclusions: Patients reported that times for resumption of outdoor walking, driving, and sexual activity were significantly longer than those recommended by surgeons. Age, BMI, bilateral repair, and recurrent hernias were found to be factors affecting return time to different activities. Answer: Yes, elective hernia repair is considered worthwhile in old patients. A retrospective study on patients aged 75 and over who underwent elective herniorrhaphy showed that the majority of these patients did not require community social service postoperatively, and they expressed high satisfaction with their decision to have the operation and its effect on their preoperative symptoms. The study concluded that elective hernia repair in an elderly population is highly appreciated by the patients and worthwhile, especially if coexisting disease and domestic arrangements are controlled to minimize the need for hospital care. The study also recommended the use of mesh in femoral and direct inguinal hernias, which were associated with an increased reoperation frequency (PUBMED:10365833). Additionally, a retrospective review of consecutive patients over the age of 70 who underwent elective ventral hernia repair (VHR) at a tertiary care hospital found that VHR can be performed reasonably safely and effectively in this potentially risky cohort. The study concluded that age should not be a contraindication to elective VHR, even in patients over 70 years old (PUBMED:28342133). Furthermore, a retrospective analysis of paraesophageal hernia repair operations indicated that patients who had their hernias repaired emergently experienced complications at similar rates as those of elective patients with advanced age or comorbid conditions, as demonstrated by the propensity analysis. The authors recommended evaluation of all paraesophageal hernias for elective repair, especially in younger patients who are otherwise good operative candidates (PUBMED:31285652). In summary, the evidence suggests that elective hernia repair is worthwhile and can be safely and effectively performed in elderly patients, with high patient satisfaction and reasonable safety profiles.
Instruction: BCA-225 immunostain: is it of value in liver fine needle aspiration? Abstracts: abstract_id: PUBMED:34459153 Significance of hepatocyte atypia in liver fine needle aspiration. Fine needle aspiration (FNA) of the liver is frequently the diagnostic procedure of choice for sampling hepatic lesions. One of the main diagnostic challenges in the interpretation of liver FNA is distinguishing dysplastic lesions and well-differentiated hepatocellular carcinoma (WD-HCC) from benign processes, as they share significant cytomorphologic overlap. Furthermore, the diagnosis of HCC often requires evaluation of stroma for invasion, which may not be present on cytology and small needle biopsy specimens. A reporting system for liver cytopathology has yet to be instituted. Without standardized and well-defined criteria for hepatocyte atypia, we recommend limiting the use of atypia in evaluation of liver FNA specimens to describe a diagnosis of exclusion, in which all known benign and neoplastic processes have been ruled out. The cytologic findings on the FNA of a liver nodule may be best reported as atypical hepatocytes in the absence of a core needle biopsy or cell block sufficient to render a definitive diagnosis of HCC. abstract_id: PUBMED:36140619 Endoscopic Ultrasound-Guided Fine-Needle Biopsy versus Fine-Needle Aspiration in the Diagnosis of Focal Liver Lesions: Prospective Head-to-Head Comparison. Endoscopic ultrasound-guided fine-needle biopsy (EUS-FNB) or fine-needle aspiration (EUS-FNA) from focal liver lesions is indicated in selected cases, but there has been no previous comparison of needle types of the same size. The aim of our study was to compare the histologic diagnostic accuracy and adequacy of cores obtained with EUS-FNB needles in contrast to those obtained with FNA needles in focal liver lesions. This prospective one-center study included patients with left lobe hepatic focal lesions with contraindications for percutaneous liver biopsy or need for EUS for concomitant lesions. Each patient had one pass of a 22G EUS-FNB (Franseen) needle and one pass of 22G EUS-FNA in a crossover manner, without macroscopic on-site evaluation. Each sample was analyzed separately for histologic adequacy and diagnosis. The final diagnosis was based on histology results or on imaging follow-up in the case of negative biopsies. The EUS-FNB samples (n = 30) were found to be more adequate for histologic analysis, with more cellularity and longer tissue aggregates than the EUS-FNA samples (n = 30). The accuracy of EUS-FNB was 100%, whereas that of EUS-FNA was 86.7% (p = 0.039). No post-procedure complications were noted. The 22G EUS-FNB needle proved superior to 22G EUS-FNA in terms of tissue acquisition, diagnostic accuracy, and histologic adequacy in focal liver lesions. abstract_id: PUBMED:28105065 An unexpected diagnosis of ectopic liver diagnosed by fine needle aspiration. The differential diagnosis of perigastric masses is broad, ranging from benign to malignant entities. Among the benign entities, accessory liver lobes and ectopic liver are unusual and often incidentally discovered. Here, we report a patient with malignant melanoma who was clinically suspected to have a perigastric metastasis or a gastrointestinal stromal tumor but was ultimately diagnosed by fine needle aspiration (FNA) to have benign ectopic liver. A 47-year-old male was diagnosed with malignant melanoma of the scalp in May 2015 at a tertiary care hospital.
He was found to have a 2.6 cm enhancing mass adjacent to the fundus of the stomach and below the diaphragm by computed tomography imaging. To exclude metastasis, the patient was referred to endoscopy, and an endoscopic ultrasound-guided FNA was performed with rapid on-site evaluation (ROSE) by a cytopathologist. A relatively new FNA needle (Shark Core) was used, which produced useful core biopsy material. Cytopathology demonstrated flat sheets, single cells, and small clusters of polygonal cells. There was abundant granular cytoplasm, often containing pigment. Cells lacked pleomorphism. The smear findings appeared consistent with hepatocytes. The cell block demonstrated small core fragments of hepatic parenchyma with portal tracts. Immunohistochemistry for arginase-1 confirmed that this was hepatic tissue. ROSE was useful for communicating with the endoscopist that the mass was both far from, and not connected to, the liver. This is the first documented account of perigastric ectopic liver diagnosed by FNA. This entity should be considered in the differential of perigastric masses. abstract_id: PUBMED:24551435 Combination of aspiration and non-aspiration fine needle biopsy for cytological diagnosis of thyroid nodules. Background: A good cytological sample is very important for the cytological diagnosis of thyroid nodules. The aim of this study was to evaluate the adequacy of prepared samples by the combination of aspiration and non-aspiration fine needle biopsy. Methods: In this descriptive-analytical study, sampling was done simultaneously for each patient in fine needle aspiration and non-aspiration biopsy. The sufficiency of samples was studied using the Mair Scoring System. Wilcoxon Signed Rank test was used for the data analysis. Results: Three hundred two cases (289 females, 13 males) with the mean age of 43.83±12.9 years were evaluated. Inadequate samples were 31 (10.3%) in fine needle aspiration, 40 (13.2%) in non-aspiration and 13 cases (4.3%) by using two methods together (p=0.0001). The average total score was 6.00±2.17 in fine needle aspiration and 5.76±2.26 in non-aspiration method (p=0.08), and 6.6±1.98 in the combination of the two methods (p<0.0001 comparing with one method alone). Conclusion: The results show that using both methods simultaneously in each nodule considerably increases the efficiency of samples for cytological diagnosis. abstract_id: PUBMED:35499200 Fine needle aspiration cytology of a space-occupying lesion in the liver. The fine needle aspiration cytology and histopathology of a space-occupying lesion in the liver. abstract_id: PUBMED:25190979 Fine needle aspiration cytology of lesions of liver and gallbladder: An analysis of 400 consecutive aspirations. Background: Patients presenting with mass lesions of liver and gallbladder are a common occurrence in a cancer hospital in the north central part of India. Fine-needle aspiration cytology (FNAC) serves as the first line of pathological investigation, but there are pros and cons involved. Aim: The main objective of the present study was to establish adequacy of the procedure and to find out diagnostic pitfalls. An attempt was made to analyze inconclusive and inadequate aspirations. Materials And Methods: A total of 400 consecutive fine-needle aspirates of liver, belonging to 328 cases over a period of 2 years, were analyzed. Hematoxylin and eosin and May-Grünwald-Giemsa stains were used. Chi-square test was carried out to assess the significance of differences between different kinds of diagnoses.
Results: Out of 400 aspirations, 289 (72.2%) were adequate, 75 (18.7%) inconclusive, and 36 (9%) inadequate. Among positive aspirations the most common was metastatic adenocarcinoma, 128 (44.2%). The rates of positive diagnoses and adequate aspirations were significantly high (P < 0.0001). Major differential diagnostic problems were distinguishing poorly differentiated hepatocellular carcinoma from metastatic adenocarcinoma, and leukemia/lymphoma from other malignant round cell tumors. Common diagnostic pitfalls were repeated aspirations from the necrotic area and aspiration of atypical, disorganized and reactive hepatocytes, adjacent to a metastasis. No complications were observed. Conclusion: FNAC can be used successfully for the diagnosis of liver and gallbladder lesions, thus avoiding open biopsy. The study indicates the potential of using FNAC in clinical intervention where the incidence of gallbladder and liver cancer is very high and open biopsy and surgery are not an option. abstract_id: PUBMED:23911255 Diagnostic yield of fine-needle aspiration puncture for cytology of multifocal liver lesions: a comparison between image-guided and blind aspirations. Background: In patients with palpable hepatomegaly and space occupying lesions demonstrated by imaging techniques, blind fine needle aspiration puncture (FNAP) at the patient's bedside is feasible. Objective: To compare the diagnostic yield of the fine needle aspiration puncture (FNAP) performed blindly or under radiological control in patients with hepatomegaly and multiple solid space occupying lesions in the liver, demonstrated by ultrasonography. Methods: A retrospective study was made of 169 consecutive FNAPs of liver tumors performed either blindly at the bedside by an internist (55 patients) or image-guided by a radiologist (114 patients). Results: The diagnostic yield of the technique performed blindly to demonstrate malignancy was 78% (95% confidence interval [CI]: 66-87%) versus 83% (95% CI: 75-89%, P=.42) obtained in the image-guided FNAPs. The diagnostic yield did not vary based on type of tumor invading the liver. Conclusion: Blind FNAP procedures in patients with palpable enlarged liver due to solid space occupying lesions are as effective as those performed under radiological control. abstract_id: PUBMED:30464152 Optimal needle size for thyroid fine needle aspiration cytology. Concerning the needle size for thyroid fine needle aspiration cytology (FNAC), 25-27-gauge needles are generally used in Western countries. However, in Japan, the use of larger needles (21-22-gauge needles) is common. The aim of our study was to determine the optimal needle size for thyroid FNAC. We performed ultrasound-guided FNAC for 200 thyroid nodules in 200 patients using two different-sized needles (22 and 25 gauge). For each nodule, two passes with the different-sized needles were performed. The order of needle sizes was reversed for the second group of 100 nodules. The second aspiration was more painful than the first, regardless of the needle size. More severe blood contamination was observed more frequently with the use of 22-gauge needles (32.0%) than with the use of 25-gauge needles (17.5%) and in the second aspiration (37.5%) than in the initial aspiration (12.0%). The initial aspiration samples were more cellular than the second aspiration samples. Regarding the unsatisfactory and malignancy detection rates, there was no statistical difference between the needles.
In three of seven markedly calcified nodules, it was difficult to insert 25-gauge needles into the nodules. In terms of the diagnostic accuracy and pain, either needle size can be used. We recommend using 22-gauge needles for markedly calcified nodules because 25-gauge needles bend more easily in such cases. We demonstrated that the initial aspiration tended to obtain more cellular samples and to be less contaminated. Thus, the initial aspiration is more important and should be attended to closely. abstract_id: PUBMED:31043250 For diagnosis of liver masses, fine-needle aspiration versus needle core biopsy: which is better? Introduction: Both fine needle aspiration (FNA) and needle core biopsy (NCB) are widely accepted methods for obtaining diagnostic material. There is variability in how different institutions use these techniques in assessing liver masses. The aim of this study is to compare the diagnostic accuracy and tissue quality between FNA and NCB, and create a cost-effective algorithm for evaluating liver masses. Materials And Methods: A database search was performed to detect all liver FNA cases and their corresponding NCB between January 2014 and August 2016. A retrospective chart review was performed to gather pertinent clinicopathologic information. Results: Seventy-seven FNA and 68 corresponding NCB were reviewed from 74 patients. Diagnoses in the 74 patients included 36 hepatocellular carcinomas (HCC), 29 metastatic malignancies (MET), 5 poorly differentiated carcinomas (PDC), 2 cholangiocarcinomas (CHO), and 2 benign lesions (BEN). More immunohistochemical (IHC) studies (P < 0.05) were performed on NCB tissues than FNA tissues in HCC (mean, 2.1 versus 0.8), MET (2.5 versus 0.5), and PDC groups (11.2 versus 0.2). The false negative rate (FNR) of NCB was lower (P < 0.05) than that of FNA in the HCC group; and FNR of NCB was higher (P < 0.05) than that of FNA in the MET group. Conclusions: For HCC, NCB usually has better tissue quality and diagnostic accuracy than FNA; for metastatic lesions in the liver, FNA has better diagnostic accuracy than NCB, although NCB can provide more tissue for ancillary testing and has better diagnostic quality. Choosing the appropriate diagnostic method is important for improving diagnostic accuracy and saving medical resources. abstract_id: PUBMED:31448027 The importance of fine needle aspiration biopsy in the diagnosis of parotid tumors. This study aimed to determine the diagnostic value of fine needle aspiration biopsy (FNAB) and its contribution to the management of parotid tumors. We conducted a retrospective study of 47 patients who had undergone parotidectomy as well as preoperative fine needle aspiration biopsy. The study highlighted that 81% of patients had a benign tumor while 19% of patients had a malignant tumor. The sensitivity and specificity of FNAB were 78% and 92%, respectively. Parotid tumors were correctly classified as malignant or benign in 89% of cases; overall accuracy was 64.4%. FNAB is a reliable examination providing preoperative information about the treatment plan and the postoperative course. Answer: The provided abstracts do not mention the use of the BCA-225 immunostain in the context of liver fine needle aspiration (FNA). Therefore, based on the given abstracts, it is not possible to determine the value of the BCA-225 immunostain in liver FNA.
Immunostains are often used in pathology to help identify specific types of cells and tissues, but the abstracts provided focus on the diagnostic challenges, techniques, and accuracy of FNA in liver lesions, as well as comparisons between FNA and other biopsy methods (PUBMED:34459153, PUBMED:36140619, PUBMED:28105065, PUBMED:24551435, PUBMED:35499200, PUBMED:25190979, PUBMED:23911255, PUBMED:30464152, PUBMED:31043250, PUBMED:31448027). They do not provide information on the use of BCA-225 or its relevance in liver cytology. To answer the question about the value of the BCA-225 immunostain in liver FNA, one would need to consult additional literature that specifically addresses the use of this immunostain in hepatic cytology.
Instruction: Public attitudes to people with depression: have there been any changes over the last decade? Abstracts: abstract_id: PUBMED:15555711 Public attitudes to people with depression: have there been any changes over the last decade? Background: A consistent finding of representative surveys that were conducted in Germany in the early 1990s was that people with depression encountered a substantial amount of stigma and discrimination. The aim of this study was to examine whether public attitudes have improved over the last decade or not. Methods: In 2001, a representative survey was carried out among the adult population of the "old" Federal Republic of Germany using the same methodology as in a previous survey in 1990. Results: Regarding emotional reactions of the respondents towards people with depression, our findings are inconsistent. While there has been an increase in the readiness to feel pity and also a slight increase in the tendency to react aggressively, the expression of fear remained unchanged. The public's desire for social distance from people with depression was as strong in 2001 as it had been in 1990. Conclusion: The hypothesis of a change for the better regarding public attitudes towards depressed people is not supported by our findings. Further efforts are needed to reach this goal. abstract_id: PUBMED:21408037 Schizophrenia research: Indian scene in last decade. Schizophrenia is a major psychiatric disorder that not only carries significant morbidity and disability for the sufferer but also imposes a major burden on society in terms of cost of care. Since its recognition, big strides have been made worldwide in order to understand and treat this disorder. In a developing country such as India, research into the various aspects of this disorder is still not a priority. Despite this scenario, a large number of centres across the country have been conducting research in the last four decades. During this time, the scope, emphasis and quality of research seem to have undergone a lot of change. In order to examine whether such a change actually exists and to assess the emphasis of research in the last decade, a review of research published in the Indian Journal of Psychiatry was undertaken. Various studies published on schizophrenia in the last decade (1990-2000) were reviewed. Important landmarks have been highlighted and limitations pointed out. abstract_id: PUBMED:33260413 Advances in the Chemical and Biological Characterization of Amaryllidaceae Alkaloids and Natural Analogues Isolated in the Last Decade. Amaryllidaceae are bulbous wild and cultivated plants well known for their beautiful flowers and pharmaceutical applications, essentially due to their alkaloid and flavonoid content. Hundreds of alkaloids have been isolated to date, and several scientific publications have reported their sources, chemical structures, and biological activities. During the last decade, some unstudied Amaryllidaceae plants were the object of in-depth investigations to isolate and chemically and biologically characterize new and already known alkaloids as well as some analogues. This review describes the isolation and chemical and biological characterization of the Amaryllidaceae alkaloids, and their analogues obtained in the last decade, focusing the discussion on the new ones. abstract_id: PUBMED:37105152 Public attitudes towards protecting the human rights of people with mental illness: a scoping review and data from a population trend study in Germany.
The human rights of people with mental illness are constantly threatened. We conduct a scoping review showing how public attitudes towards protecting human rights have so far been examined and providing an overview of our present knowledge of these attitudes, and present novel findings from a trend study in Germany over nine years, reporting attitudes elicited in 2020 and examining whether these attitudes have changed since 2011. Few studies address attitudes towards human rights explicitly, but several studies contain single items on first generation human rights, mainly concerning involuntary admission, or civil liberties like the right to vote, or second generation human rights, mainly with regard to funding for healthcare, but also for example regarding career choice. Recent data from Germany showed little improvement in attitudes towards protecting human rights over the last decade and particularly high support for restricting job opportunities for people with mental illness. Although generally, most restrictions were supported by a minority of respondents only, both our data and our scoping review indicate substantial support for several restrictions in several countries, showing that public attitudes pose a challenge to the human rights of persons with mental illness. We discuss possible lines of future research. abstract_id: PUBMED:19252160 Public attitudes towards people with mental illness in England and Scotland, 1994-2003. Background: Understanding trends in public attitudes towards people with mental illness informs the assessment of ongoing severity of stigma and evaluation of anti-stigma campaigns. Aims: To analyse trends in public attitudes towards people with mental illness in England and Scotland using Department of Health Attitudes to Mental Illness Surveys, 1994-2003. Method: We analysed trends in attitudes for 2000 respondents in each survey year (6000 respondents in 1996 and 1997) using quota sampling methods and the adapted Community Attitudes Toward the Mentally Ill scale. Results: Comparing 2000 and 2003, there was significant deterioration for 17/25 items in England and for 4/25 items in Scotland. Neither country showed significant improvements in items between 2000 and 2003. Conclusions: Public attitudes towards people with mental illness in England and Scotland became less positive during 1994-2003, especially in 2000-2003, and to a greater extent in England. The results are consistent with early positive effects for the 'see me' anti-stigma campaign in Scotland. abstract_id: PUBMED:37204075 Older people's attitudes towards emerging technologies: A systematic literature review. Public attitudes towards technology have been studied extensively for decades, but older people were largely not involved in early studies. In recent years, with the trend of digitalisation and the rapid growth of the older population around the world, the attitudes of older people towards emerging technologies have attracted the attention of researchers. This article is a systematic review of 83 relevant studies to summarise the factors that impact older adults' attitudes towards adopting and using technology. It is found that older people's attitudes are influenced by their personal characteristics, technology-related factors and the social context of technology adoption.
The complex relationship between older people and technology is interpreted by researchers through the framing of older people's identity, the role of technology, the interaction of the above factors and the opportunity for older adults to act as co-designers. abstract_id: PUBMED:22473969 Public attitudes towards people with intellectual disabilities: a qualitative comparison of white British & South Asian people. Background: National and international policies promote the acceptance, integration and inclusion of people with intellectual disabilities into mainstream society. However, there is little systematic research into general population attitudes towards people with intellectual disabilities, and even less research that considers the impact of culture on attitudes. The aim of this study was to explore how young people from White British and South Asian backgrounds differ in their attitudes towards people with intellectual disabilities and, above all, how they arrive at their beliefs. Materials And Method: A qualitative design utilizing focus groups and individual interviews with White British and South Asian adolescents aged 16-19 years (N = 61) was employed. Questionnaire data were collected to compare this sample to findings from a larger study run concurrently (Attitudes to people with intellectual disabilities: a cross cultural study. Thesis, University College London). Interview and focus group data were analysed using thematic analysis. Results: Thematic analysis yielded five themes and pointed to widespread confusion about the concept of 'intellectual disability', not helped by the continuing invisibility of people with intellectual disabilities in the media. Participants expressed many positive beliefs, yet closer analysis revealed that underlying these may be more ambivalent or even hostile attitudes. Key differences between the two cultural groups are discussed. Conclusions: The findings highlight the need for raising public awareness and the importance of culturally sensitive support. abstract_id: PUBMED:23553695 Public knowledge, attitudes and behaviour regarding people with mental illness in England 2009-2012. Background: Public stigma against people with mental health problems is damaging to individuals with mental illness and is associated with substantial societal burden. Aims: To investigate whether public knowledge, attitudes and behaviour in relation to people with mental health problems have improved among the English population since the inception of the Time To Change programme in 2009. Method: We analysed longitudinal trends in public knowledge, attitudes and behaviour between 2009 and 2012 among a nationally representative sample of English adults. Results: There were improvements in intended behaviour (0.07 standard deviation units, 95% CI 0.01-0.14) and a non-significant trend for improvement in attitudes (P = 0.08) among the English population. There was, however, no significant improvement in knowledge or reported behaviour. Conclusions: The findings provide support for the effectiveness of the national Time to Change programme against stigma and discrimination in improving attitudes and intended behaviour, but not knowledge, among the public in England. abstract_id: PUBMED:22845699 Stigma, public awareness about intellectual disability and attitudes to inclusion among different ethnic groups.
Background: Attitudes to the inclusion of people with intellectual disabilities (IDs) have been studied extensively, yet evidence on public awareness about ID and stigma is limited. The relationship between attitudes, knowledge and stigma associated with ID is poorly understood. The present study examined these factors and the relationships between them in the context of a multicultural society. Method: UK residents of working age (n = 1002) were presented with a diagnostically unlabelled vignette of someone with a mild ID. They were asked to label the difficulties presented and to complete measures of social distance and attitudes to the inclusion of people with IDs. Results: While attitudes to the inclusion of people with IDs were relatively positive overall, social contact was viewed with ambivalence. Inclusion attitudes and social distance were only moderately correlated. Across the whole sample 28% recognised typical symptoms of mild ID. Recognition of ID was associated with lower stigma and more positive attitudes than attribution of the difficulties presented to other causes. White Westerners showed increased knowledge, lower stigma and favoured inclusion more than participants from ethnic minorities. Among the latter group, Asians showed lower stigma and attitudes more in line with inclusion policies than participants of Black African/Caribbean backgrounds. Once a host of contextual factors were considered jointly, only contact was consistently associated with the variables measured. Conclusions: Stigma associated with ID is of concern across all ethnic groups, although it appears to be increased among the public from ethnic minorities. Given that contact and awareness are associated with reduced stigma, they should be considered as prime foci for efforts to tackle ID stigma. The current findings serve as a baseline for attempts to increase public awareness and tackle stigma. abstract_id: PUBMED:26615405 The public debate on psychotropic medication and changes in attitudes 1990-2011. Over the last 25 years, the appraisal of psychotropic drugs within the scientific community and their representation in the media have changed considerably. The initial optimism in the wake of the introduction of second-generation drugs has increasingly made room for a more critical evaluation of alleged advantages of these drugs. The question arises as to what extent this is reflected in similar changes in the public's attitudes towards psychiatric medication. Three representative population surveys on attitudes towards psychotropic medication were carried out in Germany in 1990 (N = 3075), 2001 (N = 2610) and 2011 (N = 1223), using the same sampling procedure, interview mode and instrument for assessing attitudes. In order to disentangle time-related effects, an age-period-cohort analysis was performed. Over the time period of 21 years, the German public's evaluation of psychotropic medication has become markedly more favourable. This change was mostly due to a period effect, i.e. concurrent influences of the social environment people are exposed to. Changes were much more pronounced in the 1990s, while over the following decade only a small, although statistically significant, increase in the favourable appraisal of medication was found. Age and birth cohort had only a minor effect on public attitudes. Our findings suggest that changes in the evaluation of the effects of psychotropic drugs within the psychiatric community and their representation in the media also affect public opinion.
Given the ongoing debate about the side effects and efficacy of psychiatric medication, future changes in public opinion can be expected. Answer: Based on the provided abstracts, public attitudes towards people with depression have not shown a significant improvement over the last decade. The study conducted in Germany in 2001, using the same methodology as a previous survey in 1990, found that the public's desire for social distance from people with depression was as strong in 2001 as it had been in 1990. There was an increase in the readiness to feel pity and a slight increase in the tendency to react aggressively, but the expression of fear remained unchanged. The conclusion drawn was that the hypothesis of a change for the better regarding public attitudes towards depressed people is not supported by the findings, indicating that further efforts are needed to reach the goal of improving public attitudes (PUBMED:15555711). Additionally, a trend study in Germany over nine years, reporting attitudes elicited in 2020 and examining whether these attitudes have changed since 2011, showed little improvement in attitudes towards protecting human rights over the last decade. There was particularly high support for restricting job opportunities for people with mental illness, suggesting that public attitudes still pose a challenge to the human rights of persons with mental illness (PUBMED:37105152). In England and Scotland, public attitudes towards people with mental illness became less positive during 1994-2003, especially in 2000-2003, and to a greater extent in England. This suggests that there was a deterioration in public attitudes towards people with mental illness, which includes depression (PUBMED:19252160). Overall, these abstracts suggest that public attitudes towards people with depression have not significantly improved over the last decade, and in some cases, attitudes may have even become less positive.
Instruction: Adipocytes Viability After Suction-Assisted Lipoplasty: Does the Technique Matter? Abstracts: abstract_id: PUBMED:27194429 Adipocytes Viability After Suction-Assisted Lipoplasty: Does the Technique Matter? Background: Suction-assisted lipoplasty (SAL; liposuction) is an established aesthetic procedure in plastic surgery. The main parameters differentiating one method of lipoplasty from another are safety, consistency of results, and other more technical parameters. Due to the recent popularity of lipotransfer, the quality of extracted fat has become a relevant parameter. We compare the viability of extracted adipocytes after dry SAL, hyper-tumescent PAL (power-assisted lipoplasty), and water-assisted lipoplasty (WAL). Methods: We used fluorescent microscopy to differentiate viable from necrotic/apoptotic cells after liposuction using each of the mentioned methods. Results: The ratio of living cells between the three methods was significantly different, with dry liposuction yielding inferior ratios (p = 0.011). When omitting extreme results, we found that the body-jet technique (WAL) yielded higher ratios of living cells than the hyper-tumescent technique (p < 0.001). The total number of cells was highest in the hyper-tumescent method (p = 0.013). Conclusions: Our results indicate that the hyper-tumescent technique yields the highest number of cells, whereas the body-jet technique yields the highest ratio of living cells. The dry technique is clearly inferior to both. abstract_id: PUBMED:29672782 Comparison of the regenerative cytokine profile of adipose-derived stromal cells (ASCs) harvested via abdominoplasty, suction-assisted liposuction (SAL) and ultrasound-assisted liposuction (UAL). Introduction: Mesenchymal stem cells are already used in numerous areas of medicine. In the field of plastic surgery, the main focus of recent research has been to utilise adipose-derived stromal cells (ASCs), mainly harvested via liposuction. The ability to release regenerative cytokines is thought to be the key mechanism for the therapeutic efficacy of ASCs. There are currently a variety of methods and devices in clinical use for harvesting lipoaspirate and consequently ASCs. For most of these devices, the effect on the regenerative cytokine profile of ASCs is unknown. The aim of this study is to compare the expression of key regenerative cytokines of ASCs derived by three standard procedures of plastic surgery: abdominoplasty, suction-assisted liposuction (SAL) and ultrasound-assisted liposuction (UAL). Patients And Methods: UAL and SAL lipoaspirates were obtained from 3 healthy female volunteers before abdominoplasty was performed in the same patients. ASCs were isolated from UAL and SAL lipoaspirates as well as from abdominoplasty resections, and RNA expression of important regenerative cytokines (HGF (hepatocyte growth factor), FGF-2 (basic fibroblast growth factor), MCP-1 (monocyte chemotactic protein 1), SDF-1 (stromal cell-derived factor 1) and VEGF (vascular endothelial growth factor)) was compared. Results: There was no difference in the expression of HGF, FGF-2, SDF-1 and VEGF between the individual samples, but we were able to demonstrate significantly increased expression of MCP-1 in ASCs obtained from UAL lipoaspirate.
Conclusion: UAL and SAL lipoaspirates are suitable for obtaining functional ASCs and are comparable to minimally manipulated cells from abdominoplasty resections. abstract_id: PUBMED:38212268 A newly developed surgeon-controlled suction device in robotic-assisted thoracoscopic surgery. Introduction: Assistant surgeons usually clean the surgical field with a suction cannula in robotic-assisted surgery. This manipulation requires skill and experience to avoid interfering with the operation of the console surgeon. Recently, we created a new suction device that a console surgeon can manipulate with the robotic arms. Materials And Surgical Technique: The device consists of a small metal suction tip with a lumen and small side pores for suction, which can be connected to a silicone tube attached to wall suction. The tip of the silicone tube can be grasped with robotic forceps and used for organ retraction as well as suction. The suction device has been used in eight lung lobectomy cases and four lung segmentectomy cases to date. There were no major difficulties related to the new suction device except for metal tip disconnection and clogging by blood clots. Discussion: Our newly developed surgeon-controlled suction device is inexpensive, easy to handle, and useful for suction, blunt dissection, and organ retraction in robotic-assisted thoracoscopic surgery, especially when performing lymph node dissection. abstract_id: PUBMED:34043750 Fate of Fat Grafting In Vivo and In Vitro: Does the Suction-Assisted Lipectomy Device Matter? Background: Recently, there has been increasing research interest in identifying the effect of liposuction procedures on fat graft survival in order to clarify whether different harvest techniques affect the quality of fat grafts. Objectives: The aim of this study was to investigate the effect of 2 liposuction methods on the survival and regeneration potential of grafted fat tissue. The proliferation and differentiation potentials of adipose-derived stem cells (ASCs) isolated by both methods were also investigated. Methods: Fat grafts were collected from patients who underwent liposuction procedures by 2 different methods: traditional suction-assisted liposuction (TSAL) and vibration amplification of sound energy at resonance (VASER). One portion of the lipoaspirates was implanted into the subcutaneous layer of nu mice for 4 and 12 weeks. ASCs were isolated from the other portion of the lipoaspirate and subjected to proliferation and differentiation assays. Results: Although in vivo fat grafting presented similar adipose tissue survival for the 2 different liposuction methods, more angiogenesis and less fibrosis were observed in the VASER group based on histologic evaluation. Furthermore, VASER-derived ASCs presented better quality in terms of cell differentiation capacity. Conclusions: The in vivo study confirmed better graft angiogenesis with less inflammation, apoptosis, and scar formation in the VASER group. ASCs harvested with VASER exhibited increased differentiation capacity compared with those obtained by TSAL, and represent an excellent source for fat grafting and regenerative medicine. abstract_id: PUBMED:27114181 Metabolic and cardiovascular consequences of suction-assisted lipectomy: Systematic review. Background: Suction-assisted lipectomy is one of the most frequent procedures in plastic surgery. The aim of this study was to investigate whether suction-assisted lipectomy causes changes in carbohydrate and lipid metabolism, and to assess the potential effects on cardiovascular risk factors.
Methods: We searched five databases: Medline, the American College of Physicians Journal Club Database, the Cochrane Central Register of Controlled Trials, the Cochrane Database of Systematic Reviews, and the Database of Abstracts of Reviews of Effects. A systematic review of the literature was performed in order to compare results of randomized controlled trials and observational studies concerning changes in weight, metabolism, endocrinology, inflammatory markers and cardiovascular risk factors after suction-assisted lipectomy. All articles were assessed by criteria from the Oxford Centre for Evidence-Based Medicine (OCEBM). Results: The search resulted in 40 articles: 12 experimental animal studies and 28 human studies. Conclusion: Different metabolic parameters are affected by suction-assisted lipectomy. First, all articles point out a decrease in body weight after suction-assisted lipectomy. Weight loss only affects fat mass, without any change in lean mass. The potential compensatory growth of visceral fat seems to be counteracted by physical activity. Resting energy expenditure seems to remain stable or decrease after the surgery; this reduction is significantly related to the decrease in leptin levels and also seems to be counteracted by physical activity. Regarding adipocytokines, leptin levels decrease after suction-assisted lipectomy, while results are contradictory for adiponectin and resistin levels; however, adiponectin tends to increase after surgery. Inflammatory markers seem to increase within the first hours after surgery and then decrease or remain at preoperative levels. Fasting insulin levels decrease, and this decrease is linked to the aspirated volume; thus, insulin sensitivity seems to be improved. The lipid profile tends to remain the same or to be improved by suction-assisted lipectomy. In conclusion, across the literature, the metabolic effects of suction-assisted lipectomy are still debated. Prospective clinical studies are needed to confirm or invalidate some hypotheses. These studies must consider potential biases such as physical activity, diet and medical treatment modifications (statins). abstract_id: PUBMED:36729108 Syringe Suction Yields Greater Adipocyte Viability for Fat Grafting: Fact or Fiction? Abstract: Autologous fat grafting is a technique that can be used for cosmetic and reconstructive indications such as oncologic defects, aging, trauma, and congenital malformations. However, there is no standardized technique, and one of the main challenges is the unpredictable rate of fat resorption. When using fat grafting, it is crucial to understand the different factors that contribute to adipocyte viability. A literature search, using PubMed, was conducted in 2022 with variations of the terms “autologous fat grafting,” “fat harvesting,” “fat processing,” and “fat injection.” Articles in the English language that presented original data about different factors that may affect adipocyte viability for fat grafting were included in this review. Syringe suction harvests (lower pressures), compared with other methods with higher pressures, were found to have increased adipocyte counts and viability, but this did not translate clinically during in vivo studies. The studies have shown that, despite our efforts in optimizing fat harvest, processing, and injection, no statistical or clinical differences have been found. Additional studies are still needed to determine a universal protocol for optimal fat graft survival.
abstract_id: PUBMED:31821809 Suction Ventilation for Uniportal Video-Assisted Thoracic Surgery Without Endotracheal Intubation. The aim of this study is to describe the technique of pulmonary expansion by suction control from a surgical wound during nonintubated uniportal video-assisted thoracic surgery. Five patients developed oxygen desaturation intraoperatively even with facial mask ventilation. Oxygenation was restored after 3 minutes with the suction technique through the uniportal wound. All the patients underwent the operation under spontaneous ventilation, without conversion to endotracheal intubation. The suction ventilation technique can provide an alternative solution for episodes of oxygen desaturation during nonintubated video-assisted thoracic surgery. abstract_id: PUBMED:19434444 Suction-assisted lipectomy for treatment of lower-extremity lymphedema. Lymphedema typically occurs on the extremities and affects millions of people throughout the world. Although currently there is no single treatment proven effective for lymphedema in every patient, suction-assisted lipectomy has been shown to be effective in some patients. Suction-assisted lipectomy offers patients with lower-extremity lymphedema a less invasive, less morbid surgical option compared with traditional excisional techniques. In this article we present a case of lymphedema reduction with suction-assisted lipectomy in a patient with bilateral lower-extremity lymphedema. abstract_id: PUBMED:24223006 The suction-assisted curettage procedure. Surgical soft tissue debridement of an inflamed or necrotic wound can be a challenging, time-consuming exercise, often with disproportionate blood loss. Suction-assisted curettage is a safe, simple and effective technique for these common, frustrating problems. abstract_id: PUBMED:7484471 Viability of fat obtained by syringe suction lipectomy: effects of local anesthesia with lidocaine. The results of transplantation of free autologous fat obtained by blunt syringe suction lipectomy are unpredictable. We examined whether adipose tissue viability is compromised by using syringe suction lipectomy and by infiltration of the tissue with local anesthetics. As reference, we used adipose tissue samples excised during elective surgery. Fat obtained intraoperatively and by lipectomy was digested with collagenase to isolate adipocytes. The mechanical damage associated with sample handling and cell isolation in both procedures was similar and did not exceed 6% of the total cell mass. In addition, cells isolated from intraoperative and lipectomy samples did not differ functionally, responded similarly to insulin stimulation of glucose transport and epinephrine-stimulated lipolysis, and retained the same growth pattern in culture. Since during fat transplantation the graft is exposed to local anesthetics at both the donor and the recipient sites, we reexamined adipocyte function in the presence of lidocaine. Lidocaine potently inhibited glucose transport and lipolysis in adipocytes and their growth in culture. That effect, however, persisted only as long as lidocaine was present; after washing, the cells were able to fully regain their function and growth regardless of whether the exposure was as short as 30 minutes or as long as 10 days. These results indicate that adipose tissue obtained by syringe lipectomy consists of fully viable and functional adipocytes, but local anesthetics may halt their metabolism and growth.
Answer: Yes, the technique used in suction-assisted lipoplasty (SAL) does matter for adipocyte viability. A study comparing the viability of extracted adipocytes after dry SAL, hyper-tumescent power-assisted lipoplasty (PAL), and water-assisted lipoplasty (WAL) found significant differences in the ratio of living cells among these methods. Dry liposuction yielded inferior ratios of living cells, while the body-jet technique (WAL) produced higher ratios of living cells compared to the hyper-tumescent technique. However, the hyper-tumescent technique resulted in the highest total number of cells (PUBMED:27194429). Furthermore, the viability of adipocytes is also influenced by the pressure used during the harvest. Syringe suction, which uses lower pressures, has been found to have increased adipocyte counts and viability compared to methods that use higher pressures. Despite this, clinical outcomes in in vivo studies did not show statistical or clinical differences, indicating that more research is needed to establish a universal protocol for optimal fat graft survival (PUBMED:36729108). Additionally, the viability and functionality of adipocytes are not compromised by the use of syringe suction lipectomy, and the mechanical damage associated with sample handling and cell isolation is similar to that of adipose tissue samples excised during elective surgery. However, local anesthetics like lidocaine can temporarily inhibit adipocyte function and growth, although this effect is reversible once the anesthetic is washed away (PUBMED:7484471). In terms of regenerative potential, a comparison of adipose-derived stromal cells (ASCs) harvested via abdominoplasty, SAL, and ultrasound-assisted liposuction (UAL) showed no difference in the expression of key regenerative cytokines except for a significantly increased expression of MCP-1 in ASCs obtained from UAL lipoaspirate. This suggests that UAL and SAL lipoaspirates are suitable for obtaining functional ASCs (PUBMED:29672782). Lastly, a study investigating the effect of two liposuction methods, traditional suction-assisted liposuction (TSAL) and vibration amplification of sound energy at resonance (VASER), on the survival and regeneration potential of grafted fat tissue found that VASER-derived ASCs presented better quality in terms of cell differentiation capacity.
Instruction: Does competitive food and beverage legislation hurt meal participation or revenues in high schools? Abstracts: abstract_id: PUBMED:22867073 Does competitive food and beverage legislation hurt meal participation or revenues in high schools? Background: There is limited evidence to evaluate the influence of competitive food and beverage legislation on school meal program participation and revenues. Methods: A representative sample of 56 California high schools was recruited to collect school-level data before (2006–2007) and the year after (2007–2008) policies regarding limiting competitive foods and beverages were required to be implemented. Data were obtained from school records, observations, and questionnaires. Paired t-tests assessed significance of change between the two time points. Results: Average participation in lunch increased from 21.7% to 25.3% (p < 0.001), representing a 17.0% increase, while average participation in breakfast increased from 8.9% to 10.3% (p = 0.02), representing a 16.0% increase. There was a significant (23.0%) increase in average meal revenue, from $0.70 to $0.86 (per student per day) (p < 0.001). There was a nonsignificant decrease (18.0%) in average sales from à la carte foods, from $0.45 to $0.37 (per student per day). Compliance with food and beverage standards also increased significantly. At end point, compliance with beverage standards was higher (71.0%) than compliance with food standards (65.7%). Conclusion: Competitive food and beverage legislation can increase food service revenues when accompanied by increased rates of participation in the meal program. Future studies collecting expense data will be needed to determine impact on net revenues. abstract_id: PUBMED:22867067 Understanding school food service characteristics associated with higher competitive food revenues can help focus efforts to improve school food environments. Many school food services sell extra foods and beverages, popularly referred to as “competitive foods,” in addition to USDA school meals. On the basis of national survey data, most competitive foods and beverages selected by students are of low nutritional value. Recent federal legislation will allow schools that participate in USDA school meal programs to sell competitive foods only if the food items they sell meet nutrition standards based on the Dietary Guidelines for Americans. Concerns have been raised about the potential effects of limiting competitive foods on local school food service finances. However, national data indicate that only in a subset of schools do food services receive large amounts of revenues from competitive foods. These food services are typically located in secondary schools in more affluent districts, serving higher proportions of students who do not receive free or reduced price meals. Compared to other food services, these food services couple higher competitive food revenues with lower school meal participation. Increasing school meal participation could increase meal revenues to offset any loss of competitive food revenues. Replacing less-healthful competitive items with healthier options could also help maintain school food service revenues while improving the school food environment. Nationally consistent nutrition standards for competitive foods may encourage development and marketing of healthful products. abstract_id: PUBMED:27147133 Healthier Standards for School Meals and Snacks: Impact on School Food Revenues and Lunch Participation Rates. 
Introduction: In 2012, the updated U.S. Department of Agriculture school meals standards and a competitive food law similar to the fully implemented version of the national Smart Snacks standards went into effect in Massachusetts. This study evaluated the impact of these updated school meal standards and Massachusetts' comprehensive competitive food standards on school food revenues and school lunch participation. Methods: Revenue and participation data from 11 Massachusetts school districts were collected from 2011 to 2014 and analyzed in 2015 using multilevel modeling. The association between the change in compliance with the competitive food standards and revenues/participation was assessed using linear regression. Results: Schools experienced declines in school food revenues of $15.40/student in Year 1 from baseline (p=0.05), due to competitive food revenue losses. In schools with 3 years of data, overall revenues rebounded by the second year post-implementation. Additionally, by Year 2, school lunch participation increased by 15% (p=0.0006) among children eligible for reduced-price meals. Better competitive food compliance was inversely associated with school food revenues in the first year only; an absolute change in compliance by 10% was associated with a $9.78/student decrease in food revenues over the entire school year (p=0.04). No association was seen between the change in compliance and school meal participation. Conclusions: Schools experienced initial revenue losses after implementation of the standards, yet longer-term school food revenues were not impacted and school meal participation increased among children eligible for reduced-price meals. Weakening the school meal or competitive food guidelines based on revenue concerns appears unwarranted. abstract_id: PUBMED:26210085 Implementation of Competitive Food and Beverage Standards in a Sample of Massachusetts Schools: The NOURISH Study (Nutrition Opportunities to Understand Reforms Involving Student Health). Background: During 2012, Massachusetts adopted comprehensive school competitive food and beverage standards that closely align with Institute of Medicine recommendations and Smart Snacks in School national standards. Objective: We examined the extent to which a sample of Massachusetts middle schools and high schools sold foods and beverages that were compliant with the state competitive food and beverage standards after the first year of implementation, and complied with four additional aspects of the regulations. Design: Observational cohort study with data collected before implementation (Spring 2012) and 1 year after implementation (Spring 2013). Participants/setting: School districts (N=37) with at least one middle school and one high school participated. Main Outcome Measures: Percent of competitive foods and beverages that were compliant with Massachusetts standards and compliance with four additional aspects of the regulations. Data were collected via school site visits and a foodservice director questionnaire. Statistical Analyses Performed: Multilevel models were used to examine change in food and beverage compliance over time. Results: More products were available in high schools than middle schools at both time points. The number of competitive beverages and several categories of competitive food products sold in the sample of Massachusetts schools decreased following the implementation of the standards.
Multilevel models demonstrated a 47-percentage-point increase in food and 46-percentage-point increase in beverage compliance in Massachusetts schools from 2012 to 2013. Overall, total compliance was higher for beverages than foods. Conclusions: This study of a group of Massachusetts schools demonstrated the feasibility of schools making substantial changes in response to requirements for healthier competitive foods, even in the first year of implementation. abstract_id: PUBMED:23678930 Impact of Connecticut legislation incentivizing elimination of unhealthy competitive foods on National School Lunch Program participation. Objectives: We analyzed the impact of Connecticut legislation incentivizing voluntary school district-level elimination of unhealthy competitive foods on National School Lunch Program (NSLP) participation. Methods: We analyzed data on free, reduced, and paid participation in the NSLP from 904 schools within 154 Connecticut school districts from the 2004-2005 to the 2009-2010 school year, resulting in 5064 observations of annual school-level meal participation. We used multilevel regression modeling techniques to estimate the impact of the state competitive food legislation on the count of NSLP lunches served per student in each school. Results: Overall, the state statute was associated with an increase in school lunch participation. We observed increases between 7% and 23% for middle- and high-school meal programs, and a slight decrease of 2.5% for the elementary school free meal eligibility category, leading to an estimated revenue increase of roughly $30 000 for an average school district per school year. Conclusions: This study provides support for national implementation of proposed rigorous competitive food standards that can improve the health of students while supporting local school district finances. abstract_id: PUBMED:28895921 Food and Beverage Marketing in Schools: A Review of the Evidence. Despite growing interest from government agencies, non-governmental organizations and school boards in restricting or regulating unhealthy food and beverage marketing to children, limited research has examined the emerging knowledge base regarding school-based food and beverage marketing in high-income countries. This review examined current approaches for measuring school food and beverage marketing practices, and evidence regarding the extent of exposure and hypothesized associations with children's diet-related outcomes. Five databases (MEDLINE, Web of Science, CINAHL, Embase, and PsycINFO) and six grey literature sources were searched for papers that explicitly examined school-based food and beverage marketing policies or practices. Twenty-seven papers, across four high-income countries including Canada (n = 2), Ireland (n = 1), Poland (n = 1) and United States (n = 23) were identified and reviewed. Results showed that three main methodological approaches have been used: direct observation, self-report surveys, and in-person/telephone interviews, but few studies reported on the validity or reliability of measures. Findings suggest that students in the U.S. are commonly exposed to a broad array of food and beverage marketing approaches including direct and indirect advertising, although the extent of exposure varies widely across studies. More pervasive marketing exposure was found among secondary or high schools compared with elementary/middle schools and among schools with lower compared with higher socio-economic status. 
Three of five studies examining diet-related outcomes found that exposure to school-based food and beverage marketing was associated with food purchasing or consumption, particularly for minimally nutritious items. There remains a need for a core set of standard and universal measures that are sufficiently rigorous and comprehensive to assess the totality of school food and beverage marketing practices that can be used to compare exposure between study contexts and over time. Future research should examine the validity of school food and beverage marketing assessments and the impacts of exposure (and emerging policies that reduce exposure) on children's purchasing and diet-related knowledge, attitudes and behaviors in school settings. abstract_id: PUBMED:30639201 Assessing exposure to food and beverage advertisements surrounding schools in Vancouver, BC. Recent policy initiatives call for restricting food marketing to children, yet little is known about children's current exposure to outdoor advertisements. This paper describes the prevalence and characteristics of food- or beverage-related advertisements surrounding 25 public elementary and secondary schools in Vancouver, Canada and assesses whether the informational food environment differs by neighbourhood or school characteristics. All but four schools had at least one food- or beverage-related advertisement within 400 m (median: 18, range: 0-96) and approximately 90% of food or beverage advertisements were for items not recommended for frequent consumption by provincial school food guidelines. After controlling for commercial density, secondary schools were associated with more outdoor food and beverage advertisements overall in comparison with elementary schools. The presence of an additional limited-service food outlet within 400 m was associated with a 7% increase in the number of overall advertisements (p < 0.001) while an additional grocery store was associated with fewer advertisements (IRR: 0.69, p < 0.001), controlling for commercial density. Findings suggest the need to consider the informational food environment as part of broader assessments of the school and retail food environments. abstract_id: PUBMED:31264235 The Impact of Local Food Expenditure on School Foodservice Revenues. Background: Locally grown foods, through farm-to-school (FTS) activities, may be a key component to balancing foodservice budgets and alleviating financial constraints in school districts. Therefore, the purpose of this study is to examine the impact of local food expenditures on school foodservice revenues and earnings. We anticipated a positive impact of local food expenditures on foodservice revenues and earnings. Methods: Ordinary Least Squares (OLS) regression analysis was conducted using data from the 2013 US Department of Agriculture Farm to School Census. The questionnaire primarily asked all US public school districts about their FTS operations during 2011-2012 school year. Results: Although our results initially showed a negative impact of local milk and nonmilk expenditures on foodservice revenues from food sales, when combined with revenues from the federal government, the impact is positive. The positive effect seems to hold when adding foodservice revenues from both food sales and federal funds. Our study found a similar pattern for foodservice earnings. Conclusions: This may indicate that competitive foods are still widely preferred in school districts. 
Revenue from the federal government is critical to maintain FTS activities viable to students and community members although federal funds and food sales may not cover total foodservice expenditures. abstract_id: PUBMED:31409436 Impact of the Community Eligibility Provision on meal counts and participation in Pennsylvania and Maryland National School Lunch Programs. Objective: To determine whether school-level participation in the federal Community Eligibility Provision (CEP), which provides free school lunch to all students, is associated with school meal participation rates. Participation in school meals is important for decreasing food insecurity and improving child health and well-being. Design: Quasi-experimental evaluation using negative binomial regression to predict meal count rates per student-year overall and by reimbursement level adjusted for proportion eligible for free and reduced-price lunch (FR eligibility) and operating days. Setting: Schools (grades kindergarten to 12th) participating in the National School Lunch Program (NSLP) in Maryland and Pennsylvania, USA, from the 2013-2015 (n 1762) and 2016-2017 (n 2379) school years. Participants: Administrative, school-level data on school lunch counts and student enrolment. Results: CEP was associated with a non-significant 6 % higher total NSLP meal count adjusting for FR eligibility, enrolment and operating days (rate ratio = 1·06, 95 % CI 0·98, 1·14). After controlling for participation rates in the year prior to CEP implementation, the programme was associated with a significant 8 % increase in meal counts (rate ratio = 1·08, 95 % CI 1·03, 1·12). In both analyses, CEP was associated with lower FR meal participation and substantial increases in paid meal participation. Conclusions: School-level implementation of CEP is associated with increases in total school meal participation. Current funding structures may prevent broader adoption of the programme by schools with fewer students eligible for FR meals. abstract_id: PUBMED:28277028 Targeting Hispanic adolescents with outdoor food & beverage advertising around schools. Objectives: Although some research has focused on the food environment and food marketing, little has examined outdoor food and beverage (FB) advertising, particularly its relationship to the Hispanic composition in schools. Therefore, the purpose of this study was to determine if the prevalence of outdoor FB advertising was greater around middle and high schools with a majority Hispanic population as compared to schools with a lower Hispanic population. Design: All FB advertisements located within a half-mile of 47 schools in Central Texas were documented. Advertisements were coded as free standing or on establishments. Advertisements were coded for theme including price (emphasizing price) and deals/value meals (promoting discounted price/meal deals). These two themes were combined to create an overall price promotion variable. In order to determine if the prevalence of FB advertising varied by the Hispanic composition of the students in the school, data from the Texas Education Agency was used to create a variable which dichotomized the schools into two groups: schools that reported ≥60% Hispanic students or 'Hispanic schools' (n = 21) and schools that reported <60% Hispanic students or 'non-Hispanic schools' (n = 26). Linear regression analyses were conducted to determine if the prevalence of outdoor FB advertising was greater around Hispanic schools as compared to non-Hispanic schools. 
Results: Hispanic schools had more overall outdoor FB advertisements as compared to non-Hispanic schools (p = 0.02). Similarly, we found significantly more outdoor FB establishment (p = 0.02) and price promotion (p = 0.05) around Hispanic schools as compared to non-Hispanic schools. Differences in freestanding advertisements by school type approached significance (p = 0.07) with Hispanic schools having more freestanding FB advertisements on average. Conclusion: Further research is needed that documents the content of these advertisements and determines the extent to which these advertisements affect Hispanic and other racial/ethnic minority youth's attitudes and behaviors toward the consumption of these products. Answer: The evidence from the provided abstracts suggests that competitive food and beverage legislation does not necessarily hurt meal participation or revenues in high schools. In fact, several studies indicate that such legislation can have positive effects. For instance, in California, after implementing policies to limit competitive foods and beverages, there was a significant increase in both lunch and breakfast participation, as well as an increase in average meal revenue, despite a nonsignificant decrease in sales from à la carte foods (PUBMED:22867073). Similarly, in Massachusetts, although schools experienced initial revenue losses after implementing updated school meal standards and competitive food standards, overall revenues rebounded by the second year post-implementation, and school lunch participation increased among children eligible for reduced-price meals (PUBMED:27147133). Moreover, the implementation of competitive food and beverage standards in Massachusetts schools led to a significant increase in compliance with these standards and demonstrated the feasibility of schools making substantial changes in response to healthier competitive foods requirements (PUBMED:26210085). In Connecticut, legislation incentivizing the elimination of unhealthy competitive foods was associated with an increase in school lunch participation, particularly in middle and high schools, leading to an estimated revenue increase for school districts (PUBMED:23678930). However, it is important to note that the impact of such legislation can vary. For example, the Community Eligibility Provision, which provides free school lunch to all students, was associated with increases in total school meal participation, but with lower free and reduced meal participation and substantial increases in paid meal participation (PUBMED:31409436). Additionally, the presence of competitive foods may still be preferred in some school districts, and revenue from the federal government is critical to maintain farm-to-school activities viable (PUBMED:31264235). Overall, the evidence suggests that while there may be initial challenges, competitive food and beverage legislation can lead to increased meal participation and stable or even increased revenues for high schools in the long term.
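Several of the abstracts above estimate policy effects on meal counts as rate ratios from count-regression models (e.g., the negative binomial design in PUBMED:31409436). The sketch below illustrates that general approach in Python with statsmodels; it is a minimal illustration only, and the data values and column names are hypothetical placeholders rather than any study's actual variables.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical school-level data; one row per school-year.
df = pd.DataFrame({
    "meal_count": [41200, 38900, 52300, 27800, 36100, 44000],  # lunches served in the year
    "enrolment": [410, 395, 505, 300, 350, 430],               # students enrolled
    "operating_days": [180, 178, 181, 179, 180, 177],
    "cep": [1, 0, 1, 0, 0, 1],                                 # CEP participation indicator
    "fr_eligible": [0.62, 0.48, 0.71, 0.35, 0.44, 0.66],       # share free/reduced-price eligible
})

# Negative binomial count model with log(student-days) as an exposure offset,
# so coefficients act on the meal rate per student-day rather than the raw count.
offset = np.log(df["enrolment"] * df["operating_days"])
fit = smf.glm(
    "meal_count ~ cep + fr_eligible",
    data=df,
    family=sm.families.NegativeBinomial(),
    offset=offset,
).fit()

# Exponentiated coefficients are rate ratios: exp(beta_cep) near 1.08 would
# correspond to the reported "8% increase in meal counts".
print(np.exp(fit.params))

Reading the exponentiated policy coefficient as a rate ratio is what allows a result such as "rate ratio = 1.08, 95% CI 1.03, 1.12" to be interpreted as an 8% increase in meals served per student-day.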
Instruction: Are age-related differences in the consequence of work injury greater when occupational physical demands are high? Abstracts: abstract_id: PUBMED:24464769 Are age-related differences in the consequence of work injury greater when occupational physical demands are high? Background: To examine if age differences in the consequences of work injury are exacerbated when occupational physical demands are higher. Methods: A secondary analysis of workers' compensation claims in British Columbia (N = 373,672). Regression models examined the relationship between age and health care expenditures, days of wage replacement and the occurrence of long-term-disability following a work-related injury in occupations with lower and higher physical demands. Models were adjusted for individual and injury related covariates. Results: Older age and higher occupational physical demands were associated with worse work-injury outcomes. The relationship between age and each outcome was not exacerbated when occupational physical demands were higher compared to when they were lower. Counter to our hypotheses age differences in health care expenditures were smaller among women in more demanding occupations. Conclusions: In this study, we found no evidence that the relationship between age and the consequences of work injury is exacerbated when physical occupational demands are high. abstract_id: PUBMED:25168227 Age, occupational demands and the risk of serious work injury. Background: Interest in the relationship between age and serious work injury is increasing, given the ageing of the workforce in many industrialized economies. Aims: To examine if the relationship between age and risk of serious musculoskeletal injury differs when the physical demands of work are higher from those when they are lower. Methods: A secondary analysis of workers' compensation claims in the State of Victoria, Australia, combined with estimates of the insured labour force. We focused on musculoskeletal claims, which required 10 days of absence or health care expenditures beyond a pecuniary threshold. Regression models examined the relationship between age and claim-risk across workers with different occupational demands, as well as the relationship between occupational demands and musculoskeletal claim-risk across different age groups. Results: Older age and greater physical demands at work were associated with an increased risk of musculoskeletal claims. In models stratified by occupational demands, we observed the relationship between age and claim-risk was steeper when occupational demands were higher. We also observed that the relationship between occupational demands and risk of work injury claim peaked among workers aged 25-44, attenuating among those aged 45 and older. Conclusions: This study's results suggest that although older workers and occupations with higher demands should be the targets of primary preventive efforts related to serious musculoskeletal injuries, there may also be gains in targeting middle-aged workers in the most physically demanding occupations. abstract_id: PUBMED:15317915 Examining the associations between physical work demands and work injury rates between men and women in Ontario, 1990-2000. Aims: To describe the decline in injury rates between 1990 and 2000 within occupations stratified across three levels of physical demands and gender, adjusting for industry, in Canada's largest province. 
Methods: Records of injury compensation claims were obtained from the Ontario Workplace Safety & Insurance Board. The population likely to be insured by the Ontario Workplace Safety & Insurance Board was estimated from Statistics Canada's Labour Force Survey. Injury rates were calculated by three broad levels of physical demands, separately for men and women. Results: Injury rates decreased across each grouping of lower physical demands at work for both men and women, with the largest absolute differences in manual occupational groups (high physical demands). Occupations classified as manual (high physical demands) and mixed (moderate physical demands) showed larger differences in injury rates between genders than did non-manual (low physical demands), although the directions of these differences were not always consistent across different natures of injury classification. Conclusions: The absolute reduction in injury rates in Ontario between 1990 and 2000 was dominated by the reduction in injury rates for men and women in manual and mixed occupations. However, not all types of injury have declined to the same extent. A large proportion of differences in injury rates between men and women can be attributed to the differential labour force participation across occupations and industries, as well as the differential tasks within occupational groups. abstract_id: PUBMED:19504176 Work-related musculoskeletal disorders and injuries: differences among older and younger occupational and physical therapists. Introduction: Occupational and physical therapists are at significant risk of work-related musculoskeletal disorders and injuries. As the therapy workforce ages, and shortages of therapists are predicted, it is important to evaluate differences in injuries and injury behaviors between older and younger therapists. Methods: Half of all occupational and physical therapists practicing in Wisconsin were randomly selected for a cross sectional study examining the relationship between aging and characteristics of self-reported work injuries and musculoskeletal symptoms. Results: Data from a sample of 1,158 therapists revealed older workers to have similar injury incidence rates, however, days away from work due to injury were higher among older workers. Overall, older and younger therapists were more similar than different, especially in regards to work behaviors. Older therapists tended to report more severe pain symptoms than younger therapists and were two and a half times more likely to report that they changed jobs due to their pain symptoms. Conclusions: Older and younger therapists have similar work-related injury experiences. Older workers may be more vulnerable to lost work time and may experience more severe pain symptoms. Many interventions exist to prevent work-related injuries to therapists and more guidance in the best way to integrate these interventions (e.g., mechanical lift devices) into therapy practice is needed. abstract_id: PUBMED:31111000 The management of work-related musculoskeletal injuries in an occupational health setting: the role of the physical therapist. With the emergence of work-related musculoskeletal disorders and the associated high cost of injured workers, physical therapists are in a unique position to help employers manage these concerns through multidisciplinary injury prevention programs, education, ergonomics, on-site treatment, and return to work programs. 
The purpose of this paper, through a review of the literature, is to describe the effect that workplace injuries have on employees and the economic burden on employers. Furthermore, this paper will highlight the effectiveness a physical therapist can have in the occupational health setting as part of a multidisciplinary team on ergonomics, employee health, prevention of work-related musculoskeletal disorders, return to work programs, and on-site treatment interventions, such as therapeutic exercise and manual intervention. This review of the literature suggests that participation in ergonomics and on-site physical therapy treatment in the workplace (e.g., education, exercise, manual intervention) has a positive effect on decreasing work-related musculoskeletal disorders, decreasing costs associated with these injuries, increasing work productivity, and decreasing absenteeism and improving presenteeism amongst workers. It is important to ensure that payers, employers, and employees are aware of the crucial role physical therapists can play in occupational health. Physical therapists also need to advocate for their role in occupational health and offer expert care to workers in a nontraditional manner. abstract_id: PUBMED:30675734 A longitudinal study of work-related psychosocial factors and injuries: Implications for the aging United States workforce. Introduction: This study aimed to identify psychosocial work factors that may individually or, in combination, influence injury outcomes among aging United States (U.S.) workers. Methods: Data from the U.S. Health and Retirement Study (HRS) of 3305 working adults, aged 50 years and above, were used to identify associations between work-related psychosocial factors and injury incidence from 2006 to 2014, using adjusted incidence rate ratios. Results: Employees perceiving their work as high in psychological and physical demands/efforts, low in support, and rewards, compared to those in workplaces with low demands, high support, and high rewards, had a risk of injury two times greater. Males, compared with females, had a greater risk for injuries when interactions among several psychosocial work-related factors were modeled. Conclusions: The fact that important gender-based differences emerged when interactions among the psychosocial factors and injury were modeled, suggests opportunities for further research and potential interventions to enhance the working environment. abstract_id: PUBMED:18543279 Physical work capacity in older adults: implications for the aging worker. Background: In many developed countries, the workforce is rapidly aging. Occupational demands however, have not decreased despite the fact that workers see a decline in physical work capacity with age. The purpose of this review is to examine the physiological adaptations to aging, the impact of aging on performance and the benefits of physical fitness in improving functional work capacity in aging individuals. Methods: An extensive search of the scientific literature was performed, acquiring published articles which examined the physiological changes associated with age-related decrements in the physical work capacity of healthy aging adults. The databases accessed included AARP Ageline, AccessScience, Annual Reviews, CISTI, Cochrane Library, Clinical Evidence, Digital Dissertations (Proquest), Embase, HealthSTAR, Medline, PubMed, Scopus, and PASCAL and included relevant information sites obtained on the world wide web. 
Results: While a great deal of variation exists, an average decline of 20% in physical work capacity has been reported between the ages of 40 and 60 years, due to decreases in aerobic and musculoskeletal capacity. These declines can contribute to decreased work capacity, and consequential increases in work-related injuries and illness. However, differences in habitual physical activity will greatly influence the variability seen in individual physical work capacity and its components. Well-organized, management-supported, work-site health interventions encouraging physical activity during work hours could potentially decrease the incidence of age-related injury and illness. Conclusions: Age-associated functional declines and the accompanying risk of work-related injury can be prevented or at least delayed by the practice of regular physical activity. Older workers could optimally pursue their careers until retirement if they continuously maintain their physical training. abstract_id: PUBMED:27518884 Work-related musculoskeletal disorders among physical therapists: A comprehensive narrative review. Healthcare workers, especially those with direct patient contact are amongst professions with the highest rate of workrelated musculoskeletal disorders (WMSDs), physical therapists (PTs) being one of them. Our objective was to review current knowledge relating to the prevalence, risk factors and prevention of WMSDs among PTs. Pubmed, Google Scholar and PEDro databases were searched for terms relating to WMSDs in PTs from inception to 2015. The prevalence of WMSDs among PTs was high, with lifetime prevalence reported as 55-91%, and 12-month prevalence ranges 40-91.3%, and the lower back as the most frequently affected, with estimates of a lifetime prevalence ranging 26-79.6%, and a 12-month prevalence ranging 22-73.1%, followed most often by the neck, upper back and shoulders. The major risk factors for workrelated low back pain (LBP) were: lifting, transferring, repetitive movements, awkward and static postures, physical load, treating a large number of patients in a single day and working while injured. Low back pain seems to be age- and genderrelated with a higher prevalence in females, younger PTs and PTs working in rehabilitation settings. Physical therapists, as a consequence of work-related LBP, may seek treatment, modify their daily living and leisure (lifestyle) activities, use aids and equipment or change their specialty area either within the profession or by leaving it. Skills and knowledge as to correct body mechanics do not prevent work-related injuries. Mechanical aids used for a patient transfer should be adopted by PTs and new strategies should be developed to reduce their WMSDs without compromising the quality of treatment. Int J Occup Med Environ Health 2016;29(5):735-747. abstract_id: PUBMED:19522144 Work-related musculoskeletal injuries and disorders among occupational and physical therapists. Occupational therapists are at risk of work-related injuries (WRIs) because of the demanding nature of their work. However, information about WRIs and musculoskeletal disorders among occupational therapists is limited. For comparison, research indicates that up to 91% of physical therapists experience work-related musculoskeletal disorders (WMSDs) and pain. 
The purpose of this study was to gather new information about the prevalence, severity, and characteristics of work-related musculoskeletal symptoms and injuries among occupational therapists and to compare this information with physical therapists in the state of Wisconsin. Investigators mailed surveys to 3,297 randomly selected physical and occupational therapists living in Wisconsin. Results indicated a 2006 annual incidence rate of 16.5 injuries per 100 full-time workers among occupational therapists and 16.9 injuries per 100 full-time workers among physical therapists, a rate similar to workers employed in heavy manufacturing. Occupational therapists and physical therapists face similar and significant risks of injury and WMSDs. abstract_id: PUBMED:29530016 Role of working conditions in the explanation of occupational inequalities in work injury: findings from the national French SUMER survey. Background: Social inequalities in work injury have been observed but explanations are still missing. The objectives of this study were to evaluate the contribution of working conditions in the explanation of social inequalities in work injury in a national representative sample of employees. Methods: The study was based on the cross-sectional sample of the national French survey SUMER 2010 including 46,962 employees, 26,883 men and 20,079 women. The number of work injuries within the last 12 months was studied as the outcome. Occupation was used as a marker of social position. Psychosocial work factors included various variables related to the classical job strain model, psychological demands, decision latitude, social support, and other understudied variables related to reward, job insecurity, job promotion, esteem, working time and hours and workplace violence. Occupational exposures of chemical, biological, physical and biomechanical nature were also studied. Weighted age-adjusted Poisson regression analyses were performed. Results: Occupational gradients were observed in the exposure of most psychosocial work factors and occupational exposures. Strong occupational differences in work injury were found, blue-collar workers being more likely to have work injury. Chemical, biological, physical and biomechanical exposures contributed to explain the occupational differences in work injury substantially. Noise, thermic constraints, manual materials handling, postural/articular constraints and vibrations had significant contributions. Psychosocial work factors also contributed to explain the differences especially among women. Conclusion: Prevention policies oriented toward chemical, biological, physical, biomechanical and psychosocial work exposures may contribute to reduce the magnitude of occupational differences in work injury. Answer: The relationship between age-related differences in the consequences of work injury and occupational physical demands appears to be complex and varies across studies. According to the analysis of workers' compensation claims in British Columbia, older age and higher occupational physical demands were associated with worse work-injury outcomes. However, the study found no evidence that the relationship between age and the consequences of work injury was exacerbated when physical occupational demands were high (PUBMED:24464769). 
In contrast, a study based on workers' compensation claims in the State of Victoria, Australia, observed that the relationship between age and risk of serious musculoskeletal injury was steeper when occupational demands were higher (PUBMED:25168227). Additionally, a study examining injury rates between men and women in Ontario from 1990 to 2000 found that injury rates decreased across each grouping of lower physical demands at work for both genders, with the largest absolute differences in manual occupational groups (high physical demands) (PUBMED:15317915). Another study focusing on occupational and physical therapists in Wisconsin reported that older workers had similar injury incidence rates to younger workers, but older workers had more days away from work due to injury and reported more severe pain symptoms (PUBMED:19504176). Overall, while some studies suggest that age-related differences in the consequences of work injury are not necessarily greater when occupational physical demands are high (PUBMED:24464769), others indicate that older workers may be at a steeper risk for serious musculoskeletal injuries in physically demanding occupations (PUBMED:25168227). The variability in findings may be influenced by the specific occupational contexts, the nature of the injuries examined, and the methodologies of the studies.
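Statistically, the question posed in this row, whether the age gradient in injury risk is steeper when physical demands are high, is a test of effect modification, typically implemented as an age-by-demands interaction in a rate model alongside the stratified analyses these abstracts describe. The following Python sketch shows the bare logic; the claim counts, person-years, and variable names are hypothetical, not taken from the cited studies.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical claim counts and person-years by age band and occupational demands.
df = pd.DataFrame({
    "claims": [120, 180, 95, 310, 520, 400],
    "person_years": [9000, 9500, 5200, 8800, 9100, 5400],
    "age_band": ["25-44", "45-54", "55+"] * 2,
    "high_demand": [0, 0, 0, 1, 1, 1],
})

# Poisson rate model; the age-by-demand interaction terms carry the
# "exacerbation" question: they compare age gradients across demand levels.
fit = smf.glm(
    "claims ~ C(age_band) * high_demand",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()

# Interaction rate ratios near 1 indicate no exacerbation (the British Columbia
# pattern); ratios clearly above 1 indicate a steeper age gradient under high
# demands (the Victoria pattern).
print(np.exp(fit.params))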
Instruction: Does active treatment of rheumatoid arthritis limit disease-associated bone loss? Abstracts: abstract_id: PUBMED:12209040 Does active treatment of rheumatoid arthritis limit disease-associated bone loss? Objective: Generalized bone loss in rheumatoid arthritis (RA) is multi-factorial, with the inflammatory disease itself thought to contribute to bone loss. To study the extent to which control of disease activity affects bone turnover in RA and whether treatment with disease-modifying anti-rheumatic drugs (DMARDs) reduces bone turnover and loss of bone mass, we measured bone density and biochemical markers of bone resorption in a group of patients with active RA starting on DMARDS. Methods: Patients with active RA were enrolled on starting a new DMARD. Patients were mobile and none took steroids or any treatment for osteoporosis. Clinical and laboratory measures of disease activity were made at 3-monthly intervals and an index of disease activity (DAS) calculated. Bone density was assessed at 0, 1 and 2 yr (Hologic QDR 4500c). Urinary deoxypyridinoline (D-PYR) and pyridinoline (PYR) were measured by ELISA at 0, 3, 6, 9 and 12 months. Results: Forty patients were enrolled, mean age 59.5 (range 31-76), 26 female, 14 male, 25 had established RA, 15 had RA for <2 yr. Baseline D-PYR was elevated (8.4+/-4.55 nmol/mmol creatinine) and correlated with ESR (r=0.6, P<0.01) and DAS (r=0.4, P<0.05). On treatment ESR and DAS fell by 38.5 and 29.3%, respectively. D-PYR was reduced by 12.3% by 9 months (P<0.01). Spearman rank order correlation showed ESR to be the most significant determinant of D-PYR over 1 yr (r=0.43, P<0.001). Serial bone density was available on 21 patients. There was no significant change in BMD over the 2 yr. The change in DAS over 0-3 months showed an inverse relationship with the percent change in spine over 1 yr (r=-0.5, P=0.05). The change in D-PYR over 0-3 months was not closely related to the change in BMD at hip or spine at 1 yr. Conclusion: Disease activity is a significant determinant of bone turnover in RA. Bone resorption markers fall on treatment of RA with DMARDs and no change in BMD was demonstrated at 2 yr. This study suggests the need to control disease activity in RA in order to prevent systemic bone loss. abstract_id: PUBMED:36386147 The bone-protective mechanisms of active components from TCM drugs in rheumatoid arthritis treatment. Rheumatoid arthritis (RA) is an autoimmune disease whose hallmarks are synovial inflammation and irreversible bone destruction. Bone resorption resulting from osteoclasts involves the whole immune and bone systems. Breakdown of bone remodeling is attributed to overactive immune cells that produce large quantities of cytokines, upregulated differentiation of osteoclasts with enhanced resorptive activities, suppressed differentiation of osteoblasts, invading fibroblasts and microbiota dysbiosis. Despite the mitigation of inflammation, the existing treatment in Western medicine fails to prevent bone loss during disease progression. Traditional Chinese medicine (TCM) has been used for thousands of years in RA treatment, showing great efficacy in bone preservation. The complex components from the decoctions and prescriptions exhibit various pharmacological activities. This review summarizes the research progress that has been made in terms of the bone-protective effect of some representative compounds from TCM drugs and proposes the substantial mechanisms involved in bone metabolism to provide some clues for future studies. 
These active components systemically suppress bone destruction via inhibiting joint inflammation, osteoclast differentiation, and fibroblast proliferation. Neutrophil, gut microenvironment and microRNA has been proposed as future focus. abstract_id: PUBMED:37646860 Elevated serum leptin may be associated with disease activity and secondary osteoporosis in Chinese patients with rheumatoid arthritis. Introduction: Rheumatoid arthritis (RA) is a systemic chronic autoimmune disease in adults that is associated with significant joint issues and systemic inflammation. One of the signs of bone damage in RA is osteoporosis (OP). Leptin is an inflammatory protein that has been reported to be related to RA. The potential relationships among leptin, disease activity, and OP in Chinese patients with RA are not well known. Methods: In total, 245 patients with RA and 120 healthy controls were included in this study. Detailed data on the clinical characteristics and laboratory features were collected. Information about physical activity and functional status was recorded using specific questionnaires. Bone mineral density (BMD) was measured by dual-energy X-ray absorptiometry (DXA). The MECALL castor-50-hf model X-ray scanner was used for the two-hand (including wrist) photographs. Results: Serum leptin levels differed significantly between the RA group and healthy control subjects (1.27/3.29 vs. 0.17/0.24, Z=13.29, P<0.001). The positive rate of leptin protein in RA patients was 86.35%, which was higher than that in controls (19.55%) (χ2=28.51, P<0.001). Pearson's correlation test showed that morning stiffness, disease duration, joint swelling, joint tenderness, swollen joint count (SJC), tender joint count (TJC), health assessment questionnaire (HAQ) score, and Sharp-van der Heijde method (Sharp) score were positively correlated with the level of serum leptin (r=0.212, r=0.312, r=0.322, r=0.501, r=0.291, r=0.334, P<0.05). There was a clear increasing trend in the level of serum leptin according to the different disease activity scores and in the 28 joint activity (DAS28) groups (F=13.936, P<0.001). Elevated leptin was a risk factor for increased disease activity and OP according to logistic regression analysis. The median leptin level differed significantly between the normal bone mass group, osteopenia group, and OP group (P<0.001). An increased serum leptin level was a risk factor for RA-induced osteoporosis according to logistic regression analysis (P<0.001). Conclusion: These results suggest that the level of serum leptin is associated with disease activity and secondary OP among Chinese patients with RA. Key Points • Serum leptin levels in RA patients are higher than those in normal control group. • Leptin was associated with disease activity. • Leptin was associated with the occurrence of systemic osteoporosis and affects bone erosion in RA patients. abstract_id: PUBMED:27238195 Osteoarticular manifestations associated with HIV infection. About 150,000 people are HIV-positive in France, and the number of new cases is estimated at 7000-8000 per year, with no tendency to diminish over time. Admissions of HIV-positive patients have been decreasing, in contrast, since 2008, reflecting the dramatic improvements in quality of life and survival provided by triple antiretroviral regimens. HIV infection is now a chronic disease that exposes patients to the virus and antiretroviral drugs for many years. 
One consequence has been the emergence of new health conditions in HIV-positive patients, such as tumors, cardiovascular disease, and osteoarticular complications. These epidemiological and clinical changes have made it necessary for rheumatologists to learn about the osteoarticular abnormalities associated with the HIV, which they are likely to encounter at some point during their everyday practice. Osteoporosis is one such abnormality, and this review article starts with a discussion of the literature on this topic. Bone loss is common, chiefly in males. Multiple factors are involved. Studies have demonstrated an increase in the fracture risk and, consequently, recommendations about the screening and treatment of osteoporosis have been issued. The focus of this review article then turns to the other rheumatic manifestations seen in HIV-positive patients, including osteomalacia, avascular necrosis, and inflammatory joint disease. Osteoarticular pain is frequently reported by HIV-positive patients. Identifying the cause is essential to determine the best treatment strategy. Interestingly, immunosuppressant drugs, and even biotherapies, have shown a good safety profile in these immunodeficient patients. abstract_id: PUBMED:7880187 Prednisone treatment of elderly-onset rheumatoid arthritis. Disease activity and bone mass in comparison with chloroquine treatment. Objective: Prednisone is frequently used in the treatment of elderly-onset rheumatoid arthritis (RA), but the balance between efficacy and toxicity, including the effect on bone mass, has not been investigated in long-term studies. This prospective, randomized study was undertaken to compare disease activity and bone mass during long-term treatment with prednisone versus chloroquine in this patient population. Methods: Patients with active RA diagnosed at age > or = 60 were randomized to receive prednisone (15 mg/day for 1 month, with the dosage tapered as low as possible thereafter) (n = 28) or chloroquine (n = 28). Patients who did not show a response received other second-line drugs as an adjunct to prednisone or as a replacement for chloroquine. Bone mass was measured by dual-energy x-ray absorptiometry. The study duration was 2 years. Results: During the 2 years, treatment with other second-line drugs was needed for 12 patients in the prednisone group (43%) and 8 in the chloroquine group (29%). Functional capacity and disease activity improved significantly in both groups and did not differ significantly between the groups, except for a greater improvement in the prednisone group at 1 month. Radiographic scores for joint destruction progressed similarly in both groups. There was a nonsignificant excess bone loss of 1.8% in the spine and 1.5% in the hip in the prednisone group, compared with the chloroquine group. Conclusion: Neither treatment was entirely satisfactory since a significant number of patients needed an additional second-line drug over the 2-year period. abstract_id: PUBMED:26369319 Bone mineral density of postmenopausal women with rheumatoid arthritis depends on disease duration regardless of treatment. The aim of this study was to determine the associations of disease activity and disease duration with the bone mineral density (BMD) in rheumatoid arthritis (RA) patients. We also evaluated the associations of biological drugs with bone loss. A total of 138 postmenopausal RA patients were retrospectively assessed to identify the associations of disease activity, disease duration, and biological drug use with BMD. 
We assessed the associations of disease duration, a C-reactive protein based disease activity score in 28 joints (DAS28), simplified disease activity index, clinical disease activity index, health assessment questionnaire scores, and the use of biological drugs with the lumbar spine, total hip, and femoral neck BMDs using univariate and multivariate linear regression analyses in bisphosphonate treatment and non-bisphosphonate treatment groups at 1 year of follow-up. The multivariate linear regression analyses showed that disease duration was significantly related to the BMD of the femoral neck and total hip regardless of bisphosphonate treatment. The use of biological drugs was not significantly associated with BMD. Hip BMD in postmenopausal women with RA depends on the disease duration regardless of bisphosphonate use. Biological drugs for RA treatment were not negatively associated with general bone loss. abstract_id: PUBMED:23504384 Osteonecrosis of the jaw and nonmalignant disease: is there an association with rheumatoid arthritis? Objective: To review cases of bisphosphonate-related osteonecrosis of the jaw (BRONJ) occurring in association with benign disease and to describe and compare the clinical course and outcome for patients with BRONJ and rheumatoid arthritis (RA) or osteoporosis. Methods: We retrospectively reviewed observations of all patients referred for treatment and followup for BRONJ from January 2007 to December 2011. Only patients with malignant disease were excluded. Demographic data, medical history, maxillofacial findings, BRONJ treatment, and followup were reviewed for each case. Results: Over a 5-year period, we diagnosed 112 patients with BRONJ. Among these patients, 15 received bisphosphonate (BP) treatment for nonmalignant disease (mean age 65.7 ± 19.8 yrs, 80% women). Patients received BP for a variety of reasons: 8 (53%) to prevent osteoporosis in association with underlying RA; 6 (40%) to prevent idiopathic osteoporosis; and 1 (7%) to treat ankle algodystrophy. The mean oral BP exposure period was 48.4 months (median 36 mo). In 13 cases (86.6%), BRONJ was diagnosed following dental extraction. Of the 8 patients with RA, 5 (62.5%) were taking prednisone at the time of the discovery of BRONJ. Major surgery, sequestrectomy, or alveolectomy was performed in 9 patients (60%), all of whom healed within 3 to 36 months (mean 11.5 mo). Comparative analysis of all the variables showed no statistically significant differences between patients with RA and others. Conclusion: ONJ is a rare adverse effect of BP therapy, especially when administered orally. Within the limits of our study, we were unable to demonstrate a difference in BRONJ disease spectrum, clinical course, or outcome between patients with and those without RA. abstract_id: PUBMED:21768167 Non-psychiatric comorbidity associated with Alzheimer's disease. The burden of medical comorbidity in individuals with Alzheimer's disease is greater than that observed in matched individuals without dementia. This has important implications for all clinicians and healthcare providers who deal with this common condition. The prevalence of vascular risk factors and vascular disease is particularly high. Additionally, associations with a number of other chronic medical conditions have been described, including thyroid disorders, sleep apnoea, osteoporosis and glaucoma. 
This review gives an overview of evidenced medical (non-psychiatric) comorbidity associated with Alzheimer's disease and briefly explores the underlying mechanisms that may account for these associations. abstract_id: PUBMED:10531077 A randomised trial of differentiated prednisolone treatment in active rheumatoid arthritis. Clinical benefits and skeletal side effects. Objectives: To study benefits and skeletal side effects of carefully monitored prednisolone treatment in patients with active rheumatoid arthritis. Methods: One hundred and two patients with active rheumatoid arthritis were randomly allocated to treatment with a disease-modifying anti-rheumatic drug (DMARD) alone or a DMARD and prednisolone in a one year follow up study. Prednisolone was given in a dose regimen adapted to the disease activity of the individual patient. The mean dose was 6 mg and the mean cumulated dose was 2160 mg. Patients were followed up with disease activity parameters, radiographs of the hands (Larsen score), and bone mineral density (BMD) of the lumbar spine, distal forearm and hand. At one year 26 patients had withdrawn from the investigation, leaving 76 patients for evaluation. Results: Disease activity in the prednisolone treated group was reduced within two weeks, whereas in the DMARD alone group disease activity was gradually reduced over months. At six months there was no difference between the groups as evaluated by an improvement score using a number of ACR criteria. Prednisolone in the present set-up was not able to protect significantly against radiological disease progression, although there was a trend towards less progression in Larsen score in the prednisolone group, a matter that was further underlined in an intention-to-treat analysis. BMD data revealed a significant reduction in spinal BMD in the prednisolone group, whereas prednisolone seemed to have a protective effect against bone loss in the hand and distal forearm. Conclusions: This study does not allow any firm conclusions for or against the treatment of rheumatoid arthritis with prednisolone. The data suggest that the beneficial effects of prednisolone are not as clear cut in established rheumatoid arthritis as in early disease. Furthermore, the data indicate that treatment at the chosen, relatively low dose does not provide sufficient control of disease, while the spinal bone loss observed in the prednisolone group invites caution about using higher doses. abstract_id: PUBMED:29802059 Conservative surgical treatment of medication related osteonecrosis of the jaw (MRONJ) lesions in patients affected by osteoporosis exposed to oral bisphosphonates: 24 months follow-up. The purpose of the study was to evaluate the efficacy of conservative surgical treatment of medication related osteonecrosis of the jaw (MRONJ) in patients affected by osteoporosis and exposed to oral bisphosphonates (BPs). Subjects diagnosed with MRONJ and osteoporosis under oral BPs that had undergone conservative surgery and had at least 24 months follow-up were included. All patients received medical-antibiotic therapy and then underwent conservative surgical treatment consisting of sequestrectomy, soft tissue debridement and bone curettage with limited or no extension. A total of 53 patients, mean age of 71.9 ± 10.2 years (range 41-87), were enrolled. Two years after conservative surgical therapy 45 patients (91.8%) showed complete healing.
The presence of rheumatoid arthritis (p = 0.003) and a more severe initial MRONJ stage (p = 0.023) were associated with a negative surgical outcome, while the presence of a bone sequestrum was strongly associated with a positive outcome (p = 0.036). Conservative surgical treatment of MRONJ lesions in patients affected by osteoporosis and receiving only oral BPs may represent a valid therapeutic approach, yielding a high rate of complete healing. Conservative surgery should be encouraged at early MRONJ stages and after medical therapy failure.
Answer: Active treatment of rheumatoid arthritis (RA) does appear to limit disease-associated bone loss. A study of patients with active RA starting on disease-modifying anti-rheumatic drugs (DMARDs) found that treatment led to a reduction in bone resorption markers and no change in bone mineral density (BMD) over two years, suggesting that controlling disease activity is important for preventing systemic bone loss (PUBMED:12209040). Additionally, traditional Chinese medicine (TCM) drugs have shown efficacy in bone preservation in RA treatment by systemically suppressing bone destruction (PUBMED:36386147). Furthermore, elevated serum leptin, an inflammatory protein, has been associated with disease activity and secondary osteoporosis in Chinese patients with RA, indicating that managing disease activity could mitigate bone loss (PUBMED:37646860). In contrast, a study on elderly-onset RA patients treated with prednisone or chloroquine showed that neither treatment satisfactorily prevented bone loss, as a significant number of patients required additional second-line drugs over a two-year period (PUBMED:7880187). Moreover, the bone mineral density of postmenopausal women with RA was found to depend on disease duration regardless of treatment, including the use of biological drugs, which were not negatively associated with general bone loss (PUBMED:26369319). This suggests that while active treatment can help, the duration of the disease itself is a critical factor in bone density outcomes. In summary, active treatment of RA, particularly with DMARDs and potentially with components from TCM, can limit disease-associated bone loss, although the impact may vary based on the type of treatment, the duration of the disease, and individual patient factors such as the presence of elevated leptin levels.
Instruction: Can family history and cord blood IgE predict sensitization and allergic diseases up to adulthood? Abstracts: abstract_id: PUBMED:25420698 Can family history and cord blood IgE predict sensitization and allergic diseases up to adulthood? Background: Long-term studies of the predictive value of family history and cord blood IgE level until adulthood are few, and their conclusions have been contradictory. Methods: Screening of total IgE in 1617 cord blood samples was performed in a Danish birth cohort. All infants with cord blood IgE (CB-IgE) ≥ 0.5 kU/l and a corresponding randomly chosen group with CB-IgE <0.5 kU/l were chosen for follow-up. Questionnaire-based interviews, physical examination, specific IgE testing, and from 10 yr also spirometry, were carried out at 1½, 5, 10, 15, and 26 yr. Predefined diagnostic criteria were used. Results: A total of 455 infants were included, 188 with CB-IgE ≥ 0.5 kU/l and 267 with CB-IgE <0.5 kU/l. Follow-up rates were high; 288 (63%) attended the 26-yr follow-up. Family history and elevated CB-IgE were significantly associated with allergic disease until 26 yr. Concerning any allergic symptoms at 1½ yr, the positive and negative predictive values (PPV and NPV) and the sensitivity and specificity of CB-IgE ≥ 0.5 kU/l were 29%, 81%, 54%, and 61%, respectively. The corresponding figures at 26 yr were 46%, 62%, 43%, and 65%. Overall, family history as well as CB-IgE ≥ 0.5 kU/l was associated with high NPV and specificity, but low PPV and sensitivity. Conclusion: Although family history and elevated CB-IgE were significantly associated with primarily atopic disease until 26 yr, none of these were strong predictors for subsequent sensitization and allergic symptoms from childhood until early adulthood. It appears that the predictive capacity of CB-IgE decreases in adolescence and early adulthood.
abstract_id: PUBMED:36003048 Cord blood IgE predicts allergic sensitization, elevation of exhaled nitric oxide, and asthma in schoolchildren. Background: Few data are available in Asian children regarding the validity of cord blood immunoglobulin E (IgE) in predicting allergic sensitization and pulmonary function. The relationship between cord blood IgE and fraction of exhaled nitric oxide (FeNO) remains unknown. This study investigated the associations of cord blood IgE with allergic sensitization, FeNO, pulmonary function, and allergic diseases in Asian children. Methods: Five hundred and sixty-six Asian children with valid cord blood IgE measurements at birth participated in a 6-year follow-up visit including a questionnaire, serum total and allergen-specific IgE, FeNO measurement, and spirometry. Regression-based analyses with covariate adjustment were applied. Results: Cord blood IgE levels were significantly associated with FeNO levels (β = 0.131, p < .001) and serum total IgE levels (β = 0.325, p < .001). Cord blood IgE levels were positively associated with allergic sensitization (adjusted odds ratio [AOR] = 2.22, p < .001), and sensitization to mites (p = .002), animals (p = .023), and foods (p = .048). Subjects with cord blood IgE ≥0.24 kU/L (the optimal cutoff) had a significantly increased risk of allergic sensitization (AOR = 2.63, p < .001) and asthma (AOR = 2.35, p = .024) compared with those with cord blood IgE <0.24 kU/L. Subjects with cord blood IgE ≥0.24 kU/L had significantly higher FeNO levels than those with cord blood IgE <0.24 kU/L (p = .028).
There were no significant associations between cord blood IgE levels and pulmonary function parameters. Conclusion: Cord blood IgE ≥0.24 kU/L predicts allergic sensitization, FeNO elevation, and asthma among Asian schoolchildren, suggesting cord blood IgE would be useful for identifying newborns at risk of subsequent allergic sensitization and allergic airway inflammation.
abstract_id: PUBMED:23805276 Asthma Heredity, Cord Blood IgE and Asthma-Related Symptoms and Medication in Adulthood: A Long-Term Follow-Up in a Swedish Birth Cohort. Cord blood IgE has previously been studied as a possible predictor of asthma and allergic diseases. Results from different studies have been contradictory, and most have focused on high-risk infants and early infancy. Few studies have followed their study population into adulthood. This study assessed whether cord blood IgE levels and a family history of asthma were associated with, and could predict, asthma medication and allergy-related respiratory symptoms in adults. A follow-up was carried out in a Swedish birth cohort comprising 1,701 consecutively born children. In all, 1,661 individuals could be linked to the Swedish Prescribed Drug Register and the Medical Birth Register, and 1,227 responded to a postal questionnaire. Cord blood IgE and family history of asthma were correlated with reported respiratory symptoms and dispensed asthma medication at 32-34 years. Elevated cord blood IgE was associated with a two- to threefold increased risk of pollen-induced respiratory symptoms and dispensed anti-inflammatory asthma medication. Similarly, a family history of asthma was associated with an increased risk of pollen-induced respiratory symptoms and anti-inflammatory medication. However, only 8% of the individuals with elevated cord blood IgE or a family history of asthma in infancy could be linked to current dispensation of anti-inflammatory asthma medication at follow-up. In all, 49 out of 60 individuals with dispensed anti-inflammatory asthma medication at 32-34 years of age had not been reported as having asthma at previous check-ups of the cohort during childhood. Among those, only 5% with elevated cord blood IgE and 6% with a family history of asthma in infancy could be linked to current dispensation of anti-inflammatory asthma medication as adults. Elevated cord blood IgE and a positive family history of asthma were associated with reported respiratory symptoms and dispensed asthma medication in adulthood, but their predictive power was poor in this long-term follow-up.
abstract_id: PUBMED:33938188 Relationship of cord blood IgE with maternal, fetal, and environmental factors in the Chinese population. Introduction And Objectives: Previous studies reported that history of pregnancy and delivery and family environment might influence cord blood IgE (CB-IgE) levels and development of allergies; however, the association between them is not well-established. This study aimed at investigating the IgE level in the newborn's umbilical cord blood and its relationship with maternal, fetal, and environmental factors. Materials And Methods: A total of 989 mothers and their infants were analyzed in this study. Mothers were given a questionnaire that had a series of questions to evaluate demographic information, maternal allergic status, and environmental exposures during pregnancy. Neonatal cord blood samples were taken at the same time for IgE assay.
Results: By univariate analysis, we found statistically significant correlations between CB-IgE levels and gender (P < 0.001) and delivery mode (P = 0.017). By multivariate analysis, gender was found to have a significant association with CB-IgE levels (P = 0.001). No significant difference was found between CB-IgE levels and antenatal complications, the season of birth, birth weight, gestational age, and household income (P > 0.050). Conclusions: In this study, newborn gender was found to be a strong predictor of elevated CB-IgE. The delivery mode was a probable predictor.
abstract_id: PUBMED:9842212 IgE levels in cord blood in an area of the Veneto region. Background: Cord blood IgE levels have been studied as a possible marker of allergy in infants but few studies are available in our Region. The aim of this paper was to test IgE levels in cord blood of 60 consecutive newborns in a restricted area of Veneto, to correlate cord blood IgE levels with family history of allergy and to verify the risk of contamination from mother's blood. Methods: Cord blood was obtained from 60 consecutive newborns. Immunoglobulin levels (IgG, IgA, IgM, and IgE) were measured in cord blood of newborns and in serum of all mothers. Family history for allergy was previously investigated from the mothers. Results: IgE was detectable in cord blood of 5 newborns, but only 2 of them had a positive family history of allergy, which was present in 11/60. In one of these cases contamination of the sample by the mother's blood was postulated. IgG levels in newborn cord blood were higher than in mothers' blood and were not related to IgE levels or other investigated factors. Conclusions: Only 6.6% of newborns in a restricted area of the Veneto region have detectable IgE in cord blood whereas 18.3% of them have a positive family history of allergy. Measurable levels of IgE in cord blood are not related to a positive family history of allergy and are rarely influenced by contamination with the mother's blood.
abstract_id: PUBMED:31951684 Family history and cord blood eosinophil count as predictors for atopic manifestations. Objectives: The aim of our study was to investigate the correlation between several clinical parameters and the appearance of atopic manifestations (atopic eczema, food allergy, wheezing bronchitis, allergic rhinoconjunctivitis) in the first four years of life. Methods: A total of 139 unselected full-term newborns were included in a prospective follow-up from birth to age 4. Cord blood total immunoglobulin E (cIgE) and cord blood absolute eosinophil count (cEo), positive family history of allergy, maternal smoking during pregnancy, mode of delivery, and duration of exclusive and overall breastfeeding were evaluated as predictors for appearance of atopic manifestations. Results: We found that children with a positive family history of both mother and father are 19.03 times more likely to develop atopic manifestations and those with a positive family history of only mothers are 12.55 times more likely to develop atopy compared with children with a negative family history. Neonates with cord blood eosinophilia had 5.30 times higher chances of developing atopic manifestations. No statistically significant associations were found between cIgE (p = 0.099), mode of delivery (p = 0.379), maternal smoking (p = 0.661), exclusive (p = 0.867) and overall breastfeeding duration (p = 0.675) and the presence of atopic manifestations up to age 4.
Conclusions: A positive family history, especially of the mother, and cEo appear to be predictive in screening for the onset of allergic diseases.
abstract_id: PUBMED:8173642 Cord blood IgE levels in the United Arab Emirates. Cord blood IgE was assayed in 164 newborn babies from the United Arab Emirates. The serum IgE levels ranged between ≤0.1 and 13.5 kU/l with a geometric mean of 0.28 kU/l. The cord blood IgE in the 89 babies without an immediate family history of allergy was ≤0.1-3.2 kU/l with a geometric mean of 0.25 kU/l and 1.13 kU/l as the 90th percentile. An influence of prenatal sensitization to helminth antigens on cord blood IgE level was not likely. The data are similar to cord blood IgE values reported in other populations. This indicates that ethnic differences do not influence cord blood IgE levels and that previously published studies on the predictive value of cord blood IgE determination in Caucasians are relevant also for other populations.
abstract_id: PUBMED:17208588 Predictors of cord blood IgE levels in children at risk for asthma and atopy. Background: Increased cord blood IgE levels, in conjunction with a family history of atopy, are associated with the development of allergic diseases in children. However, little is known about predictors of cord blood IgE levels. Objective: Our objective was to identify predictors of cord blood IgE levels among infants at increased risk of atopy. Methods: Cord blood IgE levels were measured in 874 infants who were screened for participation in a birth cohort. Questionnaires were administered after birth of the infant, and maternal and cord blood was obtained for measurement of IgE levels. Logistic and tobit regression models were used to study the association between perinatal factors and cord blood IgE levels. Results: In multivariable models infant male sex, increased maternal total IgE level, maternal allergen sensitization, Hispanic ethnicity, and residence in low-income areas were associated with detectable or increased cord blood IgE levels, whereas increasing maternal age was associated with undetectable or lower cord blood IgE levels. Although maternal smoking during pregnancy was positively associated with cord blood IgE levels in univariable models, the effect did not persist after adjusting for potential confounders. Conclusion: Maternal allergen sensitization, markers of socioeconomic disadvantage and race/ethnicity, maternal age, and infant sex might influence fetal production of IgE. We found no association of maternal parity, mode of delivery, gestational age, or season of birth with cord blood IgE levels. Clinical Implications: The identification of these definable familial and environmental factors that predict cord blood IgE levels might help in the early detection of infants at risk for atopic disorders.
abstract_id: PUBMED:29104683 Relationship of Cord Blood Immunoglobulin E and Maternal Immunoglobulin E with Birth Order and Maternal History of Allergy in Albanian Mother/Neonate Pairs. Background: Previous studies reported that familial factors such as birth order and mothers' atopy might influence cord blood IgE levels and development of allergies. Aim: The aim of the study was to evaluate the relationship of cord blood IgE and maternal IgE with birth order and mothers' history of allergy in Albanian mother/neonate pairs. Material And Methods: The study population comprised 291 mother-infant pairs.
Mothers were interviewed with a questionnaire covering personal history of allergy and pregnancy history, whereas serum IgE levels were determined using a sandwich IRMA assay. Results: The mean level of cIgE in neonates with detectable levels was 1.59 (n = 78). No significant difference in means of cIgE was found between first-born and later-born neonates (p = 0.232) or between neonates of mothers with a negative and positive history of allergy (p = 0.125). Also, no significant difference was found between means of mIgE by birth order, whereas there was a significant difference in mIgE between mothers with and without a history of allergy (p = 0.01). In a group of neonates with detectable cIgE levels, maternal IgE levels were moderately correlated with cIgE levels. Conclusion: Cord blood IgE is not affected by birth order or mothers' history of allergy, whereas mothers' IgE is affected by the history of allergy but not by birth order.
abstract_id: PUBMED:16444083 Parental history of allergy, maternal serum IgE & cord serum IgE. Background: Allergic reactions are responsible for much illness, irritation and disquiet in the life of the child. Recognition and management of allergy in childhood is of great importance to prevent major allergic syndromes. Aim: To correlate cord serum IgE with parental history of allergy and maternal serum IgE. Settings And Design: In consultation with the attending gynecologist, a survey of 950 pregnant women visiting gynecology clinics in different parts of Mumbai was carried out. Method: 100 pregnant women were randomly recruited for the study. Allergic history of both the parents was collected. Blood samples of mothers and matched cord blood samples were collected. Total serum IgE levels were determined using solid-phase immunoradiometric assay. Total serum IgA from each cord blood sample was checked. Observation & Results: (1) Positive correlation was observed between cord serum IgE and matched maternal serum IgE, with mothers having a history of allergy. (2) There was no relation (P=0.99) between cord serum IgE and matched maternal serum IgE, where mothers had no history of allergy. (3) A significant elevation in cord serum IgE was found only with maternal but not with paternal history of allergy. (4) No elevation was found in cord serum IgE (0.5 IU/mL). (6) All cord serum samples had IgA < 0.06 IU/mL, ruling out the possibility of contamination with maternal blood. Conclusion: The maternal history of allergic disease is associated with an elevated cord serum IgE among newborns. Cord serum IgE is a simple, non-invasive and convenient test, which can supplement the maternal history to provide more discriminative information about the allergic status of the newborn.
Answer: Family history and cord blood IgE (CB-IgE) have been studied as potential predictors for sensitization and allergic diseases, but their predictive value into adulthood has shown varying results across different studies. A Danish birth cohort study found that family history and elevated CB-IgE were significantly associated with allergic disease until 26 years of age. However, neither family history nor elevated CB-IgE were strong predictors for subsequent sensitization and allergic symptoms from childhood until early adulthood, with the predictive capacity of CB-IgE decreasing in adolescence and early adulthood (PUBMED:25420698).
In Asian children, a study demonstrated that cord blood IgE levels were significantly associated with allergic sensitization, elevation of exhaled nitric oxide (FeNO), and asthma in schoolchildren. This suggests that cord blood IgE could be useful for identifying newborns at risk of subsequent allergic sensitization and allergic airway inflammation (PUBMED:36003048). A Swedish birth cohort study assessed whether cord blood IgE levels and a family history of asthma were associated with, and could predict, asthma medication and allergy-related respiratory symptoms in adults. The study concluded that elevated cord blood IgE and a positive family history of asthma were associated with reported respiratory symptoms and dispensed asthma medication in adulthood, but their predictive power was poor in this long-term follow-up (PUBMED:23805276). In the Chinese population, a study found that newborn gender was a strong predictor of elevated CB-IgE, and the delivery mode was a probable predictor. However, no significant difference was found between CB-IgE levels and other factors such as antenatal complications, the season of birth, birth weight, gestational age, and household income (PUBMED:33938188). In a study from the Veneto region, detectable IgE levels in cord blood were not related to a positive family history of allergy and were rarely influenced by mothers' blood contamination (PUBMED:9842212). A study investigating predictors for the appearance of atopic manifestations in the first four years of life found that a positive family history, especially of mothers, and cord blood eosinophilia were predictive of developing atopic manifestations (PUBMED:31951684). In summary, while family history and cord blood IgE levels have been associated with allergic diseases and sensitization in some studies, their predictive value, especially into adulthood, is limited and not consistently strong across different populations and age groups.
Instruction: Ulceration of the nasal dorsum: a rare cause? Abstracts: abstract_id: PUBMED:28839290 Rare occupational cause of nasal septum perforation: Nickel exposure. Many etiologies are held accountable for nasal septum perforations. Topical nasal drug usage, previous surgeries, trauma, nose picking, squamous cell carcinoma, some rheumatological disorders such as granulomatosis with polyangiitis (Wegener granulomatosis), and some infectious diseases such as syphilis and leprosy are among the causes of the perforations. Occupational heavy metal exposure by inhalation may also, rarely, cause nasal septum perforation. Here, we present a 29-year-old patient without any known diseases, a worker at a metallic coating and nickel-plating factory, who was referred from an otorhinolaryngology clinic for investigation of a perforation of the cartilaginous nasal septum. Questioning of the patient, physical examination and laboratory assessment for rheumatic and infectious diseases were negative. There was a metallic smell in the breath during the physical examination. The analysis showed a serum nickel level of 31 μg/l and urine nickel of 18 μg/l (84.11 μg/g creatinine). Other possible serum and urine heavy metal levels were within normal ranges. Nickel exposure usually occurs together with exposure to other heavy metals (chromium or cadmium); it rarely occurs alone. Nickel taken in by inhalation usually leads to respiratory problems such as reduced olfactory acuity, ulcers, septum perforation or tumors of the nasal sinuses. This case demonstrates the importance of occupational anamnesis and awareness of the diagnosis. Int J Occup Med Environ Health 2017;30(6):963-967.
abstract_id: PUBMED:2664655 Nasal septal perforations and nasal obstruction. Nasal septal perforations cause a wide range of symptoms that vary from mild discomfort and nasal obstruction to life-threatening epistaxis. The differential diagnosis, pathophysiology, and treatment of this disorder are reviewed.
abstract_id: PUBMED:23814722 Rigid nasal endoscopy in the diagnosis and treatment of epistaxis. Background And Objectives: Epistaxis is one of the common symptoms encountered in the Otorhinolaryngology department. Many times the cause for epistaxis is not found on anterior and posterior rhinoscopy. The present study was undertaken to assess the role of the rigid nasal endoscope in the diagnosis and treatment of epistaxis where normal anterior and posterior rhinoscopy did not reveal any specific finding. Methods: Fifty patients with epistaxis were studied using a rigid nasal endoscope under local anaesthesia. Patients who were above 15 years with nasal bleeding and who were willing for rigid nasal endoscopy were included in the study. Patients less than 15 years were not included in the study because nasal endoscopy was difficult in them under local anaesthesia. Only those patients in whom the cause for epistaxis could not be made out on anterior and posterior rhinoscopy were chosen for the study; this was done in order to remove bias in favour of nasal endoscopy. Results: The use of the nasal endoscope allowed diagnosis of bleeding points and treating them directly. Epistaxis was more common in male patients, especially in the 3rd decade and after the 5th decade. On endoscopic examination, the bleeding points were identified as coming from the crevices of the lateral nasal wall, a posterior spur on the septum, posterior deviation of the septum with ulcer, congested polyps, enlarged and congested adenoids, scabs or crusts in the crevices of the lateral nasal wall, and angiofibroma.
The endoscope also helps in the treatment of epistaxis, which includes endoscopic selective nasal packing using gelfoam, endoscopic cautery or diathermy, and endoscopic polypectomy. Other patients with adenoids, scabs and crusts, and angiofibroma were managed on their merits. Interpretation And Conclusion: Nasal endoscopy helps not only in the localisation of the bleeding point but also in the treatment of those bleeding areas that are situated in the posterior and lateral part of the nose.
abstract_id: PUBMED:25991946 Nasal septum perforation in a patient with pyoderma gangrenosum. Introduction: Cocaine is obtained from the leaves of the coca plant (Erythroxylon coca). It can be used in many ways, but the most common is inhalation of the drug. Cocaine also causes vasoconstriction of the nasal mucous membrane, and its chronic use can cause necrosis and nasal septum perforation. Pyoderma gangrenosum is an uncommon idiopathic disease characterized by ulcerations, usually observed on the legs. Its diagnosis is most commonly one of exclusion of other diseases. So far, there is no specific treatment based on evidence from randomized controlled trials. Objective: To describe the rare association between pyoderma gangrenosum and cocaine. Case Report: E. A., a 27-year-old woman with destruction of the nasal septum and palate who had been using large amounts of cocaine, in whom it was necessary to determine which disease caused the damage. Final Comments: To date, only three cases of pyoderma gangrenosum complicated by nasal septum perforation have been reported in cocaine users.
abstract_id: PUBMED:15911018 Magnetic nasal foreign bodies in a 9-year-old male: opposites attract when it comes to nasal foreign bodies. We report the case of a 9-year-old male who developed nasal ulcerations from magnets which were placed in his nose and became attracted to each other across the nasal septum. The magnets came from a non-penetrating earring. Normally harmless, magnets that become attracted to each other across the septum can cause ulceration and possibly perforation of the nasal septum, and should be removed urgently whenever encountered.
abstract_id: PUBMED:36846183 A case of COVID-19-related necrotic nasal ulceration. We are reporting a case of a COVID-19-related necrotic nasal ulcer. All other common etiologies were excluded after a full round of investigations. Even though COVID-19 has been known to cause skin ulcers via different mechanisms, this is the first nasal ulcer to be reported in the current literature.
abstract_id: PUBMED:15403152 Perforating ulcers of the nasal septum of irritant cause. N/A
abstract_id: PUBMED:8928640 Effects of topical nasal decongestants on histology of nasal respiratory mucosa in rabbits. The aim of this study was to evaluate histologic changes after long-term administration of the topical nasal decongestants phenylephrine and oxymetazoline. Ninety healthy rabbits were divided into 3 groups for topical administration for 1 week, 2 weeks, and 4 weeks. Each group was subdivided into 3 subgroups by topical administration of phenylephrine, oxymetazoline, and physiologic saline as controls. Each study group thus consisted of 10 rabbits. Phenylephrine was administered as 2 puffs in the left nostril 5 times daily and oxymetazoline as 2 puffs twice a day, using metered sprayers delivering 0.10 ml per puff, at a dosage used in clinical practice. For statistical significance the Kolmogorov-Smirnov test was used.
Light and electron microscopic examinations were performed after obtaining nasal respiratory mucosa from the nasal septum of the rabbits. Administration of phenylephrine and oxymetazoline for more than 2 weeks caused histologic changes including ciliary loss, epithelial ulceration, inflammatory cell infiltration and subepithelial edema, and the changes were more pronounced with increasing administration duration of the decongestants. Ciliary loss was prominent in the 4-week phenylephrine and oxymetazoline groups. There were significantly more severe changes in the morphologic variables in the decongestant groups compared with the control group according to administration duration (p < 0.05). Dilatation or vacuolization of mitochondria and endoplasmic reticula and vesicles in the cytoplasm were observed in the 2- and 4-week phenylephrine groups as well as the 2- and 4-week oxymetazoline groups. Mild widening of the intercellular space was observed in the 4-week phenylephrine group. Purulent maxillary sinusitis developed in 6 of 10 rabbits treated for 4 weeks with phenylephrine. The results of this study suggest that the administration of decongestants may cause ciliary loss with subsequent inflammatory changes in the nasal respiratory mucosa.
abstract_id: PUBMED:30425381 Cavernous hemangioma of the nasal bones: a case report. Cavernous hemangiomas represent less than 1% of all bone tumours, with the vertebrae and the skull being the most common locations. These tumours are extremely rare in the bones of the nose, with very few cases published in the medical literature, where they have been reported in the turbinates, vomer, nasal bones or perpendicular lamina of the ethmoid. They usually present as a tumour lesion that grows over time, involving the bone and even the soft tissues, which can cause complications such as nasal obstruction, bleeding, ulcerations and infection. We present the case of a 37-year-old patient with a cavernous hemangioma located in the bones of the nose, reviewing the most notable aspects of this rare tumour. Keywords: cavernous hemangioma; bone hemangioma; nasal tumor; nasal bones; surgical treatment.
abstract_id: PUBMED:24183397 Olfactory function and nasal manifestations of Behçet's disease. Objective: To investigate the effect of Behçet's disease on olfactory function, the nasal mucosa, and nasal symptoms. Methods: A total of 30 patients with Behçet's disease and 30 healthy individuals volunteered to enroll in the study. Any condition leading to olfactory dysfunction was a criterion for exclusion. Nasal endoscopy was carried out individually, and nasal symptoms were assessed. An olfactory function assessment test defined by the Connecticut Chemosensory Clinical Research Center (CCCRC) was carried out. n-Butanol odor threshold and odor identification tests were performed, and the CCCRC test score (0: worst score; 7: best score) was calculated. The relationship between nasal findings, nasal symptoms, and olfactory function was evaluated. Results: The mean age in each group was in the late 30s, and demographic differences were not significant. n-Butanol threshold test scores were 5.57 ± 1.0 and 6.47 ± 0.7 out of 7 for subjects with Behçet's disease and the healthy control group, respectively. Identification test scores were 4.93 ± 1.3 and 6.15 ± 0.8 out of 7, and the mean CCCRC scores were 5.25 ± 1.0 and 6.31 ± 0.6, respectively, with significant differences (p<0.001).
Some nasal symptoms were significantly more common in patients with nasal findings, but no correlation was found between nasal findings and olfactory function. Conclusion: Although Behçet's disease has been shown to involve the nasal cavity, its effects on olfactory function are unknown. The significant difference in CCCRC scores suggests, for the first time, that olfactory dysfunction is associated with Behçet's disease. Nasal symptoms are associated with the presence of nasal lesions, but there is no correlation between nasal findings and olfactory function.
Answer: Ulceration of the nasal dorsum can be caused by various factors, some of which are indeed rare. For instance, a case of COVID-19-related necrotic nasal ulceration has been reported, which is considered a rare occurrence since this is the first nasal ulcer of its kind to be documented in the literature (PUBMED:36846183). Additionally, nasal septum perforation can occur due to rare occupational exposure to heavy metals such as nickel, as seen in a patient who worked in a metallic coating and nickel-plating factory (PUBMED:28839290). Other uncommon causes of nasal septum perforation include the presence of magnetic nasal foreign bodies, which can cause ulcerations and potentially lead to perforation (PUBMED:15911018). While not specifically about the nasal dorsum, these cases highlight that rare etiologies can lead to nasal ulcerations and perforations. It is important to consider a wide range of potential causes, including occupational exposures, foreign bodies, and systemic diseases, when diagnosing and treating such conditions.
Instruction: Morphometric analyses of elastic tissue fibers in dermatofibroma: clues to etiopathogenesis? Abstracts: abstract_id: PUBMED:19615002 Morphometric analyses of elastic tissue fibers in dermatofibroma: clues to etiopathogenesis? Background: The etiopathogenesis of dermatofibroma (DF), a common benign fibrohistiocytic tumor, is debatable. The goal of this study was to ascertain the density of elastic tissue fibers in DF in an effort to investigate whether this provides an insight into its etiopathogenesis. Method: Three groups comprising eight cellular DFs, eight paucicellular DFs and eight scars (control group) were stained with a modified Verhoeff-van Gieson stain (without counterstain), and elastic fibers in three randomly selected fields within the lesional area per case were semiquantitatively analyzed and examined in a blinded fashion. Result: The mean density of elastic tissue fibers in cellular DF was 6.81 (1.38-15.89); in paucicellular DF, 2.46 (0.14-5.79); and in scar, 2.95 (0.97-10.69). Overall, significant differences in density of elastic tissue fibers were observed only between cellular DF and the other two groups (vs. paucicellular variant, p = 0.03 and vs. scar, p = 0.05). Morphological changes observed included thickness, clumping, elongation and waviness (cellular DF) and margination of elastic tissue fibers (paucicellular variant). Conclusion: While the jury still appears to be out regarding the etiopathogenesis of DF, the reduction in density of elastic tissue fibers in the paucicellular variant compared with its cellular counterpart lends credence to the concept of evolutionary stages of DF.
abstract_id: PUBMED:29309822 Fluorescence microscopy for the evaluation of elastic tissue patterns within fibrous proliferations of the skin on hematoxylin-eosin-stained slides. Background: Diagnosis of fibrous tumors can be challenging and expensive due to the use of special stains. Objective: To determine the usefulness of fluorescence microscopy in the evaluation of elastic tissue patterns on hematoxylin-eosin-stained slides. Methods: In total, 228 slides representing different fibrous tumors were evaluated for their elastic tissue patterns by fluorescence microscopy, and sensitivity and specificity were determined for relevant comparisons. Results: Fluorescence microscopy was found to be useful, especially for distinguishing dermatofibroma from dermatofibrosarcoma protuberans and dermatomyofibroma from other fibrous tumors. Limitations: In some cases, excessive background staining made patterns difficult to interpret. Conclusion: Evaluation of elastic tissue patterns by fluorescence microscopy in fibrous tumors is a cheap and efficient means to further delineate these often challenging tumors.
abstract_id: PUBMED:13522367 The question of alveolar, epitheliomorphous & myeloblastic-like histiocytoma of the elastic connective tissue. N/A
abstract_id: PUBMED:30142716 Atrophic dermatofibroma in an elderly male - a rarely described variant of a common lesion. Dermatofibroma frequently presents as a red-brown nodule on the extremities of the middle-aged. Atrophic dermatofibroma is a rare variant that has been most commonly described as an atrophic, depressed, erythematous lesion in females. The correct diagnosis of atrophic dermatofibroma is often hindered by its infrequent presentation. It has a female preponderance with an occurrence ratio of 10:1. We describe a case of an atrophic dermatofibroma on the back of an elderly man.
Skin biopsy demonstrated a spindle cell proliferation in a storiform pattern, loss of elastic fibers, and substantial atrophy of both the underlying dermis and subcutaneous tissue. An aggregation of elastic fibers was found in the periphery of the tumor. These histologic features supported the diagnosis of atrophic dermatofibroma. The dermal and adipocyte atrophy was likely responsible for the retracted appearance of the lesion.
abstract_id: PUBMED:7793762 Buschke-Ollendorff syndrome. The Buschke-Ollendorff syndrome (BOS) is a rare connective tissue disorder inherited in an autosomal dominant pattern, characterized by cutaneous lesions, dermatofibrosis lenticularis disseminata, and osteopoikilosis. We report a new case of this syndrome in a 66-year-old man, notable for its association with a protein C deficiency, another rare genetically transmitted disease. Diagnosis of BOS is difficult on the basis of the cutaneous lesions alone; it is therefore important to systematically perform bone X-rays in the presence of atypical pseudoxanthoma elasticum, disseminated collagenoma, or disseminated connective tissue or elastic nevi. The radiologically detectable osteopoikilotic bone lesions, evoking Paget's disease, readily establish the diagnosis. In our case, the association of a protein C deficiency with BOS may not be fortuitous because both the elastin and protein C genes are localized on chromosome 2q.
abstract_id: PUBMED:33896021 Possible case of dermatomyofibroma exhibiting paradoxical reduction of elastic fibers. N/A
abstract_id: PUBMED:11685477 Grading in soft tissue tumors: principles and problems. Histologic grading has been considered the most important prognostic factor for soft tissue sarcomas. Several grading systems have been proposed based on the assessment of morphologic features in heterogeneous groups of sarcomas. Currently, the French Federation of Cancer Centers (FNCLCC) and the National Cancer Institute (NCI) grading systems are the most commonly used. These systems are based on a few morphologic predictors of biologic behavior, which is justifiable because of the rarity of soft tissue sarcomas. Nonetheless, over- or underestimation of prognosis may occur because of an uneven representation of specific sarcomas with rather distinct biologic behaviors among studies of grading systems. In addition, lack of standardization of morphologic criteria and frequent omission of the influence of clinical factors on the final survival analyses preclude universal acceptance of a particular grading system. New advances in diagnostic imaging, quantitative morphometric technologies, cytogenetics, and molecular genetics, allied with alternative analytic data systems, may provide better validation, reproducibility, and prognostic capabilities for current and future grading systems. This article summarizes and critically analyzes the various important grading systems that have thus far been proposed and suggests alternatives for the elaboration of more reproducible systems with higher predictive capabilities.
abstract_id: PUBMED:10885409 Atrophic dermatofibroma. Elastophagocytosis by the tumor cells. A 55-year-old woman presented with an asymptomatic red plaque on the left upper back for 6 or 7 years. The lesion was depressed in response to finger pressure. The clinical diagnosis was anetoderma. Histopathologically, the characteristic cells of cellular dermatofibroma proliferated within the thinned dermis, which showed atrophy of about 60 or 70%.
The proliferating cells were positive for factor XIIIa and negative for CD34. The involved dermis showed loss of elastic fibers on elastica van Gieson stain. Electron microscopically, the proliferating cells phagocytized the elastic fibers. We report a typical case of atrophic dermatofibroma and show the possibility that the cause of this disease might be elastophagocytosis of the elastic fibers between the collagen fibers by the dermatofibroma cells.
abstract_id: PUBMED:12079232 Connective tissue tumors. Connective tissue consists of collagen, elastic fibers and ground substances produced by fibrocytes. These cells are usually spindle-shaped with slender nuclei and bipolar cytoplasmic extensions. Apart from labeling for vimentin and variable reactivity for factor XIIIa and CD34, fibrocytes are immunonegative. Electron microscopy reveals prominent endoplasmic reticulum, but is otherwise indistinct. Lesions with fibrocytic differentiation can be divided into five categories: scars, keloids, dermatofibromas, nodular fasciitis, and superficial fibromatoses are inflammatory lesions. Notably, dermatofibromas and their subcutaneous/deep soft tissue counterpart nodular fasciitis can present with a wide variety of clinicopathologic variants which may be misinterpreted as malignancies. Prurigo nodularis, chondrodermatitis nodularis helicis, acanthoma fissuratum, and knuckle pads are hyperplasias; fibroma molle, fibrous papules, connective tissue nevi, and elastofibroma are hamartomas; and fibroma of tendon sheath, pleomorphic fibroma, and giant cell tumor of tendon sheath are benign neoplasms. Deep fibromatoses, dermatofibrosarcoma protuberans, giant cell fibroblastoma, giant cell angiofibroma, hyalinizing spindle cell tumor with giant rosettes, solitary fibrous tumor, myxofibrosarcoma, low-grade fibromyxoid sarcoma, acral myxoinflammatory fibroblastic sarcoma, and classical fibrosarcoma are malignant neoplasms, that is, fibrosarcomas of variable malignant potential. Lesions dominated by myocytes/myofibroblasts, e.g. cutaneous myofibroma/infantile myofibromatosis, or by macrophages, e.g. xanthogranulomas, are not part of this chapter.
abstract_id: PUBMED:7826983 Nuclear morphometry of soft tissue tumors of fibrous origin. In this study we investigated the diagnostic significance of a set of different morphometric nuclear parameters in the differential diagnosis of soft tissue tumors. Nuclear area, the standard deviation of the nuclear area, the shape factor and other parameters such as Feret's diameter and Martin's radii were assessed using a computer-assisted image analyzer system. A statistically significant difference (p < 0.01) between benign and malignant tumors and tumor-like lesions could be confirmed for the nuclear area and the standard deviation of the nuclear area, with the significance level being lower (p = 0.05) for the latter parameter. The shape factor also discriminated between the examined groups. Reclassification of the assessed histological diagnosis was performed by linear discriminant analysis using all possible combinations of the different nuclear parameters. This procedure disclosed an increasing rate of correctly reclassified cases with a rising number of parameters applied. We conclude that the assessment of nuclear parameters may be helpful in the correct diagnosis and differential diagnosis of soft tissue tumors and tumor-like lesions of fibrous origin.
Answer: Morphometric analyses of elastic tissue fibers in dermatofibroma (DF) have provided some insights into the etiopathogenesis of this common benign fibrohistiocytic tumor. A study that aimed to ascertain the density of elastic tissue fibers in DF found significant differences in the density of these fibers between cellular DF and other groups, such as paucicellular DF and scars. The mean density of elastic tissue fibers in cellular DF was higher compared to paucicellular DF and scars. Morphological changes in the elastic tissue fibers, such as thickness, clumping, elongation, and waviness in cellular DF, and margination in the paucicellular variant, were observed. The reduction in the density of elastic tissue fibers in the paucicellular variant compared to the cellular counterpart supports the concept of evolutionary stages of DF (PUBMED:19615002). Additionally, fluorescence microscopy has been found useful in evaluating elastic tissue patterns on hematoxylin-eosin-stained slides of fibrous tumors, which can help distinguish DF from other fibrous tumors like dermatofibrosarcoma protuberans and dermatomyofibroma. This method offers a cheap and efficient means to further delineate these often challenging tumors (PUBMED:29309822). Atrophic dermatofibroma, a rare variant of DF, presents with a loss of elastic fibers and substantial atrophy of both the underlying dermis and subcutaneous tissue. Histologic features such as spindle cell proliferation in a storiform pattern and aggregation of elastic fibers at the periphery of the tumor support the diagnosis of atrophic dermatofibroma (PUBMED:30142716). In conclusion, morphometric analyses of elastic tissue fibers in DF provide clues to its etiopathogenesis, suggesting that changes in the density and morphology of elastic fibers may be associated with the evolutionary stages of the tumor. These findings contribute to the understanding of DF and may aid in its diagnosis and differentiation from other fibrous tumors.
Instruction: Process of care and outcome after acute myocardial infarction for patients with mental illness in the VA health care system: are there disparities? Abstracts: abstract_id: PUBMED:12650380 Process of care and outcome after acute myocardial infarction for patients with mental illness in the VA health care system: are there disparities? Objective: To compare process of care and outcome after acute myocardial infarction for patients with and without mental illness cared for in the Veterans Health Administration (VA) health care system. Data Sources/setting: Primary clinical data from 81 VA hospitals. Study Design: This was a retrospective cohort study of 4,340 veterans discharged with clinically confirmed acute myocardial infarction. Of these, 859 (19.8 percent) met the definition of mental illness. Measures were age-adjusted in-hospital and 90-day cardiac procedure use; age-adjusted relative risks (RR) of use of thrombolytic therapy, beta-blockers, angiotensin-converting enzyme (ACE) inhibitors, or aspirin at discharge; and risk-adjusted 30-day and one-year mortality. Results: Patients with mental illness were marginally less likely than those without mental illness to undergo in-hospital angiography (age-adjusted RR 0.90 [95 percent confidence interval: 0.83, 0.98]), but there was no significant difference in the age-adjusted RR of coronary artery bypass graft surgery in the 90 days after admission (0.85 [0.69, 1.05]), or in the receipt of medications of known benefit. For example, ideal candidates with and without mental illness were equally likely to receive beta-blockers at the time of discharge (age-adjusted RR 0.92 [0.82, 1.02]). The risk-adjusted odds ratio (OR) for death in patients with mental illness versus those without mental illness within 30 days was 1.00 (0.75, 1.32), and for death within one year was 1.25 (1.00, 1.53). Conclusions: Veterans Health Administration patients with mental illness were marginally less likely than those without mental illness to receive diagnostic angiography, and no less likely to receive revascularization or medications of known benefit after acute myocardial infarction. Mortality at one year may have been higher, although this finding did not reach statistical significance. These findings are consistent with other studies showing reduced health care disparities in the VA for other vulnerable groups, and suggest that an integrated health care system with few financial barriers to health care access may attenuate some health care disparities. Further work should address how health care organizational features might narrow disparities in health care for vulnerable groups.
abstract_id: PUBMED:21975821 Yield of practice-based depression screening in VA primary care settings. Background: Many patients who should be treated for depression are missed without effective routine screening in primary care (PC) settings. Yearly depression screening by PC staff is mandated in the VA, yet little is known about the expected yield from such screening when administered on a practice-wide basis. Objective: We characterized the yield of practice-based screening in diverse PC settings, as well as the care needs of those assessed as having depression. Design: Baseline enrollees in a group randomized trial of implementation of collaborative care for depression. Participants: Randomly sampled patients with a scheduled PC appointment in ten VA primary care clinics spanning five states.
Measurements: PHQ-2 screening followed by the full PHQ-9 for screen positives, with standardized sociodemographic and health status questions. Results: Practice-based screening of 10,929 patients yielded 20.1% positive screens, 60% of whom were assessed as having probable major depression based on the PHQ-9 (11.8% of all screens) (n = 1,313). In total, 761 patients with probable major depression completed the baseline assessment. Comorbid mental illnesses (e.g., anxiety, PTSD) were highly prevalent. Medical comorbidities were substantial, including chronic lung disease, pneumonia, diabetes, heart attack, heart failure, cancer and stroke. Nearly one-third of the depressed PC patients reported recent suicidal ideation (based on the PHQ-9). Sexual dysfunction was also common (73.3%), being both longstanding (95.1% with onset >6 months) and frequently undiscussed and untreated (46.7% discussed with any health care provider in past 6 months). Conclusions: Practice-wide survey-based depression screening yielded more than twice the positive-screen rate demonstrated through chart-based VA performance measures. The substantial level of comorbid physical and mental illness among PC patients precludes solo management by either PC or mental health (MH) specialists. PC practice- and provider-level guideline adherence is problematic without systems-level solutions supporting adequate MH assessment, PC treatment and, when needed, appropriate MH referral.
abstract_id: PUBMED:31441939 Association of Mental Health Disorders With Health Care Utilization and Costs Among Adults With Chronic Disease. Importance: A population-based study using validated algorithms to estimate the costs of treating people with chronic disease with and without mental health disorders is needed. Objective: To determine the association of mental health disorders with health care costs among people with chronic diseases. Design, Setting, And Participants: This population-based cohort study in the Canadian province of Alberta collected data from April 1, 2012, to March 31, 2015, among 991 445 adults 18 years and older with a chronic disease (ie, asthma, congestive heart failure, myocardial infarction, diabetes, epilepsy, hypertension, chronic pulmonary disease, or chronic kidney disease). Data analysis was conducted from October 2017 to August 2018. Exposures: Mental health disorder (ie, depression, schizophrenia, alcohol use disorder, or drug use disorder). Main Outcomes And Measures: Resource use, mean total unadjusted and adjusted 3-year health care costs, and mean total unadjusted 3-year costs for hospitalization and emergency department visits for ambulatory care-sensitive conditions. Results: Among 991 445 participants, 156 296 (15.8%) had a mental health disorder. Those with no mental health disorder were older (mean [SD] age, 58.1 [17.6] years vs 55.4 [17.0] years; P < .001) and less likely to be women (50.4% [95% CI, 50.3%-50.5%] vs 57.7% [95% CI, 57.4%-58.0%]; P < .001) than those with mental health disorders. For those with a mental health disorder, mean total 3-year adjusted costs were $38 250 (95% CI, $36 476-$39 935), and for those without a mental health disorder, mean total 3-year adjusted costs were $22 280 (95% CI, $21 780-$22 760). Having a mental health disorder was associated with significantly higher resource use, including hospitalization and emergency department visit rates, length of stay, and hospitalization for ambulatory care-sensitive conditions.
Higher resource use by patients with mental health disorders was not associated with health care presentations owing to chronic diseases compared with patients without a mental health disorder (chronic disease hospitalization rate per 1000 patient days, 0.11 [95% CI, 0.11-0.12] vs 0.06 [95% CI, 0.06-0.06]; P < .001; overall hospitalization rate per 1000 patient days, 0.88 [95% CI, 0.87-0.88] vs 0.43 [95% CI, 0.43-0.43]; P < .001). Conclusions And Relevance: This study suggests that mental health disorders are associated with substantially higher resource utilization and health care costs among patients with chronic diseases. These findings have clinical and health policy implications.
abstract_id: PUBMED:11886320 Racial disparities in the quality of care for enrollees in Medicare managed care. Context: Substantial racial disparities in the use of some health services exist; however, much less is known about racial disparities in the quality of care. Objective: To assess racial disparities in the quality of care for enrollees in Medicare managed care health plans. Design And Setting: Observational study, using the 1998 Health Plan Employer Data and Information Set (HEDIS), which summarized performance in calendar year 1997 for 4 measures of quality of care (breast cancer screening, eye examinations for patients with diabetes, beta-blocker use after myocardial infarction, and follow-up after hospitalization for mental illness). Participants: A total of 305 574 (7.7%) beneficiaries who were enrolled in Medicare managed care health plans had data for at least 1 of the 4 HEDIS measures and were aged 65 years or older. Main Outcome Measures: Rates of breast cancer screening, eye examinations for patients with diabetes, beta-blocker use after myocardial infarction, and follow-up after hospitalization for mental illness. Results: Blacks were less likely than whites to receive breast cancer screening (62.9% vs 70.9%; P<.001), eye examinations for patients with diabetes (43.6% vs 50.4%; P = .02), beta-blocker medication after myocardial infarction (64.1% vs 73.8%; P<.005), and follow-up after hospitalization for mental illness (33.2% vs 54.0%; P<.001). After adjustment for potential confounding factors, racial disparities were still statistically significant for eye examinations for patients with diabetes, beta-blocker use after myocardial infarction, and follow-up after hospitalization for mental illness. Conclusion: Among Medicare beneficiaries enrolled in managed care health plans, blacks received poorer quality of care than whites.
abstract_id: PUBMED:23821169 Mental health care reforms in Asia: the regional health care strategic plan: the growing impact of mental disorders in Japan. In April 2013 Japan designated mental disorders as the fifth "priority disease" for national medical services, after cancer, stroke, acute myocardial infarction, and diabetes. All prefectures will be required to assess local mental health needs and develop necessary service components. This column provides an overview of the Regional Health Care Strategic Plan in the context of mental health and welfare reforms. The goals of the plan are to alter the balance between institutional and community-based care for patients with severe and persistent mental disorders, integrate general medical and mental health care, and support greater independence for people with mental disorders. It is a political challenge for Japan to reallocate resources to rebalance care services while maintaining free access to care.
abstract_id: PUBMED:16378784 Quality of care in for-profit and not-for-profit health plans enrolling Medicare beneficiaries. Background: For-profit health plans now enroll the majority of Medicare beneficiaries who select managed care. Prior research has produced conflicting results about whether for-profit health plans provide lower quality of care. Objective: The objective was to compare the quality of care delivered by for-profit and not-for-profit health plans using Medicare Health Plan Employer Data and Information Set (HEDIS) clinical measures. Research Design: This was an observational study comparing HEDIS scores in for-profit and not-for-profit health plans that enrolled Medicare beneficiaries in the United States during 1997. Outcome Measures: Outcome measures included health plan quality scores on each of 4 clinical services assessed by HEDIS: breast cancer screening, diabetic eye examination, beta-blocker medication after myocardial infarction, and follow-up after hospitalization for mental illness. Results: The quality of care was lower in for-profit health plans than not-for-profit health plans on all 4 of the HEDIS measures we studied (67.5% vs 74.8% for breast cancer screening, 43.7% vs 57.7% for diabetic eye examination, 63.1% vs 75.2% for beta-blocker medication after myocardial infarction, and 42.1% vs 60.4% for follow-up after hospitalization for mental illness). Adjustment for sociodemographic case-mix and health plan characteristics reduced but did not eliminate the differences, which remained statistically significant for 3 of the 4 measures (not beta-blocker medication after myocardial infarction). Different geographic locations of for-profit and not-for-profit health plans did not explain these differences. Conclusion: By using standardized performance measures applied in a mandatory measurement program, we found that for-profit health plans provide lower quality of care than not-for-profit health plans. Special efforts to monitor and improve the quality of for-profit health plans may be warranted.
abstract_id: PUBMED:35146280 Care for the chronically ill in Germany - The challenges during the COVID-19 pandemic. The COVID-19 pandemic is posing major challenges to the health care sector. This scoping review compiles evidence concerning changes to health care service availability and utilisation as well as possible impacts on health for selected groups of chronically ill people in Germany. The focus is on cancer, cardiovascular diseases, diabetes mellitus and mental disorders. Most empirical data available concerned inpatient care and showed a clear decline in the utilisation of inpatient treatments in March and April 2020 in the areas of oncology and cardiology as well as in mental health. For cardiovascular emergencies such as heart attack and stroke, a decline was observed especially regarding less serious cases. Although there were indications of treatment delays, there was no evidence thus far that emergency care had been generally compromised due to adjustments to inpatient care capacities. In the outpatient setting, extensive adjustments to health care services availability were observed for all disease groups considered. Overall, very limited empirical data were available. In particular, hardly any data were available on how changes in care impacted population health. There is an urgent need for continuous surveillance and evaluation based on health care and epidemiological data.
abstract_id: PUBMED:35049490 Association of severe mental illness with stroke outcomes and process-of-care quality indicators: nationwide cohort study. Background: Severe mental illness (SMI) is associated with increased stroke risk, but little is known about how SMI relates to stroke prognosis and receipt of acute care. Aims: To determine the association between SMI and stroke outcomes and receipt of process-of-care quality indicators (such as timely admission to stroke unit). Method: We conducted a cohort study using routinely collected linked data-sets, including adults with a first hospital admission for stroke in Scotland during 1991-2014, with process-of-care quality indicator data available from 2010. We identified pre-existing schizophrenia, bipolar disorder and major depression from hospital records. We used logistic regression to evaluate 30-day, 1-year and 5-year mortality and receipt of process-of-care quality indicators by pre-existing SMI, adjusting for sociodemographic and clinical factors. We used Cox regression to evaluate further stroke and vascular events (stroke and myocardial infarction). Results: Among 228 699 patients who had had a stroke, 1186 (0.5%), 859 (0.4%), and 7308 (3.2%) had schizophrenia, bipolar disorder and major depression, respectively. Overall, median follow-up was 2.6 years. Compared with adults without a record of mental illness, 30-day mortality was higher for schizophrenia (adjusted odds ratio (aOR) = 1.33, 95% CI 1.16-1.52), bipolar disorder (aOR = 1.37, 95% CI 1.18-1.60) and major depression (aOR = 1.11, 95% CI 1.05-1.18). Each disorder was also associated with a markedly increased risk of 1-year and 5-year mortality and further stroke and vascular events. There were no clear differences in receipt of process-of-care quality indicators. Conclusions: Pre-existing SMI was associated with higher risks of mortality and further vascular events. Urgent action is needed to better understand and address the reasons for these disparities. abstract_id: PUBMED:29781683 Tuberculosis non-communicable disease comorbidity and multimorbidity in public primary care patients in South Africa. Background: Little is known about the prevalence of non-communicable disease (NCD) multimorbidity among tuberculosis (TB) patients in Africa. Aim and setting: The aim of this study was to assess the prevalence of NCD multimorbidity, its pattern and impact on adverse health outcomes among patients with TB in public primary care in three selected districts of South Africa. Methods: In a cross-sectional survey, new TB and TB retreatment patients were interviewed, and medical records assessed in consecutive sampling within 1 month of anti-TB treatment. The sample included 4207 (54.5% men and 45.5% women) TB patients from 42 primary care clinics in three districts. Multimorbidity was measured as the simultaneous presence of two or more of 10 chronic conditions, including myocardial infarction or angina pectoris, arthritis, asthma, chronic lung disease, diabetes mellitus, hypertension, dyslipidaemia, malignant neoplasms, tobacco and alcohol-use disorder. Results: The prevalence of comorbidity (with one NCD) was 26.9% and multimorbidity (with two or more NCDs) was 25.3%. We identified three patterns of multimorbidity: (1) cardio-metabolic disorders; (2) respiratory disorders, arthritis and cancer; and (3) substance-use disorders. The likelihood of multimorbidity was higher in older age, among men, and was lower in those with higher education and socio-economic status.
Physical health decreased, and common mental disorders and post-traumatic stress disorder increased, with an increasing number of chronic conditions. Conclusion: High NCD comorbidity and multimorbidity were found among TB patients, predicted by socio-economic disparity. abstract_id: PUBMED:1638455 Psychiatric morbidity during the early phase of coronary care for myocardial infarction: association with cardiac diagnosis and outcome. We evaluated the association of psychiatric morbidity during the early phase of admission to a coronary care unit with cardiac diagnosis and subsequent morbidity. Ninety-two patients admitted for the first time for presumed myocardial infarction were evaluated within 48 hours of hospitalization. Anxiety and depressive symptoms and cognitive impairment were rated. Data were collected on cardiac diagnosis and morbidity. Three and 12 months after hospitalization, cardiac morbidity, psychiatric symptoms and psychosocial morbidity were assessed. Answer: The retrospective cohort study of 4,340 veterans discharged with clinically confirmed acute myocardial infarction within the VA health care system found that patients with mental illness were marginally less likely to undergo in-hospital angiography compared to those without mental illness. However, there was no significant difference in the likelihood of receiving coronary artery bypass graft surgery within 90 days after admission or in the receipt of medications of known benefit, such as beta-blockers at discharge. The study also found that the risk-adjusted odds ratio for death within one year was higher for patients with mental illness, although this finding did not reach statistical significance. These results suggest that while there are some disparities in the process of care for patients with mental illness after an acute myocardial infarction, the VA health care system may attenuate some health care disparities compared to other settings (PUBMED:12650380). This is consistent with the broader context of health care for individuals with mental illness, where studies have shown that mental health disorders are associated with higher health care costs and resource utilization among patients with chronic diseases (PUBMED:31441939). Additionally, there is evidence of disparities in the quality of care for mental health conditions, such as follow-up after hospitalization for mental illness, where racial disparities have been observed in Medicare managed care health plans (PUBMED:11886320). In the context of the VA health care system, the findings suggest that an integrated health care system with few financial barriers to health care access may help reduce some disparities in care for vulnerable groups, including those with mental illness. However, further work is needed to understand how health care organizational features might narrow these disparities (PUBMED:12650380).
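The hospitalization figures cited in this answer (0.11 vs 0.06 chronic disease hospitalizations per 1000 patient days) are incidence rates: event counts divided by accumulated patient-time, scaled to 1,000 patient-days. A minimal Python sketch of that calculation, with hypothetical counts chosen only to reproduce the reported rates (the abstract does not give raw numerators or denominators):

    def rate_per_1000_patient_days(events: int, patient_days: float) -> float:
        # Incidence rate scaled to 1,000 patient-days of follow-up
        return 1000.0 * events / patient_days

    # Hypothetical counts, for illustration only
    mh_rate = rate_per_1000_patient_days(events=1_100, patient_days=10_000_000)   # 0.11
    no_mh_rate = rate_per_1000_patient_days(events=600, patient_days=10_000_000)  # 0.06
    print(mh_rate / no_mh_rate)  # ~1.8-fold higher chronic disease hospitalization rate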
Instruction: Detection of masked hypertension by home blood pressure measurement: is the number of measurements an important issue? Abstracts: abstract_id: PUBMED:15564984 Detection of masked hypertension by home blood pressure measurement: is the number of measurements an important issue? Background: Office blood pressure (OBP) and home blood pressure (HBP) enable the identification of patients with masked hypertension. Masked hypertension is defined by normal OBP and high HBP and is known as a pejorative cardiovascular risk factor. Objective: The objective was to evaluate in the SHEAF study the influence of the number of office or home blood pressure measurements on the classification of patients as masked hypertensives. Methods: Patients with OBP <140/90 mmHg (mean of six values: three measurements at two separate visits, V1 and V2) and HBP >135/85 mmHg (mean of all valid measurements performed over a 4-day period) were the masked hypertensive reference group. The consistency of the classification was evaluated by using five definitions of HBP values (mean of the first 3, 6, 9, 12 and 15 measurements) and two definitions of OBP values (mean of three measurements at V1 and mean of three measurements at V2). Results: Among the 4939 treated hypertensives included in the SHEAF study, 463 (9.4%) were classified as masked hypertensives (reference group). By decreasing the number of office or home measurements, the prevalence of masked hypertension ranged from 8.9-12.1%. The sensitivity of the classification ranged from 94% to 69%; therefore, 6-31% of the masked hypertensives were not detected. The specificity ranged from 98% to 94%; therefore, 1-6% of patients were wrongly classified as masked hypertensives. Conclusion: A limited number of home and office BP measurements allowed the detection of masked hypertension with a high specificity and a low sensitivity. A sufficient number of measurements (three measurements at two visits for OBP and three measurements in the morning and in the evening over 2 days for HBP) are required to diagnose masked hypertension. abstract_id: PUBMED:34523884 Home Blood Pressure Monitoring. Home blood pressure monitoring provides important diagnostic information beyond in-office blood pressure readings and offers similar results to ambulatory blood pressure monitoring. Home blood pressure monitoring involves patients independently measuring their blood pressure with an electronic device, whereas ambulatory blood pressure monitoring involves patients wearing a portable monitor for 24 to 48 hours. Although ambulatory blood pressure monitoring is the diagnostic standard for measurement, home blood pressure monitoring is more practical and accessible to patients, and its use is recommended by the U.S. Preventive Services Task Force and the American College of Cardiology/American Heart Association. Home blood pressure monitoring generally results in lower blood pressure readings than in-office measurements, can confirm the diagnosis of hypertension after an elevated office blood pressure reading, and can identify patients with white coat hypertension or masked hypertension. Best practices for home blood pressure monitoring include using an appropriately fitting upper-arm cuff on a bare arm, emptying the bladder, avoiding caffeinated beverages for 30 minutes before taking the measurement, resting for five minutes before taking the measurement, keeping the feet on the floor uncrossed and the arm supported with the cuff at heart level, and not talking during the reading.
An average of multiple readings, ideally two readings in the morning and again in the evening separated by at least one minute each, is recommended for one week. Home blood pressure readings can be used in hypertension quality measures. abstract_id: PUBMED:16205478 Use of home blood-pressure monitoring in the detection, treatment and surveillance of hypertension. Purpose Of Review: Use of home blood-pressure monitoring is increasing but the technique and the equipment have limitations. We provide an overview of recent evidence in this rapidly evolving field. Recent Findings: Home blood-pressure monitoring is an acceptable method for screening patients for hypertension. There is increasing evidence supporting the predictive power of home blood pressure for stroke risk even in the general population. The identification of white-coat and masked hypertension remains an important role for home blood-pressure monitoring. Unvalidated equipment and poor patient technique are major concerns. The purchase of devices needs to be linked to a simple patient-education programme, which is perhaps an opportunity for collaboration between healthcare providers and commercial companies. Devices that store the blood-pressure measurements in the memory are preferred to ensure accuracy of reporting. Data-transmission systems providing automatic storage, transmission and reporting of blood pressure, direct involvement of the patient and potentially a reduced number of hospital/general practitioner visits, offer significant advantages. To reduce patient anxiety, overuse of home blood-pressure monitoring should be avoided but there is the potential for self-modification of treatment, subject to certain safeguards. Summary: Self-monitoring of blood pressure is developing rapidly, linked to increasing awareness of the impact of reducing high blood pressure on public health and the marketing/advertising strategies used to sell automatic devices. Home blood-pressure monitoring has a role in the detection and management of blood pressure, but not at the expense of careful blood-pressure measurement in the office and adherence to national guidelines. abstract_id: PUBMED:26364650 Home blood pressure measurements: advantages and disadvantages compared to office and ambulatory monitoring. Home blood pressure monitoring (HBPM) is an easy and standardized tool to measure blood pressure (BP) at home, which is recommended by several national and international guidelines to obtain reliable BP values. It is a useful and dependable measure of BP, which can add information to the more common and standardized methods, such as office (OBPM) and ambulatory blood pressure measurement (ABPM), and sometimes substitute for them. Along with ABPM, HBPM detects out-of-office BP levels, helping to identify the white-coat effect and masked hypertension. Moreover, the large number of BP measurements obtained with HBPM has been demonstrated to be associated with future cardiovascular events and to provide information on day-to-day variability; this is a new aspect, the clinical significance of which is still under evaluation. By giving patients a more active role in the management of their chronic disease, HBPM can improve treatment adherence. The present review focuses on all these aspects and describes the pros and cons of HBPM use compared to OBPM and ABPM. In conclusion, although some aspects need to be clarified, the scientific evidence collected so far should encourage the more widespread use of HBPM in clinical practice. 
abstract_id: PUBMED:27745830 Outpatient blood pressure monitoring is not always necessary. It is clear that clinical measurements of blood pressure can lead to errors in the diagnostic process and follow-up of patients with hypertension. Scientific societies recommend other measurement methods, such as home measurements and outpatient monitoring. Outpatient monitoring might be the gold standard but nowadays has an important limitation: its availability. Home measurements solve 80-90% of the doubts of the diagnostic process and follow-up of patients with hypertension, and their higher availability and acceptance by the patient are clear. Home measurements should be used in the diagnostic process of arterial hypertension as a screening test for white coat hypertension and masked hypertension. They should be used as a screening test for resistant hypertension in the follow-up of patients with high blood pressure. Besides, in the follow-up of patients with hypertension home measurements have shown that they can contribute to treatment adherence, reduce clinical inertia and make data teletransmission possible, aspects that have proven to help improve the degree of control of hypertensive patients. Therefore, home measurements would be the method of choice for the diagnosis and follow-up of most patients with hypertension. We should consider home measurements and outpatient monitoring as complementary methods for the diagnosis and follow-up of patients with high blood pressure. abstract_id: PUBMED:37627945 Home versus Clinic Blood Pressure Monitoring: Evaluating Applicability in Hypertension Management via Telemedicine. Hypertension is a significant public health concern in Saudi Arabia, affecting 28.6% of the population. Despite the availability of effective treatments, optimal blood pressure control is not always achieved, highlighting the need for effective management strategies. This study aimed to evaluate the applicability of home, compared to clinic, blood pressure measurements for managing hypertension in the Qassim region of Saudi Arabia. The study included 85 adults undergoing antihypertensive treatment. Home blood pressure measurements were obtained during the day and the evening using automated oscillometric sphygmomanometers, whereas clinic measurements were taken during clinic hours. Home blood pressure readings were significantly lower than clinic blood pressure readings, with mean differences of 20.4 mmHg and 4.1 mmHg for systolic and diastolic blood pressures, respectively. There was a positive correlation between the clinic systolic and diastolic blood pressures (r = 0.549, p < 0.001) and a weak correlation between the daytime home and clinic systolic blood pressures (r = 0.218, p < 0.05). This study provides insight into the applicability of home blood pressure monitoring, which may aid in the development of more effective hypertension management strategies, particularly the use of morning home blood pressure monitoring to aid treatment decisions through telehealth medicine. abstract_id: PUBMED:15948383 Clinical significance of home blood pressure measurements for diagnosis of hypertension in the elderly. Systolic blood pressure rises with increasing age. The frequency of white-coat hypertension and masked hypertension also increases with age. Elderly persons tend to have an abnormal circadian rhythm of blood pressure. Home blood pressure measurements are very useful to determine such pathophysiological conditions.
Home blood pressure is useful not only for diagnosis of hypertension but also for treatment and management of hypertension, i.e. the choice and titration of antihypertensive drugs. The minimal antihypertensive effect and the duration of action of antihypertensive drugs can be determined by home blood pressure measurements; the latter is established by comparing the antihypertensive effect of the drug in the morning with that in the evening, i.e. the morning-evening (ME) ratio. abstract_id: PUBMED:26303331 Comparison of ambulatory blood pressure measurement with home, office and pharmacy measurements: is arterial blood pressure measured at pharmacy reliable? Rationale, Aims And Objectives: Standardizing arterial blood pressure (BP) measurement is difficult because measurements are performed by different personnel, such as doctors or pharmacy employees. We investigated the reliability between different BP measurement methods. Methods: The study was conducted in an internal medicine service with 160 patients in Ankara, Turkey. First, the subjects' BP was measured by a doctor. Then, 24-hour BP monitoring devices were placed. Participants were then instructed to measure their BPs both at home and at a pharmacy. The next day, arterial BP was measured by the doctor for the second time. Results: The prevalence rates of masked and white coat hypertension were 8.8% (n = 14) and 8.1% (n = 13), respectively. There were no statistically significant differences between ambulatory measurement and home, office and pharmacy measurements (P > 0.05). The consistency rate between ambulatory and home measurements was 97.5% (kappa = 0.947, P < 0.001). The consistency rate between ambulatory and pharmacy measurements was 82.5% (kappa = 0.634, P < 0.001). When compared with ambulatory measurement, the most sensitive (98.0%) and most specific (96.8%) method was home measurement. There was a moderate positive correlation between ambulatory and other measurements in both systolic and diastolic values. There was a positive and very strong correlation between ambulatory and home measurements of systolic and diastolic ABP values (r = 0.926 and r = 0.968, respectively) and there was a statistically significant relation between these measurements (P < 0.001). Conclusion: The results of all measurement methods were close to each other. The most sensitive and specific method was home measurement when compared with ambulatory measurement. However, both office and pharmacy measurements also had high sensitivity and specificity.
Readings from the home blood pressure monitoring schedule of 4 to 7 days, with the first day excluded (12-24 readings), should be averaged to provide values for decision making. abstract_id: PUBMED:37878534 The HOPE Asia Network consensus on blood pressure measurements corresponding to office measurements: Automated office, home, and ambulatory blood pressures. For adopting recently introduced hypertension phenotypes categorized using office and out of office blood pressure (BP) for the diagnosis of hypertension and antihypertension drug therapy, it is mandatory to define the corresponding out of office BP with the specific target BP recommended by the major guidelines. Such conditions include white-coat hypertension (WCH), masked hypertension (MH), white-coat uncontrolled hypertension (WUCH), and masked uncontrolled hypertension (MUCH). Here, the authors review the relevant literature and discuss the related issues to facilitate the use of corresponding BPs for proper diagnosis of WCH, MH, WUCH, and MUCH in the setting of standard target BP as well as intensive target BP. The methodology of deriving the corresponding BP has evolved from statistical methods such as standard deviation, percentile value, and regression to an outcome-based approach using pooled international cohort study data and comparative analysis in randomized clinical trials for target BPs such as the SPRINT and STEP studies. Corresponding BPs to 140/90 and 130/80 mm Hg in office BP are important for safe and strict achievement of intensive BP targets. The corresponding home, daytime, and 24-h BPs to 130/80 mm Hg in office BP are 130/80, 130/80, and 125/75 mm Hg, respectively. However, researchers have found some discrepancies among the home corresponding BPs. As the tentative criterion for de-escalation of antihypertensive therapy shown in European guidelines was 120 mm Hg in office BP, the corresponding home, daytime, and 24-h systolic BPs to 120 mm Hg in office systolic BP are 120, 120, and 115 mm Hg, respectively. Answer: Yes, the number of home blood pressure measurements is an important issue in the detection of masked hypertension. The SHEAF study found that by decreasing the number of office or home measurements, the prevalence of masked hypertension ranged from 8.9-12.1%, and the sensitivity of the classification ranged from 94-69%, meaning that 6-31% of masked hypertensives were not detected when fewer measurements were used. The specificity ranged from 98-94%, indicating that 1-6% of patients were wrongly classified as masked hypertensives with fewer measurements. Therefore, a sufficient number of measurements are required to diagnose masked hypertension accurately, specifically three measurements at two visits for office blood pressure (OBP) and three measurements in the morning and in the evening over 2 days for home blood pressure (HBP) (PUBMED:15564984). Home blood pressure monitoring (HBPM) is recommended by various guidelines and is considered to provide reliable blood pressure values. Best practices for HBPM include taking an average of multiple readings, ideally two readings in the morning and again in the evening, for one week (PUBMED:34523884). HBPM is useful for detecting white-coat and masked hypertension and has been shown to be associated with future cardiovascular events (PUBMED:26364650). It is also recommended for the diagnosis and follow-up of most patients with hypertension, as it can contribute to treatment adherence and reduce clinical inertia (PUBMED:27745830).
Moreover, HBPM has been found to have a high sensitivity and specificity when compared with ambulatory blood pressure measurement (ABPM), which is considered the gold standard (PUBMED:26303331). It is also more practical and accessible to patients compared to ABPM (PUBMED:34523884). The HOPE Asia Network consensus also emphasizes the importance of corresponding home blood pressure measurements to office measurements for the diagnosis of various hypertension phenotypes (PUBMED:37878534). In conclusion, the number of home blood pressure measurements is crucial for the accurate detection of masked hypertension, and a sufficient number of measurements are necessary to ensure high sensitivity and specificity in classification.
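The sensitivity, specificity and predictive values quoted in this answer follow from standard two-by-two classification arithmetic against the SHEAF reference group of 463 masked hypertensives. A minimal Python sketch with hypothetical cell counts (the abstract reports only the derived percentages, not the full table):

    def diagnostic_indices(tp: int, fp: int, fn: int, tn: int):
        sensitivity = tp / (tp + fn)   # masked hypertensives correctly detected
        specificity = tn / (tn + fp)   # non-masked patients correctly excluded
        npv = tn / (tn + fn)           # probability a negative classification is correct
        return sensitivity, specificity, npv

    # Hypothetical counts: 463 true masked hypertensives among 4,939 treated patients
    sens, spec, npv = diagnostic_indices(tp=435, fp=90, fn=28, tn=4386)
    print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, npv={npv:.1%}")  # 94%, 98%, 99.4%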
Instruction: Retroperitoneoscopic Adrenalectomy in Obese Patients: Is It Suitable? Abstracts: abstract_id: PUBMED:37497309 The Impact of Obesity on Mortality and Complications in Posterior Retroperitoneoscopic Adrenalectomy. Background Obesity is a global epidemic. It influences surgical technique, ergonomics, safety, and outcomes. However, there is a paucity of evidence of obesity-related impact in posterior retroperitoneoscopic adrenalectomy (PRA). This study compared perioperative outcomes of obese and non-obese participants undergoing PRA. Methodology This is a multi-center retrospective cohort study of elective PRA from March 2014 to December 2022. Patient demographics, surgical techniques, clinicopathological parameters, and outcomes, including overall complication rate, were analyzed using SPSS version 27 (IBM Corp., Armonk, NY, USA). Results Seventy-five patients underwent a PRA, of which 97.3% were completed retroperitoneoscopically. The overall complication rate was 9.3%, and on subgroup analysis, the obese cohort had a lower percentage complication profile at 6.5%. Male participants comprised 52%, with a median age of 55 (IQR=19). The median BMI was 29.0 (IQR=8), of which 41% were obese, and 40% were overweight. Univariate analysis showed that being obese was not significantly associated with a higher complication rate (p=0.471). In addition, there was no significant increase in conversion (p=0.508), bleeding/transfusion (p=0.508), surgical site infection (SSI; p=1.000), incisional hernia (p=1.000), ICU or high dependency unit admission (p=0.292) and any-cause mortality (p=1.000). No sentinel deaths directly related to PRA were recorded. Procedure duration was longer in obese (117 mins) vs. non-obese participants (88.9 mins, p=0.022). However, there was no significant difference in the length of hospital stay (p=0.592). The cohort conversion rate was 2.7%, and tumor size was associated with a higher conversion rate (35.4 vs. 62.5 mm, p=0.040). Conclusion Posterior retroperitoneoscopic adrenalectomy can be a safe procedure in obese populations, and obesity does not increase perioperative morbidity or mortality. abstract_id: PUBMED:27826779 Single-incision retroperitoneoscopic adrenalectomy: a North American experience. Background: Endoscopic adrenalectomy is currently performed using multiple ports placed either transabdominally or retroperitoneally. We report our initial experience with single-incision retroperitoneoscopic adrenalectomy (SIRA). Methods: A prospective database of patients undergoing adrenalectomy from December 2013 through March 2016 was analyzed. We adopted conventional retroperitoneoscopic adrenalectomy (CORA) in December 2013 and transitioned to SIRA in March 2015. SIRA was performed using three trocars placed through a single 2-cm incision below the 12th rib. Clinical characteristics and outcomes were compared between patients undergoing SIRA and CORA. Results: One hundred and five adrenalectomies were performed in 102 patients: 34 laparoscopic transperitoneal, 24 CORA, 37 SIRA and 7 open. The SIRA and CORA groups were similar with respect to clinical characteristics (SIRA vs. CORA: mean BMI 27.0 vs. 28.8 kg/m2, maximum BMI 38.9 vs. 44.3 kg/m2; mean nodule size 3.2 vs. 3.2 cm, maximum nodule size 8.0 vs. 6.0 cm). One patient undergoing SIRA required placement of an additional 5-mm port because of extensive adhesions. No patients who underwent SIRA or CORA required conversion to open adrenalectomy.
There were no deaths, and blood loss remained <10 mL for all cases. Operative length was similar between SIRA and CORA (105 vs. 92 min, P = 0.26). In multivariable linear regression analysis, nodule size > 5 cm (effect = 1.75, P < 0.001) and pheochromocytoma (effect = 1.30, P = 0.05) were significant predictors of increased operative length for SIRA. BMI and laterality (right vs. left) did not affect operative length. Rates of postoperative temporary abdominal wall relaxation, length of stay and postoperative pain medication use were similar between the two groups. Conclusions: SIRA is safe and feasible to implement as a refinement of CORA and may be applied to technically challenging cases involving obese patients or large nodules. The use of three ports allows for two-handed dissection, which may shorten the learning curve for many surgeons. abstract_id: PUBMED:22584723 Retroperitoneoscopic adrenalectomy in pheochromocytoma. Since the first laparoscopic adrenalectomy, the technique has evolved and it has become the standard of care for many adrenal diseases, including pheochromocytoma. Two laparoscopic accesses to the adrenal have been developed: transperitoneal and retroperitoneal. Retroperitoneoscopic adrenalectomy may be recommended for the treatment of pheochromocytoma with the same peri-operative outcomes as the transperitoneal approach because it allows direct access to the adrenal glands without increasing the operative risks. Although technically more demanding than the transperitoneal approach, retroperitoneoscopy can shorten the mean operative time, which is critical for cases with pheochromocytoma where minimizing the potential for intra-operative hemodynamic changes is essential. Blood loss and the convalescence time can also be shortened by this approach. There is no absolute indication for either the transperitoneal or retroperitoneal approach; however, the latter procedure may be the best option for patients who have undergone previous abdominal surgery and obese patients. Also, retroperitoneoscopic adrenalectomy is a good alternative for treating cases with inherited pheochromocytomas, such as multiple endocrine neoplasia type 2A, in which the pheochromocytoma is highly prevalent and frequently occurs bilaterally. abstract_id: PUBMED:26486758 Laparoendoscopic Single-Site Retroperitoneoscopic Adrenalectomy Versus Conventional Retroperitoneoscopic Adrenalectomy in Obese Patients. Objective: To compare the operative outcomes of obese patients undergoing laparoendoscopic single-site (LESS) retroperitoneoscopic adrenalectomy or standard laparoscopic (LAP) retroperitoneoscopic adrenalectomy. Methods: Between September 2011 and April 2015, 51 obese patients underwent LESS retroperitoneoscopic adrenalectomy and their operative outcomes were compared with 65 obese patients who underwent standard retroperitoneoscopic adrenalectomy by the same surgeon. In the LESS group, a single-port access was positioned in the lumbar incision. The standard retroperitoneal adrenalectomy technique was performed with a combination of conventional and curved LAP instruments. The following parameters were assessed: patient demographics, surgical details, perioperative complications, postoperative outcome data, and short-term outcomes.
Results: The LESS group was comparable with the LAP group in terms of total operative time (70.4 ± 21.3 vs 65.5 ± 24.8 minutes, p = 0.26), hospital length of stay (5.7 ± 1.2 vs 6.1 ± 1.5 days, p = 0.12), and incidence of complications (4/51 vs 5/65, p = 0.98) for patients with similar baseline demographics. The LESS group had significantly shorter surgical incisions (2.8 ± 0.3 vs 5.3 ± 0.6 cm, p < 0.0001), lower in-hospital analgesic requirement (7.6 ± 3.3 mg morphine equivalent vs 10.5 ± 6.2 mg, p = 0.003), and significantly superior scar satisfaction score (9.3 ± 0.7 vs 7.6 ± 1.4, p < 0.0001) than the LAP group. Although estimated blood loss was greater in the LESS group (28.1 ± 10.6 vs 16.9 ± 7.2 mL in the LAP group, p < 0.0001), <50 mL overall blood loss was not clinically significant. During a mean follow-up of 20.2 months, no recurrences or deaths were documented in either group. Conclusions: In properly selected patients, LESS retroperitoneoscopic adrenalectomy for obese individuals is technically feasible and safe in experienced hands, offering perioperative outcomes comparable with those of the conventional multiport approach, but with a superior cosmetic outcome. abstract_id: PUBMED:25398550 Retroperitoneoscopic Adrenalectomy in Obese Patients: Is It Suitable? Background: The aim of this study was to evaluate the suitability of posterior retroperitoneoscopic adrenalectomy for patients with morbid obesity. Methods: This retrospective clinical cohort study included patients who underwent elective posterior retroperitoneoscopic adrenalectomy. Intraoperative (operative time, blood loss, intraoperative complications, conversion rate) and postoperative (hospital stay, morbidity, mortality) parameters were compared between the two study subgroups: obese (body mass index [BMI] ≥30 kg/m2) and non-obese patients (BMI <30 kg/m2). Results: A total of 137 consecutive patients were enrolled in the study (41 obese and 96 non-obese patients). Mean tumour size was 5.2 ± 2.2 cm; aldosteronism and incidentaloma were the most frequent indications. Operative time was significantly longer (87 vs. 65 min; P = 0.0006) in obese patients. There was no difference in operative blood loss. One conversion was necessary. Overall, the 30-day postoperative morbidity was significantly higher in obese patients (26.8% vs. 11.5%; P = 0.025). The hospital stay was significantly longer in obese patients (3.1 vs. 2.5 days; P = 0.003). Conclusions: Dorsal retroperitoneoscopic adrenalectomy can be safely performed in morbidly obese patients, maintaining the advantages of minimally invasive surgery. Avoiding an abdominal approach is beneficial for patients. There is a more favourable postoperative course, shorter hospital stay, better cosmetic outcome and quicker recovery with dorsal retroperitoneoscopic adrenalectomy. The prolonged operative time, longer hospital stay and higher risk of postoperative complications that occurred in obese patients were acceptable in light of the generally higher risk associated with surgeries performed in obese patients.
Among the 35 patients with a body mass index (BMI) of ≥30 kg/m2, operative outcomes were compared between the two operative procedures. The demographic data and perioperative outcomes of the patients were statistically analysed. The single-plane and three-plane groups were comparable in terms of estimated blood loss, time to oral intake, hospital stay, and incidence of complications among patients with similar baseline demographics. The single-plane group had a significantly shorter operation time (46.9 ± 5.8 vs 54.8 ± 7.0 mins, P < 0.0001) and lower analgesia requirement (56/251 vs 33/98, p = 0.03). For obese patients with a BMI of ≥30 kg/m2, single-plane adrenalectomy was also associated with a significantly shorter operation time (48.1 ± 6.2 vs 64.1 ± 5.1 mins, p < 0.0001). Single-plane retroperitoneoscopic adrenalectomy is feasible, safe, and effective in the treatment of adrenal masses <5 cm in size and provides a shorter operation time and better pain control than anatomical retroperitoneal adrenalectomy, especially in obese patients. abstract_id: PUBMED:17256092 Retroperitoneoscopic adrenalectomy for phaeochromocytoma in a morbidly obese patient: a case report. A 38-year-old morbidly obese lady (BMI 46 kg/m2) was diagnosed with a right adrenal phaeochromocytoma. She successfully underwent retroperitoneoscopic adrenalectomy using a 3-port technique incurring a blood loss of 110 ml. Postoperatively her recovery was rapid and uneventful with stabilisation of blood pressure and relief of symptoms, thereby emphasising the safety and efficacy of retroperitoneoscopic adrenalectomy in morbidly obese patients. abstract_id: PUBMED:19198961 Retroperitoneoscopic adrenalectomy for pheochromocytoma in a morbidly obese. Laparoscopy has become a standard approach for adrenal surgery. Additionally, retroperitoneoscopic techniques have also been shown to be effective for adrenal removal. It is known that laparoscopic techniques in the morbidly obese may be challenging; therefore, alternative approaches such as the retroperitoneoscopic may prove to be of value in the morbidly obese. abstract_id: PUBMED:30482519 Operation duration and adrenal gland size, but not BMI, are correlated with complication rate for posterior retroperitoneoscopic adrenalectomy for benign diseases. Background: We sought to determine whether obesity is correlated with complications after posterior retroperitoneoscopic adrenalectomy for benign diseases and to develop surrogate markers of abdominal fat in preoperative computed tomography. Methods: We conducted a retrospective chart review of all patients who had undergone posterior retroperitoneoscopic adrenalectomy and preoperative computed tomography between January 1, 2008 and December 31, 2015. The cross-sectional components of fat assessed by computed tomography included total fat area, subcutaneous fat area, retroperitoneal fat area, and peritoneal fat area. The patients were grouped into 2 categories according to the absence or presence of a postoperative complication (the no-complications group and the complications group, respectively). Results: Of 116 study patients, 20 patients (17%) had a postoperative complication. Operations of greater duration and smaller adrenal gland size were significantly correlated with complications both in univariate and multivariate analyses.
Body mass index, total fat area, subcutaneous fat area, retroperitoneal fat area, peritoneal fat area, and distance of skin to muscle (in axial and sagittal images) correlated with complications in univariate analysis but not in multivariate analysis. In the no-complications group, the removed adrenal gland was larger than that removed in the complications group (6 vs 4 cm, respectively, P = .001), whereas the complications group had a greater operative duration (139 vs 104 min, respectively, P = .001) and a greater duration of hospital stay (3 days vs 1 day, respectively, P = .001). Conclusion: In this study, operations of greater duration and smaller adrenal gland size were better predictors of complications after posterior retroperitoneoscopic adrenalectomy for benign disease than measures of obesity, including body mass index. abstract_id: PUBMED:21463807 Posterior retroperitoneoscopic adrenalectomy: a contemporary American experience. Background: Posterior retroperitoneoscopic adrenalectomy (PRA) is an excellent surgical option for adrenal gland removal. The operation requires that surgeons learn a new approach with few similarities to anterior adrenalectomy. This study reports a large series of PRAs incorporated into surgical care using a team-model approach. Study Design: The prospective endocrine surgery database was queried to identify patients who underwent PRA during a recent 4-year period. Demographic, operative, and pathologic data were recorded. The authors' initial experience with PRA (group 1) is compared with their contemporary experience (group 2). Results: One hundred and eighteen PRAs were successfully performed (100 unilateral and 9 bilateral). Indications were pheochromocytoma in 21 patients, Cushing's syndrome or Cushing's disease in 22 patients, aldosteronoma in 22 patients, virilizing tumor in 3 patients, isolated metastasis in 28 patients, and nonfunctional mass in 19 patients. Forty-eight percent of patients had undergone earlier abdominal surgery. Forty-eight percent were obese (body mass index [calculated as kg/m2] ≥30). No significant differences were found in operative time (110 versus 118 minutes, p = 0.30), tumor size (2.59 versus 2.85 cm, p = 0.44), or body mass index (29.63 versus 29.93, p = 0.82) between groups 1 and 2. Both complications (15.9% versus 7.7%, p = 0.29) and conversion rates (9.5% versus 1.9%, p = 0.19) were lower in group 2, although this was not statistically significant. Conclusions: PRA is a technique safely performed for a variety of adrenal lesions, is ideal for patients who have undergone earlier abdominal surgery, and is feasible in obese patients. Proficiency can be obtained during a short period, leading to low conversion and complication rates. This technique should be incorporated into the armamentarium of the endocrine surgeon. A team approach to learning new surgical techniques is effective. Answer: Retroperitoneoscopic adrenalectomy is considered suitable for obese patients. Studies have shown that this minimally invasive surgical approach can be safely performed in morbidly obese patients, maintaining the advantages of minimally invasive surgery while avoiding the challenges associated with an abdominal approach (PUBMED:25398550). The procedure allows for direct access to the adrenal glands without increasing operative risks and may be particularly beneficial for patients who have undergone previous abdominal surgery (PUBMED:22584723).
Obese patients undergoing posterior retroperitoneoscopic adrenalectomy (PRA) do not experience increased perioperative morbidity or mortality compared to non-obese patients. A study found that the overall complication rate was low, and obesity was not significantly associated with a higher complication rate, conversion to open surgery, bleeding/transfusion, surgical site infection, incisional hernia, ICU or high dependency unit admission, or any-cause mortality (PUBMED:37497309). Additionally, the operative time was longer for obese patients, but there was no significant difference in the length of hospital stay (PUBMED:37497309). Other studies have reported similar findings, with one study showing that single-incision retroperitoneoscopic adrenalectomy (SIRA) is safe and feasible to implement as a refinement of conventional retroperitoneoscopic adrenalectomy (CORA) and may be applied to technically challenging cases involving obese patients or large nodules (PUBMED:27826779). Laparoendoscopic single-site (LESS) retroperitoneoscopic adrenalectomy has also been found to be technically feasible and safe for obese individuals, offering perioperative outcomes comparable with those of the conventional multiport approach, but with superior cosmetic outcomes (PUBMED:26486758). Furthermore, single-plane retroperitoneoscopic adrenalectomy has been shown to be effective in the treatment of adrenal masses in obese patients, providing a shorter operation time and better pain control than anatomical retroperitoneal adrenalectomy (PUBMED:29507374). Even in cases of morbid obesity, retroperitoneoscopic adrenalectomy has been successfully performed with rapid and uneventful recovery (PUBMED:17256092, PUBMED:19198961). In conclusion, retroperitoneoscopic adrenalectomy is suitable for obese patients and can be performed with low complication and conversion rates, making it a viable option for this patient population (PUBMED:21463807).
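The "no significant increase" claims in this answer rest on comparing complication proportions between small cohorts, typically with a chi-square or Fisher's exact test. A minimal SciPy sketch with a hypothetical 2x2 table scaled to the 75-patient PRA series (the abstracts report percentages and p-values, not the underlying counts):

    from scipy.stats import fisher_exact

    # Hypothetical table: rows = (obese, non-obese), columns = (complication, no complication)
    table = [[2, 29],   # obese cohort, ~6.5% complication rate
             [5, 39]]   # non-obese cohort
    odds_ratio, p_value = fisher_exact(table)
    print(f"OR={odds_ratio:.2f}, p={p_value:.3f}")  # a non-significant p, in line with the reported p=0.471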
Instruction: Is my kid out of size? Abstracts: abstract_id: PUBMED:31513158 Oral manifestations of KID syndrome: rare clinical case. The paper presents a rare clinical case of an infant with KID (Keratitis, Ichthyosis, Deafness) syndrome (about 100 patients reported so far) admitted for histological verification of oral mucosa lesions. The pathogenesis of the disease involves inadequate repair and a defect of innate immunity in the skin and mucosa, leading to a higher incidence of bacterial and fungal infections; accordingly, the 4-year-old girl had received treatment for vegetating candidiasis of the oral mucosa for several weeks with no clinical improvement. Initial examination showed that the oral lesions resulted from sharp edges of severely affected carious teeth. Histological study of multifocal biopsy revealed pyogenic granulomas and no signs of SCC. Tooth extraction and symptomatic treatment led to significant clinical improvement, and some remaining mucosal changes may be attributed to syndrome manifestations. abstract_id: PUBMED:28943606 Kid-Short Marfan Score (Kid-SMS) Is a Useful Diagnostic Tool for Stratifying the Pre-Test Probability of Marfan Syndrome in Childhood. Due to age-dependent organ manifestation, diagnosis of Marfan syndrome (MFS) is a challenge, especially in childhood. It is important to identify children at risk of MFS as soon as possible to direct them to appropriate treatment but also to avoid stigmatization due to false diagnosis. We published the Kid-Short Marfan Score (Kid-SMS) in 2012 to stratify the pre-test probability of MFS in childhood. Hence we now evaluate the predictive performance of Kid-SMS in a new cohort of children. We prospectively investigated 106 patients who were suspected of having MFS. At baseline, children were examined according to Kid-SMS. At baseline and follow-up visit, diagnosis of MFS was established or rejected using standard current diagnostic criteria according to the revised Ghent Criteria (Ghent-2). At baseline 43 patients were identified with a risk of MFS according to Kid-SMS whereas 21 patients had Ghent-2 diagnosis of MFS. Sensitivity was 100%, specificity 77%, negative predictive value 100% and Likelihood ratio of Kid-SMS 4.3. During the follow-up period, three other patients with a stratified risk for MFS were diagnosed according to Ghent-2. We confirm very good predictive performance of Kid-SMS with excellent sensitivity and negative predictive value but restricted specificity. Kid-SMS avoids stigmatization due to diagnosis of MFS and thus restriction of quality of life. Especially outpatient pediatricians and pediatric cardiologists can use it for primary assessment. abstract_id: PUBMED:27927571 A Comparative Analysis Among the SRS M&M, NIS, and KID Databases for the Adolescent Idiopathic Scoliosis. Study Design: Retrospective cohort analysis. Objectives: A growing number of publications have utilized the Scoliosis Research Society (SRS) Morbidity and Mortality (M&M) database, but none have compared it to other large databases. The objective of this study was to compare SRS complications with those in administrative databases. Summary Of Background Data: The Nationwide Inpatient Sample (NIS) and Kid's Inpatient Database (KID) captured a greater number of overall complications while the SRS M&M data provided a greater incidence of spine-related complications following adolescent idiopathic scoliosis (AIS) surgery. Chi-square was used to assess statistical significance, with p < .05 considered significant.
Methods: The SRS 2004-2007 (9,904 patients), NIS 2004-2007 (20,441 patients) and KID 2003-2006 (10,184 patients) databases were analyzed for AIS patients who underwent fusion. Comparable variables were queried in all three databases, including patient demographics, surgical variables, and complications. Results: Patients undergoing AIS surgery in the SRS database were slightly older (SRS 14.4 years vs. NIS 13.8 years, p < .0001; KID 13.9 years, p < .0001) and less likely to be male (SRS 18.5% vs. NIS 26.3%, p < .0001; KID 24.8%, p < .0001). Revision surgery (SRS 3.3% vs. NIS 2.4%, p < .0001; KID 0.9%, p < .0001) and osteotomy (SRS 8% vs. NIS 2.3%, p < .0001; KID 2.4%, p < .0001) were more commonly reported in the SRS database. The SRS database reported fewer overall complications (SRS 3.9% vs. NIS 7.3%, p < .0001; KID 6.6%, p < .0001). However, when respiratory complications (SRS 0.5% vs. NIS 3.7%, p < .0001; KID 4.4%, p < .0001) were excluded, medical complication rates were similar across databases. In contrast, SRS reported higher spine-specific complication rates. Mortality rates were similar between SRS versus NIS (p = .280) and SRS versus KID (p = .08) databases. Conclusions: There are similarities and differences between the three databases. These discrepancies are likely due to the varying data-gathering methods each organization uses to collect their morbidity data. Level Of Evidence: Level IV. abstract_id: PUBMED:12727876 Cdc2-mediated phosphorylation of Kid controls its distribution to spindle and chromosomes. The chromokinesin Kid is important in chromosome alignment at the metaphase plate. Here, we report that Kid function is regulated by phosphorylation. We identify Ser427 and Thr463 as M phase-specific phosphorylation sites and Cdc2-cyclin B as a Thr463 kinase. Kid with a Thr463 to alanine mutation fails to be localized on chromosomes and is only detected along spindles, although it retains the ability to bind DNA or chromosomes. Localization of rigor-type mutant Kid, which shows nucleotide-independent microtubule association, is also confined to the spindle, implying that strong association of Kid with the spindle can sequester it from chromosomes. T463A substitution in DNA-binding domain-truncated Kid consistently enhances its spindle localization. At physiological ionic strength, unphosphorylated Kid shows ATP-independent microtubule association, whereas Thr463-phosphorylated Kid shows ATP dependency. Moreover, the stalk region of unphosphorylated Kid interacts with microtubules and the interaction is weakened when Thr463 is phosphorylated. Our data suggest that phosphorylation on Thr463 of Kid downregulates its affinity for microtubules to ensure reversible association with spindles, allowing Kid to bind chromosomes and exhibit its function. abstract_id: PUBMED:26890962 Kid depletion in mouse oocytes associated with multinucleated blastomere formation and inferior embryo development. This study investigated the effects of knockdown (KD) of Kid on the maturation, developmental competence, and multinucleation of mouse germinal vesicle (GV) oocytes after parthenogenetic activation. Data revealed that Kid messenger RNA (mRNA) was expressed in GV- and MII-stage oocytes and 1- and 2-cell embryos. Additionally, Kid mRNA expression in the Kid KD group decreased by nearly 46% compared to the control small interfering RNA (siRNA) groups. The rate of multinucleated embryos in the Kid KD group (52.4%) was significantly higher (P < 0.05) than the control siRNA group (4.7%).
Finally, the developmental rates were significantly lower in the Kid siRNA group at the > 4-cell stage (28.6% vs. 53.5%) and the blastocyst stage (2.4% vs. 23.3%) compared to the control siRNA groups. Suppression of Kid using siRNA caused multinucleation in early embryos at high frequency; it may increase the proportion of embryos arrested at the 2- to 4-cell stage and reduce developmental competence to the blastocyst stage. abstract_id: PUBMED:22470741 Imaging Keratitis-Ichthyosis-Deafness (KID) syndrome with FDG-PET (F18-fluorodeoxyglucose-Positron Emission Tomography). Keratitis-Ichthyosis-Deafness (KID) syndrome is a rare dysplasia characterized by vascularizing keratitis, congenital sensorineural hearing-loss, and progressive erythrokeratoderma. To our knowledge, this is the first case of KID syndrome imaged with FDG-PET in the literature. This paper is intended to help familiarize readers with the FDG abnormalities related to this rare entity. abstract_id: PUBMED:32367610 Clinical, etiopathogenic, and therapeutic aspects of KID syndrome. Keratitis-ichthyosis-deafness (KID syndrome) is a clinically and genetically heterogeneous syndromic ichthyosis requiring early and long-term multidisciplinary monitoring of affected individuals. A review of the clinical, etiopathogenic, and therapeutic aspects of this rare congenital ectodermal disorder is presented. abstract_id: PUBMED:3312867 KID syndrome (keratitis-ichthyosis-deafness). The KID syndrome is characterized by congenital ichthyosis, vascular keratitis and neurosensorial deafness. We report a 17-year-old female patient, the first case of KID syndrome in the Spanish literature. Red, dry, scaling skin was present at birth with sparse hair. At the age of six, malar erythema was prominent, with perioral ragades and onset of progressive neurosensory deafness. At the age of ten, vascularizing keratitis developed. At 12, treatment with etretinate failed to improve the ichthyosis. We review the clinical, pathological and analytical features of KID syndrome and discuss its relationship to other ichthyoses. abstract_id: PUBMED:19694809 A mutagenic analysis of the RNase mechanism of the bacterial Kid toxin by mass spectrometry. Kid, the toxin of the parD (kis, kid) maintenance system of plasmid R1, is an endoribonuclease that preferentially cleaves RNA at the 5' of A in the core sequence 5'-UA(A/C)-3'. A model of the Kid toxin interacting with the uncleavable mimetic 5'-AdUACA-3' is available. To evaluate this model, a significant collection of mutants in some of the key residues proposed to be involved in RNA binding (T46, A55, T69 and R85) or RNA cleavage (R73, D75 and H17) were analysed by mass spectrometry in RNA binding and cleavage assays. A pair of substrates, 5'-AUACA-3', and its uncleavable mimetic 5'-AdUACA-3', used to establish the model and structure of the Kid-RNA complex, were used in both the RNA cleavage and binding assays. A second RNA substrate, 5'-UUACU-3' efficiently cleaved by Kid both in vivo and in vitro, was also used in the cleavage assays. Compared with the wild-type protein, mutations in the residues of the catalytic site abolished RNA cleavage without substantially altering RNA binding. Mutations in residues proposed to be involved in RNA binding show reduced binding efficiency and a corresponding decrease in RNA cleavage efficiency. The cleavage profiles of the different mutants were similar with the two substrates used, but RNA cleavage required much lower protein concentrations when the 5'-UUACU-3' substrate was used.
Protein synthesis and growth assays are consistent with there being a correlation between the RNase activity of Kid and its inhibitory potential. These results give important support to the available models of Kid RNase and the Kid-RNA complex. abstract_id: PUBMED:17206710 Interactions between the toxin Kid of the bacterial parD system and the antitoxins Kis and MazE. The proteins Kid and Kis are the toxin and antitoxin, respectively, encoded by the parD operon of Escherichia coli plasmid R1. Kis prevents the inhibition of E. coli cell growth caused by the RNA cleavage activity of Kid. Overproduction of MazE, the chromosome-encoded homologue of Kis, has been demonstrated to neutralize Kid toxicity to a certain extent in the absence of native Kis. Here, we show that a high structural similarity exists between these antitoxins, using NMR spectroscopy. We report about the interactions between Kid and Kis that are responsible for neutralization of Kid toxicity and enhance autoregulation of parD transcription. Native macromolecular mass spectrometry data demonstrate that Kid and Kis form multiple complexes. At Kis:Kid ratios equal to or exceeding 1:1, as found in vivo in a plasmid-containing cell, various complexes are present, ranging from Kid(2)-Kis(2) tetramer up to Kis(2)-Kid(2)-Kis(2)-Kid(2)-Kis(2) decamer. When Kid is in excess of Kis, corresponding to an in vivo situation immediately after loss of the plasmid, the Kid(2)-Kis(2)-Kid(2) heterohexamer is the most abundant species. NMR chemical shift and intensity perturbations in the (1)H (15)N HSQC spectra of Kid and Kis, observed when titrating the partner protein, show that the interaction sites of Kid and Kis resemble those within the previously reported MazF(2)-MazE(2)-MazF(2) complex. Furthermore, we demonstrate that Kid(2)-MazE(2) tetramers can be formed via weak interactions involving a limited part of the Kis-binding residues of Kid. The functional roles of the identified Kid-Kis and Kid-MazE interaction sites and complexes in toxin neutralization and repression of transcription are discussed. Answer: The provided abstracts do not contain any information regarding the physical size or growth patterns of children, so it is not possible to determine if your "kid" is out of size based on these abstracts. The term "kid" in the context of these abstracts refers to different meanings, such as an abbreviation for "Keratitis-Ichthyosis-Deafness" syndrome (KID syndrome) or a component of a bacterial toxin-antitoxin system (Kid toxin). If you have concerns about your child's growth or size, it is recommended to consult a pediatrician or healthcare provider for an evaluation.
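One quantitative point in these abstracts is worth making explicit: the likelihood ratio of 4.3 reported for the Kid-SMS score follows directly from its sensitivity and specificity. A one-line Python check of that arithmetic:

    sensitivity, specificity = 1.00, 0.77
    positive_lr = sensitivity / (1 - specificity)  # positive likelihood ratio = sens / (1 - spec)
    print(round(positive_lr, 1))  # 4.3, matching the reported value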
Instruction: Revising the primigravid partogram: does it make any difference? Abstracts: abstract_id: PUBMED:27579090 The impact of positive acquired thrombophilia serology on ultrasound, obstetric outcome and the placenta in a low-risk primigravid population. Our aim was to determine the prevalence and sequelae of positive acquired thrombophilia serology in the asymptomatic low-risk primigravid population. We undertook a prospective blinded study of 1011 primigravid patients, screening for lupus anticoagulant, anticardiolipin antibody, anti-β2 glycoprotein-1 and antinuclear antibody at booking and at 36 weeks gestation. Serial ultrasounds of the fetus with uterine and umbilical Dopplers and placental evaluation were performed at 24 and 36 weeks gestation. Antenatal course, labour and delivery outcome and placental histology were reviewed. The incidence of positive acquired thrombophilia serology was 27.4%. Overall, there was no difference in rates of fetal loss or maternal disease between women with positive acquired thrombophilia serology and the control population. Routine testing for acquired thrombophilic traits is therefore not warranted. abstract_id: PUBMED:29593991 Uterine rupture in a primigravid patient with an unscarred bicornuate uterus at term. Background: Uterine rupture of an unscarred primigravid uterus is an exceedingly rare event. Cases of spontaneous rupture of an unscarred bicornuate uterus have been reported, but typically occur in the first or second trimester. Case: A 28-year-old primigravida at 37 weeks gestation with a known bicornuate uterus and no prior surgery underwent an emergent cesarean section after presenting with severe abdominal pain and signs of fetal compromise. She was found to have a uterine rupture with the fetus free in the abdomen, accompanied by a large hemoperitoneum. Both mother and baby did well postoperatively. Conclusion: Bicornuate uterus may be an independent risk factor for uterine rupture, which can occur in primigravid patients and at any gestation. abstract_id: PUBMED:34352624 Spontaneous unscarred uterine rupture in a primigravid patient at 11 weeks of gestation managed surgically: A rare case report. Introduction: Spontaneous rupture of the primigravid uterus is a rare but catastrophic obstetrical emergency. Rupture usually occurs late in pregnancy or during labour, mainly in multiparous women; spontaneous unscarred uterine rupture in a primigravid patient, particularly in the first trimester, is very rare. Only 12 papers in the PubMed database describe spontaneous unscarred uterine rupture in primigravid women in early pregnancy. Case Presentation: A 23-year-old primigravid woman in the 11th week of pregnancy, with no significant medical or surgical history, presented with a four-hour history of sudden-onset generalised abdominal pain and multiple episodes of vomiting. On physical examination, the patient was pale and in haemorrhagic shock, with diffuse abdominal tenderness. Ultrasonography showed an extrauterine gestational sac with massive haemoperitoneum. After resuscitation, the patient underwent emergency laparotomy. Clinical Discussion: Spontaneous rupture of the unscarred gravid uterus is a rare, catastrophic condition that can be missed, leading to maternal and foetal mortality. There are no pathognomonic features, so it must be differentiated from other causes of the acute abdomen.
Conclusion: Spontaneous rupture of the unscarred gravid uterus should be differentiated from other acute abdominal emergencies. Patients may present to the emergency department in shock, so emergency physicians should be aware of its symptoms and presentations. abstract_id: PUBMED:37214232 Saudi Arabian Primigravid Women's Experiences of Mindfulness to Reduce Childbirth Stress: A Qualitative Study. Introduction: Pregnancy is a significant transitional life experience. It can also be one of the most stressful experiences in life, and some women go on to develop postpartum depression. Adopting mindfulness techniques during childbirth may allow women to experience less labor pain and require fewer medical interventions, which improves the mother's health. Objective: To investigate the effectiveness of mindfulness in reducing childbirth stress in primigravid women in Saudi Arabia. Methods: The researcher recruited primigravid women from an antenatal clinic in a government hospital in the Eastern Province of Saudi Arabia. The study adopted a qualitative interpretive descriptive design, using individual interviews to collect the data before conducting thematic content analysis using NVivo 10.1 software. Results: Five key themes emerged from the data: (a) stress reduction, (b) recognition of thoughts and feelings, (c) life satisfaction, (d) insufficient knowledge leads to challenges, and (e) empowerment of a spiritual aspect. Conclusions: Mindfulness is an effective technique that supports a mother's physical and psychological well-being. abstract_id: PUBMED:18777032 Revising the primigravid partogram: does it make any difference? Objective: To investigate the distribution of the rate of cervical dilatation of primigravid labour and its deviation from the standard partogram. Design: Retrospective observational study. Setting: South African district hospital serving an indigent rural population. Population: Expectant management of labour of healthy nulliparous women in active labour, at term, with a singleton pregnancy and cephalic presentation. Methods: Audit of 1,595 partograms of spontaneous primigravid labour. The standard partogram's alert line was replaced by a customised alert line based on the lowest 10th centile of the rate of cervical dilatation of the study population, and by an alert line representing the 10% slowest labours. The action line was placed parallel and 4 h to the right of the alert lines. Main Outcome Measure: The distribution of labours to the left of and on the respective alert lines, and to the right of the action lines. Results: The customised alert line, based on the lowest 10th centile, corresponded to a rate of cervical dilatation of 0.860 cm/h. Three quarters of labours evolved to the left of the revised alert line, as opposed to 56.1% to the left of the standard alert line [odds ratio (OR) 0.49, 95% confidence interval (CI) 0.42-0.56]. The mean rate of cervical dilatation of the 10% slowest labours was 0.535 ± 0.076 cm/h (P < 0.0001), and 95.7% of labours evolved to the left of the corresponding alert line (OR 8.40, 95% CI 6.44-11.0). Conclusion: The alert line representing the mean of the 10% slowest labours leads to an unrealistic distribution of labour on the partogram. A revised alert line based on the lowest 10th centile of the local population is more representative and should perhaps be used in the management of labour. abstract_id: PUBMED:36414574 Differences in immune phenotype in decidual tissue from multigravid women compared to primigravid women.
Problem: Women with a previous uncomplicated pregnancy have lower risks of immune-associated pregnancy disorders in a subsequent pregnancy. This could indicate a different maternal immune response in multigravid women compared to primigravid women. In a previous study, we showed persistently higher memory T cell proportions with higher CD69 expression after uncomplicated pregnancies. To our knowledge, no studies have reported on immune cells in general, and immune memory cells and macrophages specifically, in multigravid compared with primigravid women. Method Of Study: T cells and macrophages were isolated from term decidua parietalis and decidua basalis tissue from healthy primigravid women (n = 12) and multigravid women (n = 12). Using flow cytometry, different T cell populations, including memory T cells, and macrophages were analyzed. To analyze whether a different immune phenotype is already present in early pregnancy, decidual tissue from uncomplicated ongoing pregnancies between 9 and 12 weeks of gestation from multigravid and primigravid women was investigated using qRT-PCR. Results: Nearly all T cell subsets analyzed in the decidua parietalis had significantly higher CD69+ proportions in multigravid women compared to primigravid women. A higher proportion of decidual (CD50-) M2-like macrophages was found in the decidua parietalis in multigravid women compared to primigravid women. In first-trimester decidual tissue, higher FOXP3 mRNA expression was found in multigravid women compared to primigravid women. Conclusions: This study shows that decidual tissue from multigravid women has a more activated and immunoregulatory phenotype compared to decidual tissue from primigravid women in early pregnancy and at term, which could suggest a more balanced immune adaptation towards pregnancy after earlier uncomplicated pregnancies. abstract_id: PUBMED:36496931 Evaluation of a Novel Infrared Thermography Projection to Assess Udder Health in Primigravid Dairy Heifers. Heifer mastitis in early lactation impacts negatively on animal welfare, milk production and longevity. A major challenge for the prevention and control of mastitis in dairy heifers is to establish when intramammary infection occurs, because pre-partum secretum sampling is risky. We evaluated a ventrodorsal projection to capture thermal images of the entire udder of primigravid heifers and compared results against the caudocranial projection, which is used in lactating cattle. Based on the analysis of 119 heifers and images taken at 2 months and 2 weeks pre-partum, a very strong positive correlation (r = 0.91 and r = 0.96, respectively) was shown between caudocranial and ventrodorsal projections of hind quarters. Quarter maximum gradient temperatures were consistently greater on ventrodorsal projection than on caudocranial projection, and less variable than minimum gradient temperatures. The collection of ventrodorsal images is a simple one-step method involving the imaging of the entire udder in a manner safe for both the cattle and handlers. Together, these results demonstrate that a single projection can be used to scan the entire udder of primigravid dairy heifers in commercial farm conditions, with the potential to implement this as a routine method for the early detection of intramammary infection based on udder surface temperature. abstract_id: PUBMED:37583679 Before, During, and After: An Examination of the Pre-Task Planning and Post-Task Revising Practices of Adults with Low Literacy and their Effect on the Quality of Written Compositions.
This study sought to examine the pre-task planning and post-task revising practices of adults with low literacy and how those practices affect overall writing quality. Seventy-six adults with low literacy composed essays in response to a prompt and were given time for pre-task planning and post-task revising. Results showed that participants with higher planning skills were able to utilize planning and revising processes to increase the detail included in their compositions while potentially streamlining the language contained therein. Writers with lower planning skills, in contrast, focused on lower-level aspects of writing (e.g., word choice) and were unable to make use of the more demanding processes of planning and revising to improve their compositions. These findings suggest a hierarchical development of writing skills and the possibility of sacrifices in certain areas of the writing process as writers focus on other areas. Practical implications for practitioners are discussed. abstract_id: PUBMED:32983306 A case of spontaneous uterine artery pseudoaneurysm in a primigravid woman at 16 weeks gestation. We report a case of a uterine artery pseudoaneurysm in a 29-year-old primigravid woman at 16 weeks gestation. The woman presented to the emergency room with lower left quadrant pain and vaginal bleeding. Ultrasound revealed a left adnexal mass consistent with a pseudoaneurysm. Percutaneous thrombin injection was chosen to avoid contrast and radiation risks to the fetus. Ultrasound demonstrated thrombosis of the pseudoaneurysm with no evidence of fetal distress. On postprocedure day 2, the patient presented again with similar complaints of lower quadrant pain and vaginal bleeding. The pseudoaneurysm was found to have recanalized, and a decision was made to proceed with computed tomography angiography and coil embolization. The procedure was successful, with angiography revealing an incidental branch of the pseudoaneurysm that was subsequently embolized. abstract_id: PUBMED:29208037 Uterine rupture in a primigravid patient, an uncommon but severe obstetrical event: a case report. Background: A spontaneous rupture of the unscarred uterus in a primigravid patient is extremely rare and is associated with high perinatal and maternal morbidity and mortality. Case Presentation: A 34-year-old white primigravid woman, at 31+3 weeks of gestation, presented with pre-eclampsia and developed a sudden acute abdomen. An emergency laparotomy was performed and a uterine rupture was found as the cause of the event. A stillborn girl was delivered. Conclusion: A rupture of the pregnant uterus should always be considered in a pregnant woman presenting with abdominal pain, even in a primigravid patient. Answer: Revising the primigravid partogram does make a difference according to the study presented in the abstract from PUBMED:18777032. The study aimed to investigate the distribution of the rate of cervical dilatation in primigravid labor and its deviation from the standard partogram. The researchers conducted a retrospective observational study on 1,595 partograms of spontaneous primigravid labor. They replaced the standard partogram's alert line with a customized alert line based on the lowest 10th percentile of the rate of cervical dilatation of the study population, and with an alert line representing the 10% slowest labors. The action line was placed parallel and 4 hours to the right of the alert lines.
The results showed that the customized alert line, based on the lowest 10th percentile, corresponded to a rate of cervical dilatation of 0.860 cm/h. Three-quarters of labors evolved to the left of the revised alert line, as opposed to 56.1% to the left of the standard alert line. The mean rate of cervical dilatation of the 10% slowest labors was 0.535 ± 0.076 cm/h, and 95.7% of labors evolved to the left of the corresponding alert line. The study concluded that the alert line representing the mean of the 10% slowest labors leads to an unrealistic distribution of labor on the partogram, whereas a revised alert line based on the lowest 10th percentile of the local population is more representative and should perhaps be used in the management of labor. This suggests that revising the primigravid partogram to better reflect the labor progression of the local population can lead to more accurate and realistic monitoring of labor, which could potentially improve labor management and outcomes.
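To make the centile arithmetic concrete, here is a minimal sketch of how such a customised alert line could be derived. This is an illustration only: the dilatation rates are simulated and the function name is invented, so it is not the study's actual data or method.

```python
import numpy as np

def alert_line_rate(rates_cm_per_h, centile=10):
    """Rate of cervical dilatation defining a centile-based alert line.

    Drawing the line at the lower `centile` of observed rates means
    roughly (100 - centile)% of labours progress faster and therefore
    plot to the left of the line on the partogram.
    """
    return np.percentile(rates_cm_per_h, centile)

# Simulated rates for illustration only (not the study's 1,595 labours).
rng = np.random.default_rng(0)
rates = rng.lognormal(mean=0.2, sigma=0.5, size=1595)

line = alert_line_rate(rates, centile=10)
share_left = np.mean(rates >= line)
print(f"alert line: {line:.3f} cm/h; {share_left:.1%} of labours to its left")
```

By construction, about 90% of the derivation cohort falls to the left of a 10th-centile line; the study's lower figure of three-quarters presumably reflects partogram conventions (for example, handling of the latent phase) that this sketch does not model.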
Instruction: Social roles among recruits in Switzerland: Do social roles relate to alcohol use and does role change have an impact? Abstracts: abstract_id: PUBMED:26722991 Social roles among recruits in Switzerland: Do social roles relate to alcohol use and does role change have an impact? Introduction: Young men are likely to report high levels of alcohol use. Previous studies found a reduction in alcohol use when adopting adult social roles. This study examines the frequency of parenthood, partnership and stable employment among young men in Switzerland. It tests whether the alcohol use of those with adult social roles differs from those without, and whether changes in social roles relate to changes in alcohol use. Methods: Data were available from 5025 men (mean age 20.0 years) at baseline (August 2010 to November 2011) and 15 months later. Changes in social roles and their impact on alcohol use were examined in multiple regression models. Results: At baseline, 15.8% had a job and 4.9% a stable partner, and 1.5% had a child or were expecting one (30.5%, 6.1% and 2.2% at follow-up). Having a partner was associated with a significant decrease in annual frequency of drinking and weekly risky single-occasion drinking (RSOD) at follow-up. A higher number of social roles at follow-up was associated with a significant decrease in weekly RSOD. Apart from a significant decrease in weekly RSOD among those remaining in a stable partnership, role development was not found to have significant effects on alcohol use between baseline and follow-up. Conclusions: In Switzerland, an early engagement in permanent social roles is uncommon. Nevertheless, holding single or multiple social roles was commonly associated with reduced alcohol use, although not always significantly so. In western European countries, the engagement in adult social roles is postponed to later ages. abstract_id: PUBMED:36898052 Alcohol use among adult recent Latino/a immigrants: the role of stress, forced migration, and adherence to traditional gender roles. Background: Previous research conducted among Latino/a immigrants has shown the underlying effect that exposure to stress after immigrating to the U.S. (i.e. health access, racial/ethnic discrimination, and language barriers) has on alcohol use patterns. However, given the demographic shifts in recent immigrants, understanding the influence of stress before (i.e. poverty, healthcare, and educational opportunities) and after immigration on their alcohol use (i.e. alcohol consumption and drinking behaviors in the past 12 months) in the context of migration and traditional gender roles is warranted. Objectives: To examine the (a) cumulative effects of pre- to post-immigration stress, (b) respective moderating effects of traditional gender roles, and (c) forced migration on alcohol use for men and women. Methods: Hierarchical multiple regression and moderation analyses were conducted on a cross-sectional sample of 529 adult (18-34 years) recent Latino/a immigrants (n = 268 men, n = 261 women) in South Florida. Results: Gender had a statistically significant effect on alcohol use, F(1, 527) = 18.68, p < .001, with men (M = 4.36, SE = .22) reporting higher alcohol use than women (M = 3.08, SE = .20). Post-immigration stress (β = .12, p = .03), but not pre-migration stress, had a statistically significant association with alcohol use.
There was no interaction effect of traditional gender roles or forced migration on the associations between pre- and post-immigration stress and alcohol use. Conclusion: Results suggest that post-immigration stress may be a reasonable intervention target to mitigate alcohol use among recent Latino/a immigrants, particularly among men. abstract_id: PUBMED:24857795 Adult social roles and alcohol use among American Indians. American Indians are disproportionately burdened by alcohol-related problems. Yet, research exploring predictors of alcohol use among American Indians has been limited by cross-sectional designs and reservation-based samples. Guided by a life course developmental perspective, the current study used a subsample of American Indians (n=927) from the National Longitudinal Study of Adolescent Health (Add Health) to explore alcohol use (current drinking, usual number of drinks, and binge drinking) among this population. We examined whether adult social roles (i.e., cohabitation, marriage, parenthood, college enrollment, and full-time work) were linked to the rise and fall of alcohol use. Multi-level models demonstrated that adult social roles were linked to alcohol use at the within- and between-person levels. Becoming a parent was linked to a lower likelihood of being a current drinker, fewer alcoholic drinks, and less frequent binge drinking. Transitioning to full-time work was associated with a higher likelihood of being a current drinker and more frequent binge drinking. Results point to the importance of exploring within-group trajectories of alcohol use and highlight the protective and risky nature of adult social roles among American Indians. abstract_id: PUBMED:37114480 How Social Roles Affect Sleep Health during Midlife. This study draws on role theory and the life course perspective to examine how sleep health (duration, quality, and latency) is shaped by social role accumulation (number of roles), role repertoires (role combinations), and role contexts among middle-aged adults. We also examine how the relationships between social roles and sleep health are gendered. We use data from the National Longitudinal Survey of Youth 1979 Cohort (N = 7,628). Results show that role accumulation is associated with less sleep and decreased insomnia symptoms, and that role repertoires also impact sleep (e.g., parenthood leads to diminished sleep quantity and quality). There is also evidence that contextual factors related to employment history, marital quality, and parenthood affect sleep health. Furthermore, results reveal that several of the relationships between social roles and sleep are gendered. Taken together, findings demonstrate the utility of examining links between multiple dimensions of social roles and sleep health. abstract_id: PUBMED:23398916 Development and validation of the work-family-school role conflicts and role-related social support scales among registered nurses with multiple roles. Purpose: The purpose of this study was to develop work-family-school role conflicts and role-related social support scales, and to validate the psychometrics of those scales among registered nurses with multiple roles. Methods: The concepts, generation of items, and the scale domains of the work-family-school role conflicts and role-related social support scales were constructed based on a review of the literature. The validity and reliability of the scales were examined by administering them to 201 registered nurses who were recruited from 8 university hospitals in South Korea.
The content validity was examined by nursing experts using a content validity index. Exploratory factor analysis and confirmatory factor analysis were used to establish the construct validity. The correlation with depression was examined to assess concurrent validity. Finally, internal consistency was assessed using Cronbach's alpha coefficients. Results: The work-family-school role conflicts scale comprised ten items with three factors: work-school-to-family conflict (three items), family-school-to-work conflict (three items), and work-family-to-school conflict (four items). The role-related social support scale comprised nine items with three factors: support from family (three items), support from work (three items), and support from school (three items). Cronbach's alphas were 0.83 and 0.76 for the work-family-school role conflicts and role-related social support scales, respectively. Both instruments exhibited acceptable construct and concurrent validity. Conclusion: The validity and reliability of the developed scales indicate their potential usefulness for the assessment of work-family-school role conflict and role-related social support among registered nurses with multiple roles in Korea. abstract_id: PUBMED:34904936 Domains matter: A prospective investigation of traditional feminine gender roles and alcohol use among Latinas. Scholars suggest traditional feminine gender roles (TFGRs) influence alcohol use among U.S. Latinas, but relevant literature is limited. This two-wave study examined how multi-dimensional internal (i.e., beliefs) and external (i.e., practices) TFGR processes related to drinking among college-bound Latina emerging adults across time. TFGRs characterized by virtue predicted less alcohol engagement, while some TFGR dimensions (e.g., subordinate) predicted more. TFGR practices more strongly predicted cross-sectional alcohol outcomes than TFGR beliefs, although some TFGR beliefs predicted later drinking. These findings highlight the utility of assessing multiple TFGR dimensions and domains to better understand the link between TFGRs and drinking among Latinas. abstract_id: PUBMED:32101096 Traditional feminine gender roles, alcohol use, and protective behavioral strategies among Latina college students. Objective: Studies conducted with Latina/o populations suggest there are gendered norms surrounding drinking behavior, but research examining alcohol use among Latina college students in the context of traditional feminine gender role (TFGR) norms is scarce. Participants and Methods: 405 Latina undergraduates completed a web-based study of sociocultural factors and alcohol use. Path analyses assessed the relation between TFGR, alcohol use, and alcohol-related protective behavioral strategies (PBS). Results: A dimension of TFGR characterized by purity was related to (a) reduced alcohol use in general, and (b) PBS designed to modify the manner of drinking among drinkers. In turn, this dimension of PBS related to a lower likelihood of experiencing drunkenness. Conclusions: These findings support the notion that enacting some dimensions of TFGR among Latina college students may limit general alcohol use (directly) and high-risk drinking (indirectly through use of PBS), while enacting other dimensions may place Latinas at risk of alcohol misuse. abstract_id: PUBMED:20643507 A step beyond: the relevance of depressed mood and mastery in the interplay between the number of social roles and alcohol use.
Objectives: The present study examines whether depressed mood and external control mediate or moderate the relationship between the number of social roles and alcohol use. Participants: The analysis was based on a nationally representative sample of 25- to 45-year-old male and female drinkers in Switzerland. Method: The influence of depressed mood and external control on the relationship between the number of social roles (parenthood, partnership, employment) and alcohol use was examined in linear structural equation models (mediation) and in multiple regressions (moderation) stratified by gender. All analyses were adjusted for age and education level. Results: Holding more roles was associated with lower alcohol use, lower external control and lower depressed mood. The study did not find evidence of depressed mood or external control mediating the social roles-alcohol relationship. A moderation effect was identified among women only, whereby a protective effect of having more roles could not be found among those who scored high on external control. In general, a stronger link was observed between roles and alcohol use, while depressed mood and external control acted independently on drinking. With the exception of women with high external control, the study found no link between a higher number of social roles and greater alcohol use. Conclusion: Our results indicate that drinking behaviours are more strongly linked to external control and depressed mood than they are to the number of social roles. The study also demonstrates that in any effective alcohol prevention policy, societal actions that enable individuals to combine more social roles play a central role. abstract_id: PUBMED:27014168 The Influence of Different Social Roles Activation on Women's Financial and Consumer Choices. In recent times, changes in the social roles of women and men have been observed. Traditionally, the dominant social role of the woman was that of housewife, while that of the man was focused on work and family maintenance. Nowadays, the social role of women is evolving in the direction of taking up a profession, while increasingly men are taking care of the household. The main goal of the studies presented here was to verify how the activation of different social roles (traditional or non-traditional) may be reflected in women's financial and consumer choices. Three experimental studies were conducted. In the first study (n = 195 females), three different social roles of women - professional (non-traditional), housewife (traditional) and neutral (control) - were activated. The results showed that activating women's non-traditional social role increased their tendency to invest and decreased their propensity to save money compared to the activation of the traditional or neutral social role. The goal of the second study (n = 196 females) was to check whether, despite there being no differences in the level of consumption in the first study, any differences could be observed in the preference for the type of products chosen for consumption. The results showed that activating the non-traditional social role raised the propensity to spend funds on products and services for individual use and reduced the willingness to buy goods for collective use (shared with other members of the household). The purpose of the third study (n = 90 females) was to examine how different images of women appearing in advertisements may affect women's judgments of the advertised product.
Women who watched the ad with a woman in the non-traditional social role rated the product's quality, look, color and price higher than participants exposed to the advertisement presenting the woman in a traditional or neutral social role. The present studies give some evidence that the new, non-traditional social role of women that is often presented in the media may affect women's everyday financial choices and judgments of products. abstract_id: PUBMED:36274914 Social/dialogical roles of social robots in supporting children's learning of language and literacy - A review and analysis of innovative roles. One of the many purposes for which social robots are designed is education, and there have been many attempts to systematize their potential in this field. What these attempts have in common is the recognition that learning can be supported in a variety of ways because a learner can be engaged in different activities that foster learning. Up to now, three roles have been proposed when designing these activities for robots: as a teacher or tutor, a learning peer, or a novice. Current research proposes that deciding in favor of one role over another depends on the content or preferred pedagogical form. However, the design of activities changes not only the content of learning, but also the nature of the human-robot social relationship. This is particularly important in language acquisition, which has been recognized as a social endeavor. The following review aims to specify the differences in human-robot social relationships when children learn language through interacting with a social robot. After proposing categories for comparing these different relationships, we review established and more specific, innovative roles that a robot can play in language-learning scenarios. This follows Mead's (1946) theoretical approach proposing that social roles are performed in interactive acts. These acts are crucial for learning, because not only can they shape the social environment of learning but also engage the learner to different degrees. We specify the degree of engagement by referring to Chi's (2009) progression of learning activities, which range from active through constructive to interactive, with the latter fostering deeper learning. Taken together, this approach enables us to compare and evaluate different human-robot social relationships that arise when applying a robot in a particular social role. Answer: The study on social roles among recruits in Switzerland found that engagement in adult social roles was associated with a decrease in alcohol use. Specifically, having a partner was linked to a significant reduction in the annual frequency of drinking and weekly risky single-occasion drinking (RSOD) at follow-up. Additionally, a higher number of social roles at follow-up was associated with a significant decrease in weekly RSOD. However, apart from a significant decrease in weekly RSOD among those remaining in a stable partnership, role development did not have significant effects on alcohol use between baseline and follow-up. The study concluded that although early engagement in permanent social roles is uncommon in Switzerland, holding single or multiple social roles was commonly associated with reduced alcohol use, although not always significantly so (PUBMED:26722991).
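As a methodological aside, the Swiss study's core analysis is a multiple regression of follow-up drinking on social-role variables. The sketch below shows the general shape of such a model in Python; the data are simulated and every variable name is invented, so it illustrates the technique only, not the study's actual model or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5025  # same order of magnitude as the Swiss cohort

df = pd.DataFrame({
    "rsod_baseline": rng.poisson(1.0, n),     # weekly RSOD episodes
    "stable_partner": rng.integers(0, 2, n),  # 0/1 role indicators
    "employed": rng.integers(0, 2, n),
    "parent": rng.integers(0, 2, n),
})
df["n_roles"] = df[["stable_partner", "employed", "parent"]].sum(axis=1)

# Hypothetical outcome: more roles -> slightly lower follow-up RSOD.
df["rsod_followup"] = np.clip(
    df["rsod_baseline"] - 0.3 * df["n_roles"] + rng.normal(0, 0.7, n),
    0, None)

model = smf.ols("rsod_followup ~ rsod_baseline + n_roles", data=df).fit()
print(model.params)  # a negative n_roles coefficient mirrors the reported direction
```

Controlling for the baseline outcome, as here, is one common way to model change between two waves; the published analysis may have parameterised role change differently.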
Instruction: Can preterm labour be predicted in low risk pregnancies? Abstracts: abstract_id: PUBMED:18455138 Perceived risk of preterm and low-birthweight birth in the Central Pennsylvania Women's Health Study. Objective: Engaging women in preconception prevention may be challenging if at-risk women do not perceive increased risk. This study examined predictors of perceiving increased risk for preterm/low-birthweight birth. Study Design: Using the Central Pennsylvania Women's Health Study, a population-based sample of reproductive-age women, we analyzed whether sociodemographics, health and pregnancy history, health behaviors, attitudes, or health care utilization predicted risk perception of preterm/low-birthweight birth. Results: Of the 645 women analyzed, 157 (24%) estimated their risk of preterm/low-birthweight birth to be very or somewhat likely. Higher perceived risk was associated with being underweight, previous preterm/low-birthweight birth, having a mother with previous preterm/low-birthweight birth, lower perceived severity of preterm/low birthweight, and smoking. Conclusions: Several factors known to predict preterm/low birthweight did influence risk perception in this study, whereas others did not. Further research on how these factors have an impact on participation in preconception care programs is warranted. abstract_id: PUBMED:8688104 Recurrence risk of low birthweight deliveries among women with a prior very low birthweight delivery. To estimate the risk of repeat low birthweight deliveries among women whose first child was very low birthweight (less than 1500 g), a retrospective cohort of women who had their first and second children in Washington state between 1984 and 1991 was studied. After adjustment for potential confounding factors, a woman whose first infant was very low birthweight experienced an 11.5-fold increased risk of delivering a low birthweight (less than 2500 g) second infant (relative risk 11.5, 95% confidence interval 5.4 to 24.4). Women with a very low birthweight first infant also had a significantly increased risk of a repeat very low birthweight infant (p < 0.0001). Women with a previous very low birthweight delivery are at increased risk of repeat low and very low birthweight infants. This high-risk group may benefit from education regarding recurrence risk and modification of factors associated with low birthweight, as well as good prenatal care. abstract_id: PUBMED:28348722 Prevalence and Risk Factors of Low Birth Weight in the Southeast of Iran. Background: The purpose of this study was to determine the prevalence and related factors of low birth weight (LBW) in the Southeast of Iran. Methods: This cross-sectional study was carried out in Kerman province. Data were collected from the Iranian Maternal and Neonatal Network at public and private hospitals. All live births from March 2014 to March 2015 were considered the source population. Risk factors including maternal age, gravida, parity, abortion, pregnancy risk factors, maternal nationality, maternal education, maternity insurance, place of living, consanguinity, neonate sex, preterm labor, place of birth, delivery manager, and delivery type were compared between LBW and normal birth weight groups. Results: The prevalence of LBW was 9.4% in the present study.
Preterm labor (odds ratio [OR]: 22.06; P < 0.001), neonate female sex (OR: 1.41; P < 0.001), low parity (OR: 0.85; P < 0.001), pregnancy age <18 years (OR: 1.26; P = 0.012), pregnancy age >35 years (OR: 1.21; P = 0.001), delivery by cesarean section (OR: 1.17; P = 0.002), pregnancy risk factors (OR: 1.67; P < 0.001), maternal illiteracy (OR: 1.91; P < 0.001), living in a rural area (OR: 1.19; P < 0.001), consanguinity (OR: 1.08; P = 0.025), and delivery by obstetrician (OR: 1.12; P = 0.029) were identified as significant factors associated with LBW in this study. Conclusions: Preventing preterm labor, consanguineous marriage, pregnancy at ages <18 and >35 years, and maternal medical risk factors are some critical interventions to reduce the burden of LBW. Increasing access to high-quality health-care services in rural and deprived areas is another effective strategy for the prevention of LBW. abstract_id: PUBMED:17464585 Risk of preterm delivery in relation to maternal low birth weight. Objective: We examined the relationship between maternal low birth weight and preterm delivery risk. Methods: Information concerning maternal birth weight was collected during in-person interviews. Logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (95% CI). Preterm delivery cases were studied in aggregate and in subgroups (spontaneous preterm labor, preterm premature rupture of membranes, medically induced preterm delivery, moderate preterm delivery [gestational age at delivery 34-36 weeks], and early preterm delivery [gestational age at delivery <34 weeks]). Results: After adjusting for confounders, women weighing <2,500 g at birth had a 1.54-fold increased risk of preterm delivery versus women weighing ≥2,500 g (95% CI 0.97-2.44). Maternal low birth weight was associated with a 2-fold increased risk of spontaneous preterm delivery (95% CI 1.03-3.89), but weakly associated with preterm premature rupture of membranes (OR=1.44; 95% CI 0.67-3.09) and medically induced preterm delivery (OR=1.10; 95% CI 0.43-2.82). Maternal low birth weight was more strongly associated with early preterm delivery (OR=1.94) than with moderate preterm delivery (OR=1.46). Women weighing <2,500 g at birth who became obese (pre-pregnancy body mass index ≥30 kg/m2) before pregnancy had a 3.65-fold increased risk of preterm delivery (95% CI 1.33-10.02) versus women weighing ≥2,500 g at birth who were not obese prior to pregnancy (<30 kg/m2). Conclusions: Results confirm earlier findings linking maternal low birth weight with future risk of preterm delivery. abstract_id: PUBMED:29539791 Periodontal Infection as a Possible Risk Factor for Preterm Low Birth Weight. Periodontal diseases are Gram-negative anaerobic infections that can occur in women of childbearing age (18 to 34 years). In the present investigation we sought to determine whether the prevalence of maternal periodontal infection could be associated with preterm low birth weight (PLBW), controlling for known risk factors and potential covariates. A case-control study of 124 pregnant or postpartum mothers was performed. PLBW cases were defined as mothers with a birth of less than 2,500 g and one or more of the following: gestational age <37 weeks, preterm labor (PTL), or premature rupture of membranes (PROM). Controls were mothers of normal birth weight (NBW) infants.
Assessments included a broad range of known obstetric risk factors, such as tobacco use, drug use, alcohol consumption, level of prenatal care, parity, genitourinary infections, and nutrition. Each subject received a periodontal examination to determine clinical attachment level. PLBW cases and primiparous PLBW cases (n = 93) had significantly worse periodontal disease than the respective NBW controls. Multivariate logistic regression models, controlling for other risk factors and covariates, demonstrated that periodontal disease is a statistically significant risk factor for PLBW, with adjusted odds ratios of 7.9 and 7.5 for all PLBW cases and primiparous PLBW cases, respectively. These data indicate that periodontal diseases represent a previously unrecognized and clinically significant risk factor for preterm low birth weight as a consequence of either PTL or preterm PROM. J Periodontol 1996;67:1103-1113. abstract_id: PUBMED:12151153 Psychiatric and substance use disorders as risk factors for low birth weight and preterm delivery. Objective: We examined the associations between psychiatric and substance use diagnoses and low birth weight (LBW), very low birth weight (VLBW), and preterm delivery among all women delivering in California hospitals during 1995. Methods: This population-based retrospective cohort analysis used linked hospital discharge and birth certificate data for 521,490 deliveries. Logistic regression analyses were conducted to assess the associations between maternal psychiatric and substance use hospital discharge diagnoses and LBW, VLBW, and preterm delivery while controlling for maternal demographic and medical characteristics. Results: Women with psychiatric diagnoses had a significantly higher risk of LBW (adjusted odds ratio [OR] 2.0; 95% confidence interval [CI] 1.7, 2.3), VLBW (OR 2.9; 95% CI 2.1, 3.9), and preterm delivery (OR 1.6; 95% CI 1.4, 1.9) compared with women without those diagnoses. Substance use diagnoses were also associated with higher risk of LBW (OR 3.7; 95% CI 3.4, 4.0), VLBW (OR 2.8; 95% CI 2.3, 3.3), and preterm delivery (OR 2.4; 95% CI 2.3, 2.6). Conclusion: Maternal psychiatric and substance use diagnoses were independently associated with low birth weight and preterm delivery in the population of women delivering in California in 1995. Identification of pregnant women with current psychiatric disorders and increased monitoring for preterm and low birth weight delivery among this population may be indicated. abstract_id: PUBMED:11328215 Maternal hematological status and risk of low birth weight and preterm delivery in Nepal. Background: Our aim was to investigate associations between maternal characteristics, with emphasis on hematological status, and risk of low birth weight and preterm delivery among pregnant Nepali women. Methods: In a case-control study, 1400 pregnant women attending Patan Hospital, Kathmandu, Nepal, for antenatal care and delivery in the period 1994 to 1996 were included. Women with twin pregnancies (n=15) and those delivering infants with congenital malformations (n=13) were excluded from the study. Maternal characteristics including hematocrit values were recorded at the first antenatal visit. Main outcome measures included birth weight, gestation at delivery, Apgar score, mode of delivery, and perinatal death. Linear and logistic regression models were used to analyze the data.
Results: Severe anemia (hematocrit ≤24%) was associated with a significantly increased risk of low birth weight (<2500 g) and preterm delivery (<37 weeks gestation). High hematocrit values (≥40%) did not increase the risk of low birth weight or preterm delivery. The risk of low Apgar scores or operative delivery was significantly increased in women with severe anemia in the first trimester. Teenagers, women of short stature or low body mass index, and those belonging to the Brahmin ethnic group had significantly higher risks of delivering low birth weight infants. Conclusions: Severe maternal anemia, particularly in the first trimester, was significantly associated with adverse pregnancy outcome. Low maternal age, height or body mass index also increased the risk of low birth weight. Improvements in the nutritional status of young Nepali women could contribute to improved health among their infants. abstract_id: PUBMED:11820369 Higher risk of preterm birth and low birth weight in women with periodontal disease. Pregnant women with periodontal disease (PD) may be at increased risk for having preterm low-birth-weight (PLBW) children. We investigated whether the maintenance of the mothers' periodontal health after 28 weeks' gestation reduces the risk of PLBW. Of the 639 women studied, 406 had gingivitis and received treatment before 28 weeks' gestation, and 233 had PD and were treated after delivery. Data about previous and current pregnancies and known risk factors were obtained from patients' medical records. Primary outcomes were delivery before 37 weeks' gestation or an infant with birth weight below 2500 g. The incidence of PLBW was 2.5% in periodontally healthy women and 8.6% in women with PD (p = 0.0004; relative risk = 3.5, 95% CI 1.7 to 7.3). Risk factors significantly associated with PLBW were previous PLBW, PD, fewer than 6 pre-natal visits, and low maternal weight gain. PD was associated with both preterm birth and low birth weight, independent of other risk factors. abstract_id: PUBMED:19603312 Pregnancy outcome after age 40 and risk of low birth weight. A historical cohort study was conducted to examine pregnancy outcome in women aged 40 or older and determine the effect of age on low birth weight. The pregnancy outcomes of 789 mothers aged 40 years or older were analysed and compared with those of 20,852 mothers aged 20-34 years. There were differences in socioeconomic status and obstetric characteristics between the two groups. The older group had more medical and obstetric complications (diabetes mellitus, chronic hypertension, malpresentation, pregnancy-induced hypertension, placenta praevia, multiple pregnancies, pre-term labour, fetal distress, retained placenta, postpartum haemorrhage and endometritis), more adverse fetal outcomes (low birth weight, low Apgar scores and congenital anomalies) and a higher caesarean section rate. The multivariate logistic regression analysis confirmed that maternal age was an independent risk factor for low birth weight. These data will be useful in counselling patients about their expectations and the risk of adverse outcomes and in providing the appropriate necessary care. abstract_id: PUBMED:11276762 Mechanisms of risk in preterm low-birthweight infants. The periodontal diseases share many common risk factors with preterm low birth weight; examples are age, socioeconomic status and smoking.
Studies to date have only shown an association between the two conditions, and this does not indicate a causal relationship. However, since the inflammatory mediators that occur in the periodontal diseases also play an important part in the initiation of labor, there are plausible biological mechanisms that could link the two conditions. The challenge for the future is to characterize the nature of the factors that predispose a mother to give birth prematurely to infants of less than 2500 g and to assign relative probabilities to each. Studies are taking place in many parts of the world to determine the probability of a preterm low-birth-weight outcome, the interdependence of the factors that contribute to a birth event, and possible causal relationships between these factors. Further information about the details of the effects of maternal infection will come from intervention studies, animal studies and more detailed examination of the mechanisms. Answer: The prediction of preterm labor in low-risk pregnancies is a complex issue, as it involves various factors and potential indicators. While the abstracts provided do not directly address the predictability of preterm labor in low-risk pregnancies, they do identify several risk factors associated with preterm and low birth weight deliveries, which could potentially be used to predict preterm labor. For instance, a history of previous preterm or low birth weight delivery significantly increases the risk of subsequent preterm deliveries (PUBMED:8688104). Maternal characteristics such as being underweight, having a mother with a history of preterm/low birth weight birth, and smoking are also associated with higher perceived risk (PUBMED:18455138). Additionally, maternal low birth weight has been linked to an increased risk of preterm delivery in the future (PUBMED:17464585). Other factors identified include maternal age (both younger than 18 and older than 35), preterm labor, female sex of the neonate, cesarean section, pregnancy risk factors, maternal illiteracy, living in rural areas, and consanguinity (PUBMED:28348722). Severe maternal anemia, particularly in the first trimester, is associated with adverse pregnancy outcomes, including low birth weight and preterm delivery (PUBMED:11328215). Maternal psychiatric and substance use disorders are also independently associated with low birth weight and preterm delivery (PUBMED:12151153). Periodontal disease has been identified as a possible risk factor for preterm low birth weight (PUBMED:29539791; PUBMED:11820369), and maternal age over 40 is an independent risk factor for low birth weight (PUBMED:19603312). While these factors can help identify pregnancies at higher risk for preterm labor, predicting preterm labor in low-risk pregnancies remains challenging. The abstracts suggest that while certain risk factors can be identified, there is no clear method for predicting preterm labor in all cases, especially in those considered low risk. Further research and development of predictive models incorporating a wide range of factors are needed to improve the predictability of preterm labor in low-risk pregnancies.
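Since most of the abstracts above summarise risk as an odds ratio (OR) with a 95% confidence interval, a worked example of the underlying arithmetic may help. The sketch below uses Woolf's logit method on an invented 2×2 table; the counts do not come from any study cited here, and the adjusted ORs in the abstracts come from logistic regression rather than a single table, though the interval is interpreted the same way.

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table (Woolf's logit method).

    a: exposed with outcome      b: exposed without outcome
    c: unexposed with outcome    d: unexposed without outcome
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Invented counts for illustration only.
or_, lo, hi = odds_ratio_with_ci(30, 70, 15, 85)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A confidence interval that excludes 1.0, as in most of the ORs quoted above, is what marks an association as statistically significant at the 5% level.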
Instruction: Contact tracing/pre-employment screening for pulmonary tuberculosis: should a positive Mantoux test necessitate routine chest X-ray? Abstracts: abstract_id: PUBMED:20710107 Contact tracing/pre-employment screening for pulmonary tuberculosis: should a positive Mantoux test necessitate routine chest X-ray? Background: Pulmonary tuberculosis (PTB) remains a scourge in most developing countries. The Mantoux skin test and, more commonly, chest radiography are some of the methods of diagnosing the disease, especially with regard to contact tracing and pre-employment screening. Regular use of radiography has both safety and cost implications. This study aimed to establish whether any justification exists for requesting chest radiographs in asymptomatic subjects with a positive Mantoux skin test reaction. Methods: 174 adults comprising PTB contacts and newly employed/admitted university staff/students were recruited into the study. They were 89 males (51.1%) and 85 females (48.9%). All subjects had a Mantoux test (using purified protein derivative, PPD). Patients who had a positive Mantoux reaction were subsequently subjected to posteroanterior chest radiographic examination. Mantoux test and chest radiographic findings were then correlated with each other. Results: 102 subjects (59.2%) had positive Mantoux reactions, while 27 subjects (31.1%) had abnormal chest radiographs. There was no significant correlation between Mantoux readings and chest radiographic findings (P=0.106). Conclusion: It is concluded that other ancillary tests, like sputum examination and/or, where available, interferon and ESAT tests, should be carried out before a chest radiograph is requested in subjects with a positive Mantoux reaction. abstract_id: PUBMED:21419057 Contact tracing and prophylaxis of tuberculosis. Contact tracing is an integral part of tuberculosis management and is conducted by the medical officer in collaboration with the local chest clinics. The principle of contact tracing is to begin with household contacts and expand to more peripheral contacts depending on the number of persons with latent or active tuberculosis identified. Persons identified with latent TB infection should be offered isoniazid preventive treatment for six months. abstract_id: PUBMED:9634687 Tuberculosis: contact tracing and preventive chemotherapy. Tuberculosis is transmitted from patients with lung disease to susceptible individuals by inhalation of infectious particles expelled during cough. Therefore, the search for infected subjects among relatives of patients with infectious tuberculosis is the best method of preventing later development of the disease in populations where the prevalence of tuberculosis is low. Contact tracing relies on the tuberculin skin test, followed by chest X-ray if indicated and bacteriological examination of sputum. In recently infected subjects, particularly the young, preventive chemotherapy has been shown to decrease the risk of later disease. abstract_id: PUBMED:22768705 Contact tracing of a 15-year-old girl with smear-negative pulmonary tuberculosis in Tehran. Early detection of tuberculosis (TB) cases and supervised treatment are vital for control of the disease. To assess TB transmission from a sputum smear-negative pulmonary TB case, the contacts of a 15-year-old Iranian girl diagnosed with smear-negative TB in 2010 were traced. In all, 52 classmates and close friends and 15 school staff were screened by tuberculin skin test and chest X-ray.
Those with a positive skin test or abnormal chest X-ray were further evaluated by chest spiral computed tomography (CT) scan and triple gastric washing. All classmates and close friends were Iranian and female. Of the 52 girls, 17 (32%) had latent infection (either a positive skin test or an abnormal chest X-ray) and 3 (5.7%) had active TB (abnormal CT or positive culture in addition to a positive skin test or abnormal X-ray). None of the staff had abnormal findings. Contact tracing should be considered for contacts of all children with symptomatic pulmonary TB, even those who are smear-negative. abstract_id: PUBMED:26634258 Evaluation of Chest X-ray and Thoracic Computed Tomography in Patients with Suspected Tuberculosis. Objective: To investigate if there is any correlation between positive findings detected by posterior-anterior (PA) chest radiograph and thoracic computerized tomography (CT) in cases with suspected lung tuberculosis (TB) due to positive tuberculin skin test (TST) results. Methods: This is a retrospective evaluation of the medical files of patients who visited the Department of Pediatrics, Istanbul University, Istanbul Faculty of Medicine from 2006 through 2011 as outpatients and had positive TST (>15 mm) results. Results: A total of 326 patients were included in the study; 45.7% (n = 149) were girls, and the mean age was 9.0 ± 4.1 y (range: 1-17 y). In total, 14.4% (n = 47) had TB findings, all of which were in the form of hilar lymphadenopathy. Among the 47 cases with TB findings on PA chest X-ray, 45 (95.7%) also had findings on thoracic CT. Only 2 (4.3%) patients had normal thoracic CT results although their PA chest X-ray results were positive. Conclusions: Evaluation for pulmonary TB in children with positive isolated TSTs should be made primarily with PA chest X-ray. A routine thoracic CT scan is not necessary for asymptomatic patients with only hilar lymphadenopathy findings on PA chest radiographs. abstract_id: PUBMED:17127909 Contact tracing in a case of tuberculosis in a health care worker. Introduction: Tuberculosis is a public health problem, but nosocomial transmission from a health care worker to patients has not been well documented. Observations: A Senegalese surgeon aged 32 was admitted to hospital on account of deterioration in his general health. He was found to have sputum-positive tuberculosis and received standard treatment. We report the strategy employed for tracing the contacts of this health care worker. Of a total of 185 members of staff (permanent and temporary) and 91 patients who had been in contact with the index case, 180 (97.3%) and 71 (78%) respectively were screened. Of the 251 subjects screened, 5 staff (2.8%) and 11 patients (15.4%) showed evidence of latent tuberculous infection, and 6 were treated. In total, 97.3% of exposed staff were screened at the beginning of the study and 63% were reviewed at 3 months, as opposed to 78% and 53% for the patients. Conclusion: This study shows poor compliance with the visit 3 months after exposure and the need to standardise procedures in order to improve the screening and follow-up of contacts.
Objectives: The purpose of the study was to (1) determine the prevalence of significant abnormalities in routine chest x-rays used to screen for pulmonary tuberculosis in intravenous drug users and (2) evaluate the ability of the purified protein derivative skin test to identify persons with such abnormalities. Methods: We conducted a cross-sectional screening study on 1314 persons admitted to an opiate detoxification unit in an urban jail. Purified protein derivative tuberculin reactivity and the prevalence of abnormalities consistent with tuberculosis on screening chest x-rays were evaluated. The chest x-ray was obtained independent of the skin test. Results: The chest x-rays of 73 of the inmates (5.6%) showed abnormalities consistent with tuberculosis. Tuberculin skin testing missed 17 of the 26 chest x-rays (65%) with significant infiltrates. Conclusions: Purified protein derivative screening is insensitive to chest x-ray abnormalities that require additional diagnostic evaluation for tuberculosis. Routine chest studies should be performed on all intravenous drug users admitted to congregate housing settings. abstract_id: PUBMED:9081510 Guidelines for environmental contact tracing in tuberculosis. German Central Committee for control of tuberculosis. These guidelines were initiated by, and worked out in cooperation with, physicians of the public health service. Numerous publications and relevant legal regulations, particularly the Federal Law against Epidemics, were taken into consideration. The responsibilities of the public health service in the fight against tuberculosis include case finding, detection of the routes of infection and prevention of transmission. Within the scope of these duties, contacts of index cases are traced after notifications are received. Suspected cases of tuberculosis are examined according to their risk of infection and disease; in doing so, detection of the source of infection has priority. In general, tuberculin tests, X-rays and bacteriology are available to public health services to carry out these duties. The general procedures are explained and recommendations for intervals of investigation are given. Nevertheless, there is leeway for decisions in isolated cases, but divergent steps should be documented with care. Specific features of the examination of contacts in community institutions and in companies are discussed, as are organisational and legal aspects. Requirements for coercive measures are described. Finally, a short note is given on the prospects of molecular techniques for public health measures. abstract_id: PUBMED:22688652 Tuberculin Skin Test, chest radiography and contact screening in children ≤5 y: relevance in the Revised National Tuberculosis Control Programme (RNTCP). Objectives: To ascertain the status of the Tuberculin Skin Test (TST) in children aged 0-5 y who are household contacts of adults with sputum smear-positive pulmonary tuberculosis, and to study the relationship of TST and Chest X-Ray (CXR) and their relevance in contact screening under the RNTCP setup. Methods: This prospective hospital-based clinical study was conducted at a tertiary-level referral medical college in Northern India. Study subjects were 200 children ≤5 y of age who were household contacts of adults with sputum smear-positive pulmonary tuberculosis. All these adult index cases were registered and taking regular treatment at the RNTCP centre attached to a medical college.
Demographic data collection, detailed clinical history, clinical examination, growth assessment, TST and chest radiography (CXR) were performed for every child. Results: TST was positive in 39% of children, and of these, 68% of child contacts had CXR findings suggestive of tuberculosis. This correlation was found to be highly significant (p < 0.001), especially in children less than 1 y of age. A high percentage of clinically asymptomatic children also had positive TST and CXR consistent with the definition of TB disease. Conclusions: Contact screening must be prioritized in RNTCP. Unlike at present, TST and CXR should be actively employed in RNTCP for contact screening. abstract_id: PUBMED:35325093 Impact of the COVID-19 pandemic on contact tracing of patients with pulmonary tuberculosis. Background: The COVID-19 pandemic could have negative effects on tuberculosis (TB) control. The objective was to assess the impact of the pandemic on contact tracing, TB and latent tuberculosis infection (LTBI) in contacts of patients with pulmonary TB in Catalonia (Spain). Methods: Contact tracing was carried out in cases of pulmonary TB detected during 14 months in the pre-pandemic period (1 January 2019 to 28 February 2020) and 14 months in the pandemic period (1 March 2020 to 30 April 2021). Contacts received the tuberculin skin test and/or interferon gamma release assay, and it was determined whether they had TB or LTBI. Variables associated with TB or LTBI in contacts (study period and sociodemographic variables) were analyzed using adjusted odds ratios (aOR) and 95% confidence intervals (95% CI). Results: The pre-pandemic and pandemic periods showed, respectively: 503 and 255 reported pulmonary TB cases (a reduction of 49.3%); and 4676 and 1687 contacts studied (a reduction of 63.9%). In these periods, the proportion of TB cases among the contacts was 1.9% (84/4307) and 2.2% (30/1381) (P = 0.608), and the proportion of LTBI was 25.3% (1090/4307) and 29.2% (403/1381) (P < 0.001). The pandemic period was associated with a higher LTBI proportion (aOR = 1.3; 95% CI 1.1-1.5), taking into account the effect on LTBI of the other variables studied, such as sex, age, household contact and migrant status. Conclusions: COVID-19 is affecting TB control through less exhaustive TB and LTBI case detection. An increase in LTBI was observed during the pandemic period. Efforts should be made to improve detection of TB and LTBI among contacts of TB cases. Answer: The necessity of routine chest X-rays for individuals with a positive Mantoux test during contact tracing or pre-employment screening for pulmonary tuberculosis (PTB) is debated. A study with 174 adults who were either contacts of PTB patients or newly employed/admitted university staff/students found that 59.2% had positive Mantoux reactions, but only 31.1% had abnormal chest radiographs. There was no significant correlation between Mantoux readings and chest radiographic findings (P=0.106), suggesting that other tests, such as sputum examination or interferon-gamma and ESAT-based assays, should be considered before requesting a chest radiograph in subjects with a positive Mantoux reaction (PUBMED:20710107). Contact tracing is an essential part of tuberculosis management, and those identified with latent TB infection should be offered preventive treatment (PUBMED:21419057). The search for infected subjects among relatives of patients with infectious tuberculosis is the best method of preventing later development of the disease in populations where the prevalence of tuberculosis is low.
Contact tracing relies on the tuberculin skin test, followed by chest X-ray if indicated and bacteriological examination of sputum (PUBMED:9634687). In a case involving a 15-year-old girl with smear-negative pulmonary tuberculosis, contact tracing revealed that 32% of her classmates had latent infection and 5.7% had active TB, indicating the importance of contact tracing even for smear-negative cases (PUBMED:22768705). A retrospective evaluation of patients with suspected lung tuberculosis due to positive tuberculin skin test (TST) results found that a routine thoracic CT scan is not necessary for asymptomatic patients with only hilar lymphadenopathy findings in PA chest radiographs (PUBMED:26634258). In summary, while a positive Mantoux test indicates exposure to TB, it does not necessarily correlate with active disease or abnormal chest X-ray findings. Therefore, routine chest X-rays may not be necessary for all individuals with a positive Mantoux test, and other diagnostic tests should be considered to confirm active PTB rather than proceeding directly to chest radiography.
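To make the screening arithmetic behind this record concrete, the following is a minimal illustrative sketch in Python (the function name and structure are ours, not from any cited study) that recovers the sensitivity of tuberculin skin testing for radiographically significant infiltrates from the figures reported in PUBMED:8387246, where 17 of 26 significant infiltrates were missed:

def sensitivity(detected: int, total_with_condition: int) -> float:
    # Fraction of people with the condition whom the screening test flags.
    return detected / total_with_condition

missed, total = 17, 26  # counts reported in PUBMED:8387246
print(f"PPD sensitivity: {sensitivity(total - missed, total):.1%}")  # about 34.6%

A sensitivity of roughly 35% is the quantitative content of the abstract's conclusion that purified protein derivative screening is "insensitive" to chest x-ray abnormalities.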
Instruction: Should ABPI be measured in patients with healed venous leg ulcers every three months? Abstracts: abstract_id: PUBMED:15575575 Should ABPI be measured in patients with healed venous leg ulcers every three months? Objective: To assess if three-monthly reassessment of ABPI is necessary and to determine the possibility of identifying which patients may benefit from more frequent reassessment. Method: The sample comprised 88 consecutive patients with 175 limbs attending community leg ulcer clinics for reassessment after leg ulcer healing had been achieved with compression therapy. Outcome measures were: a fall in ABPI to below 0.8, months between reassessments and arterial disease history. Results: ABPI fell from above 0.8 to below 0.8 in seven limbs in six patients over time periods of between six and 24 months. Of these seven limbs, six had an ABPI between 0.8 and 1.0 at the initial assessment, indicating they had some arterial impairment. ABPI fell below 0.7 (0.58) in only one patient, and this was detected at a six-month reassessment. None of the patients with no or just one arterial symptom experienced a reduction in ABPI to below 0.8. Those whose ABPIs fell below this level had two or more arterial symptoms. Conclusion: Three-monthly reassessment of healed limbs is not recommended as most patients who initially have an ABPI above 1.0 are unlikely to change during this time. However, patients with two or more arterial risk factors, diabetes and an initial ABPI of below 1.0 are likely to need closer monitoring. More in-depth research is needed to examine this problem. abstract_id: PUBMED:35855676 The efficacy of low-frequency ultrasound as an added treatment for chronic wounds: A meta-analysis. We performed a meta-analysis to evaluate the effect of low-frequency ultrasound as an added treatment for chronic wounds. A systematic literature search up to May 2022 identified 838 subjects with chronic wounds at the baseline of the included studies; 412 of them received low-frequency ultrasound (225 low-frequency high-intensity contact ultrasound for diabetic foot wound ulcers, and 187 low-frequency low-intensity non-contact ultrasound for venous leg wound ulcers), and 426 received standard care (233 sharp debridements for diabetic foot wound ulcers and 193 sham treatments for venous leg wound ulcers). Odds ratios (OR) and mean differences (MD) with 95% confidence intervals (CIs) were calculated to assess the effect of low-frequency ultrasound as an added treatment for chronic wounds, using dichotomous and continuous methods with a random- or fixed-effect model. Low-frequency high-intensity contact ultrasound was associated with a significantly lower rate of non-healed diabetic foot wound ulcers at ≥3 months (OR, 0.37; 95% CI, 0.24-0.56, P < .001) and a higher percentage of ulcer area reduction (MD, 17.18; 95% CI, 6.62-27.85, P = .002) compared with sharp debridement. Low-frequency low-intensity non-contact ultrasound was associated with a significantly lower rate of non-healed venous leg wound ulcers at ≥3 months (OR, 0.31; 95% CI, 0.15-0.62, P = .001) and a higher percentage of ulcer area reduction (MD, 18.96; 95% CI, 2.36-35.57, P = .03) compared with sham treatment.
Overall, low-frequency ultrasound as an added treatment for diabetic foot and venous leg wound ulcers was associated with a significantly lower rate of non-healed chronic wound ulcers at ≥3 months and a higher percentage of ulcer area reduction compared with standard care. These outcomes should be interpreted with caution because of the small sample sizes of the 17 studies in the meta-analysis and the low number of studies in certain comparisons. abstract_id: PUBMED:30485668 ABPI reporting and compression recommendations in global clinical practice guidelines on venous leg ulcer management: A scoping review. Clinical practice guidelines (CPGs) for venous leg ulcer (VLU) management recommend below-knee compression to improve healing outcomes after calculating the ankle-brachial pressure index (ABPI) to rule out significant arterial disease. This systematic scoping review aimed to complete a qualitative and quantitative content analysis of international CPGs for VLU management to determine if consensus existed in relation to recommendations for compression application based on an ABPI reading and clinical assessment. Our review shows that there is a lack of consensus across 13 VLU CPGs and a lack of clear guidance in relation to the specific ABPI range within which compression therapy can be safely applied. An area of uncertainty and disagreement exists in relation to an ABPI between 0.6 and 0.8, with some guidelines advocating that compression is contraindicated and others that there should be reduced compression. This has implications in clinical practice, including when it is safe to apply compression. In addition, the inconsistency in the levels of evidence and the grades of recommendation makes it difficult to compare across guidelines. abstract_id: PUBMED:23170845 Chronic venous ulceration of leg associated with peripheral arterial disease: an underappreciated entity in developing country. Chronic venous ulcers can often be associated with asymptomatic peripheral arterial disease (PAD), which usually remains undiagnosed, adding significantly to the morbidity of these patients. The Ankle-Brachial Pressure Index (ABPI) is suggested for PAD evaluation. Many PAD studies were conducted in western countries, but there is a scarcity of data on the prevalence of PAD in clinical venous ulcer patients in developing countries. We conducted a study in a tertiary care hospital in the eastern part of India to find out the prevalence of PAD in venous ulcer patients, and also to assess the sensitivity of ABPI as a diagnostic tool in these patients. We evaluated clinically diagnosed patients with venous ulcer using ABPI and colour Doppler study for the presence of PAD. Possible associations such as age, sex, body mass index (BMI), smoking, hypertension and atherosclerosis were studied. All results were analysed using the software Statistica version 6. PAD was present in 23 (27.71%) patients. Older age, longer duration, smoking, high BMI and hypertension were found to be significantly associated with PAD. A very strong level of agreement was found between venous Doppler and ABPI. Assessment for the presence of PAD is important in all clinically diagnosed venous ulcer patients. ABPI, being a simple, non-invasive outpatient department (OPD)-based procedure, can be routinely used in cases of venous ulcer to find the hidden cases of PAD even in developing countries.
abstract_id: PUBMED:36736699 Prevalence and predictors of combined >50% iliocaval venous obstruction and superficial venous reflux in chronic venous insufficiency patients with healed or active venous leg ulcer. Objective: The aim of the present study was to investigate the prevalence and predictors of combined >50% iliocaval venous obstruction (ICVO) and superficial venous reflux (SVR) in patients with chronic venous insufficiency (CVI) with a healed (C5) or an active (C6) venous leg ulcer (VLU). Methods: We conducted a retrospective review of prospectively collected data from patients with CVI with CEAP (Clinical, Etiology, Anatomy, Pathophysiology) class C5 (healed ulcer) or C6 (active ulcer) with SVR who had been treated at our institution from February 2017 to January 2018. The demographic, clinical, and surgical data and duplex ultrasound and computed tomography venography findings were collected. We used OsiriX MD, version 2.9, software (Pixmeo SARL, Bernex, Switzerland) to measure the vein diameter via multiplanar reconstruction. The prevalence of combined >50% ICVO and SVR was evaluated, and univariate and multivariate analyses were performed to identify the independent predictors of >50% ICVO in patients with CVI and SVR. Results: A total of 79 limbs from 67 patients were enrolled. The mean age was 59.82 ± 12.86 years, the mean body mass index was 28.68 ± 6.41 kg/m², and 41.8% were men. The prevalence of >50% ICVO in the patients with SVR was 31.6%. Univariate analysis showed that a history of deep vein thrombosis (DVT) in the affected leg (P = .001), a VLU in the left leg (P = .033), a history of a recurrent VLU (P = .038), and reversed flow in the superficial epigastric vein (P = .004) were significantly associated with >50% ICVO in patients with CVI and SVR. Multivariate analysis revealed a history of DVT in the affected leg (adjusted odds ratio [aOR], 8.31; 95% confidence interval [CI], 2.29-30.19; P = .001), a VLU in the left leg (aOR, 3.95; 95% CI, 1.18-13.19; P = .026), and a history of a recurrent VLU (aOR, 3.08; 95% CI, 1.02-9.32; P = .047) to be independently associated with combined >50% ICVO and SVR in patients with CVI. Conclusions: The prevalence of combined >50% ICVO and SVR in patients with CVI and CEAP C5 or C6 was 31.6%. The independent predictors of combined >50% ICVO and SVR in those with CVI were a history of DVT in the affected leg, a VLU in the left leg, and recurrent VLUs. abstract_id: PUBMED:28408089 Editor's Choice - Mid-term Outcomes of Endovenous Laser Ablation in Patients with Active and Healed Venous Ulcers: A Follow-up Study. Objectives: The aim of this study was to assess the mid-term ulcer recurrence rate in patients with healed or active venous ulcers treated with endovenous laser ablation (EVLA) for incompetent superficial axial veins and to search for possible risk factors for non-healing and recurrence. Methods: Consecutive patients treated with EVLA because of a healed or active venous ulcer between 2006 and 2013 were identified in the medical records and quality registry and invited to follow-up, including clinical history, study examination, duplex ultrasound scanning, ankle brachial pressure, photoplethysmography, venous clinical severity score (VCSS), and health-related quality of life (HRQoL) measured with EQ5D. Of 228 patients, 170 (195 legs) fulfilled the inclusion criteria. Twenty patients were interviewed by phone, 27 were unreachable and 11 were excluded.
Univariate and multivariate regression analyses were performed to identify possible risk factors for recurrence. Results: The mean follow-up time was 41 months (range 14-89 months). The average age was 66.6 years (range 36-87 years). All 86 legs operated on for an active ulcer had this ulcer healed sometime between the operation and the study examination, but thereafter it recurred in 14 patients (16%). In 109 legs operated on for a healed ulcer, the ulcer recurred in 17 legs (16%). Complications such as permanent sensory loss were seen in 16 legs (8%) and deep venous thrombosis in two legs (1%). Thirty legs (15%) were re-treated for superficial venous incompetence (SVI). Reduced ankle mobility was a risk factor for recurrence in both univariate and multivariate analysis (p=.048). Conclusions: These midterm results demonstrate that endovenous laser ablation of SVI in patients with healed or active venous ulcers achieves good healing and low ulcer recurrence rates, with a low rate of complications and an acceptable re-intervention rate. abstract_id: PUBMED:20559187 Efficacy of a two-component compression system for the therapy of venous leg ulcers. Objective: The performance of a two-component compression system in daily practice was assessed in an outpatient clinic where patients with a wide variety of comorbidities and wound characteristics are treated. Methods: In this single-centre observational study, patients with venous leg ulcers (VLUs) on one or both legs were treated with a two-component compression system for up to 12 months or until the ulcer healed. The only exclusion criteria were an ABPI <0.8, immobilisation and limited ankle joint flexibility. Study assessment parameters included ulcer size on entry into the study, the presence of skin irritations, occurrence of adverse events and ulcer recurrence. The primary outcome measure was the mean time to healing. Results: In total, 136 patients with VLUs were included. Baseline median ulcer duration was 7.5 months and the baseline median size was 4.3 cm². The average reduction in total ulcer surface was 2.9 ± 5.5 cm² per month. Of the wounds, 90.4% healed after 12 months; the mean healing time was three months (Kaplan-Meier, 95% CI: 3-4). The baseline wound size had a significant influence on mean healing time (Cox proportional hazards model; p<0.0001): wounds >4.3 cm² healed within a mean of five months, while wounds <4.3 cm² healed within a mean of two months (log-rank test: p<0.00001). Conclusion: Healing results were encouraging and suggest good applicability in the daily practice of VLU therapy. Further studies will include systematic assessment of patient concordance. Declaration Of Interest: None. abstract_id: PUBMED:18666719 Pulse oximetry index: a simple arterial assessment for patients with venous disease. Objective: To provide additional safety data comparing ankle brachial pressure index (ABPI) and pulse oximetry (Lanarkshire Oximetry Index, LOI) as measures of arterial circulation in patients with venous disease of the leg. Method: A total of 107 patients (195 legs) attending hospital leg ulcer clinics participated in this prospective open study. We attempted to measure brachial and foot arterial pressures in all patients using both the handheld Doppler method (ABPI) and the pulse oximeter method (LOI). Features of patients with limbs in which either the ABPI or LOI could not be assessed were documented. ABPI and LOI values were compared, and agreement between the two assessment methods was assessed.
Results: We found the LOI measurement to be a simpler technique than Doppler ABPI measurement, with an endpoint less prone to the subjective variability associated with the Doppler method. Of the 195 legs assessed, we obtained LOI in 10 in which an ABPI could not be recorded. LOI could not be recorded in only one leg. There was a linear association (p<0.001) and fair agreement (kappa=0.303) between LOI and ABPI in the 184 legs in which both ratios could be measured. There was no evident tendency for LOI to read either low or high compared with ABPI. Conclusion: Pulse oximetry LOI is a simple alternative to Doppler ABPI in the screening of patients for arterial disease that could be a contraindication to, or require modification of, compression therapy. It can be measured in some legs that cannot be assessed by Doppler ultrasound. abstract_id: PUBMED:16050236 The views of patients living with healed venous leg ulcers. Aim: To examine the views of patients with a diagnosis of venous insufficiency who had experienced at least one episode of venous ulceration that had been treated and healed. Method: Ten participants (five female, five male) were recruited from a community-based healed leg ulcer clinic. In-depth semi-structured interviews were conducted with participants. The interview data were then transcribed and analysed for themes and patterns. Findings: Three themes emerged: symptoms, treatment and coping mechanisms, each with sub-themes. Long-term pain, changes in body image, limitations to lifestyle and difficulties with treatment were described by participants. Level of knowledge was found to be related to the coping measures demonstrated by participants, which included non-acceptance and normalisation. Conclusion: Future developments and reviews of existing services in the overall management of patients with venous ulceration need to include a shift of emphasis to encompass the chronic nature of the underlying disease, with socially acceptable interventions aimed at controlling or limiting the consequences to the patient. abstract_id: PUBMED:14518609 Flavonoid treatment in patients with healed venous ulcer: flow cytometry analysis suggests increased CD11b expression on neutrophil granulocytes in the circulation. The objective was to determine the activation of white blood cells (WBCs) and endothelial cells in patients with healed venous ulcer and the influence of the standing position and of treatment with flavonoids. Ten patients with a healed venous ulcer were treated with a flavonoid substance (90% diosmin), 1000 mg three times daily for 30 days. Blood samples were taken from arm and dorsal foot veins before and after standing for 30 minutes. Blood sampling was performed before treatment, after three days, one month and three months. The activation of WBCs was determined by measuring adhesion molecule CD11b and CD18 expression on the surface of granulocytes and monocytes. In addition, interleukin 6 (IL-6), IL-8, soluble E-selectin (sE-selectin), sL-selectin and sICAM-1 levels in serum were quantified. The results showed that standing did not influence any of the measured parameters significantly. Expression of CD11b adhesion molecules on granulocytes was significantly up-regulated (p = 0.044) after treatment with flavonoids for one month, but this increase was not significant (p = 0.056) two months after the treatment period compared with the baseline level. The expression of CD18 remained unchanged.
Baseline expression of CD11b or CD18 on monocytes did not change significantly during the study period. Nor was any significant change observed in the levels of IL-6, IL-8 or the soluble adhesion molecules. It was concluded that flavonoid treatment for 30 days increased the expression of CD11b adhesion molecules on circulating granulocytes. No general effect on the inflammatory process could be observed as assessed by levels of cytokines and soluble adhesion molecules. Possible explanations for these findings could be that a decreased number of primed granulocytes leave the circulation due to a changed WBC/endothelial cell interaction, or that flavonoids have a direct effect on granulocytes. Further studies are needed to clarify the mode of action of flavonoids in chronic venous disease. Answer: Based on the provided abstracts, it is not recommended to measure ABPI (Ankle-Brachial Pressure Index) in patients with healed venous leg ulcers every three months. The study in abstract PUBMED:15575575 suggests that three-monthly reassessment of healed limbs is not necessary for most patients, especially those who initially have an ABPI above 1.0, as they are unlikely to experience significant changes during this time. However, the study does recommend closer monitoring for patients with two or more arterial risk factors, diabetes, and an initial ABPI below 1.0, as they are more likely to need frequent reassessment. The abstract does not specify an exact frequency for ABPI measurement in these higher-risk patients, but it implies that a tailored approach based on individual risk factors is more appropriate than a blanket three-monthly reassessment, and it calls for further in-depth research. In short, routine three-monthly ABPI reassessment of all patients with healed venous leg ulcers is not supported by the evidence, but more frequent monitoring may be beneficial for certain high-risk groups.
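Because this entire record turns on the 0.8 ABPI cut-off, a minimal sketch may help readers unfamiliar with the index. ABPI is simply the ratio of ankle to brachial systolic pressure; the Python function and the example pressures below are our illustration (the 0.8 threshold is taken from the abstracts above, the rest is assumed for demonstration):

def abpi(ankle_systolic_mmhg: float, brachial_systolic_mmhg: float) -> float:
    # Ankle-brachial pressure index: ankle systolic over brachial systolic pressure.
    return ankle_systolic_mmhg / brachial_systolic_mmhg

# Hypothetical reading: 104 mmHg at the ankle against 130 mmHg at the arm.
value = abpi(ankle_systolic_mmhg=104.0, brachial_systolic_mmhg=130.0)
print(f"ABPI = {value:.2f}")  # 0.80, exactly the boundary used in these studies
# ABPI >= 0.8: compression therapy generally regarded as safe in these abstracts.
# ABPI <  0.8: suggests significant arterial impairment; management is reassessed.

The arithmetic is trivial, but it clarifies what the monitoring question is really about: whether an index near 1.0 can drift below 0.8 quickly enough to justify three-monthly remeasurement.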
Instruction: Is atrial fibrillation always a culprit of stroke in patients with atrial fibrillation plus stroke? Abstracts: abstract_id: PUBMED:24217395 Is atrial fibrillation always a culprit of stroke in patients with atrial fibrillation plus stroke? Background: Some ischemic strokes in patients with atrial fibrillation (AF) are caused by noncardioembolic etiologies (AF-unrelated stroke) rather than by AF itself (AF-related stroke). However, most clinical trials on the risk of stroke in AF have not distinguished between these. We investigated the frequency and features of AF-unrelated versus AF-related strokes in patients with AF plus ischemic stroke. We hypothesized that certain clinical factors, including chronicity of AF, treatment at the time of stroke onset and echocardiographic findings, may help to discriminate between AF-related and AF-unrelated strokes. The mechanisms and antithrombotic medications at the time of stroke recurrence in the two groups were also examined. Methods: Consecutive patients with ischemic stroke within 7 days of symptom onset and with AF were included. Patients were classified according to previously published criteria. Clinical factors including CHADS2 and CHA2DS2-VASc scores and transthoracic echocardiographic (TTE) findings were evaluated. Results: Of 522 patients, 424 (81.2%) were grouped as AF-related stroke and 90 (17.2%) were classified as AF-unrelated stroke. Among the patients with AF-unrelated stroke, 51 (9.8%) were categorized as possible large artery atherosclerosis and 38 (7.3%) as possible small artery occlusion; 1 patient (0.2%) was assigned to a miscellaneous cause. The AF-related and AF-unrelated strokes had similar CHADS2 and CHA2DS2-VASc scores. However, compared to AF-unrelated stroke, AF-related stroke was independently associated with female sex (odds ratio, OR, 2.19; 95% confidence interval, CI, 1.18-4.05), sustained AF (OR, 2.09; 95% CI, 1.21-3.59), inadequate anticoagulation at stroke onset (OR, 3.21; 95% CI, 1.33-7.75) and left ventricular dysfunction on TTE (OR, 2.84; 95% CI, 1.40-5.74). We identified 26 patients who experienced 2 strokes during the study period. The initial stroke subtype was a strong predictor of the recurrent stroke mechanism (p < 0.001). Among 17 events of AF-related recurrent stroke in this subpopulation, only 2 strokes (11.8%) occurred in a setting of adequate anticoagulation, whereas 4 out of 9 patients (44.4%) who had AF-unrelated strokes at recurrence were sufficiently anticoagulated at the time of admission (p = 0.138). Conclusion: AF is not always a culprit of stroke in patients with AF plus ischemic stroke; approximately one sixth of these cases are unrelated to AF and have distinct characteristics compared to AF-related stroke. There are significant differences in some clinical and TTE parameters between AF-related and AF-unrelated stroke. Future studies are warranted to optimize strategies for risk stratification, treatment and prevention of stroke in these patients. abstract_id: PUBMED:37692040 Transcatheter aortic valve replacement with the VenusA-Pro and VenusA-Plus systems: preliminary experience in China. Background: The outcomes of transcatheter aortic valve replacement (TAVR) employing the second-generation retrievable VenusA-Pro and VenusA-Plus delivery systems with the self-expanding VenusA-Valve have not been described yet. This study aims to report the outcomes of these two second-generation delivery systems.
Methods: From January 2022 to April 2023, we prospectively enrolled patients with severe aortic stenosis undergoing TAVR with VenusA-Pro from three centers across China in this first-in-man study and retrospectively identified those undergoing TAVR with VenusA-Plus. All outcomes were reported according to the Valve Academic Research Consortium 3 definitions. The primary outcome was 30-day all-cause mortality. Results: A total of 156 patients were included, of whom 46 underwent TAVR with VenusA-Pro and 110 underwent TAVR with VenusA-Plus. The median Society of Thoracic Surgeons score was 2.1%, the prevalence of bicuspid anatomy was 55.1%, and the mean aortic root calcification volume was 693 mm³. The technical success rate was 91.7%, comparable between the VenusA-Pro and VenusA-Plus groups (87.0% vs. 93.6%, P = 0.169). The 30-day all-cause mortality was 2.6%, similar between the VenusA-Pro and VenusA-Plus groups (2.2% vs. 2.7%, P = 0.842). No myocardial infarction occurred. The incidences of stroke (0.6%), major bleeding (3.8%), major vascular complications (5.1%), acute kidney injury (9.0%), permanent pacemaker implantation (5.1%), new-onset atrial fibrillation (5.8%), and moderate-to-severe paravalvular aortic regurgitation (6.0%) were favorable and comparable between the two groups. The clinical outcomes were similar between patients with bicuspid and tricuspid aortic valves, except that the incidence of permanent pacemaker implantation was lower in patients with bicuspid anatomy (1.2% vs. 10.6%, P = 0.010). Conclusions: The 30-day outcomes of TAVR with VenusA-Pro and VenusA-Plus were favorable and comparable. abstract_id: PUBMED:35318490 Isolated pontine infarction versus pontine plus infarction: prevalence, pathogenic mechanism, and outcomes. Background: Pontine infarctions are often associated with extrapontine infarcts. We aimed to elucidate differences in clinical characteristics, mechanisms, and functional outcomes between isolated pontine infarction (IPI) and pontine plus infarction (PPI). Methods: We studied patients with acute pontine infarction between January 2019 and August 2021 and divided them into IPI and PPI groups according to diffusion-weighted magnetic resonance imaging. Vertical locations were classified into rostral, middle, and caudal, and horizontal locations were categorized as basal/tegmental and unilateral/bilateral. Factors associated with poor 90-day outcome (modified Rankin scale 3-6) were analyzed. Results: Among 241 patients, 170 (70.5%) had IPI and 71 (29.5%) had PPI. The most frequently observed extrapontine areas were cerebellar (59.1%), followed by the posterior cerebral artery territory (45.1%). Mental status changes, sensory changes, and ataxia were more common, and motor dysfunction was less common, in PPI patients. The PPI patients more often had rostral (P < 0.001) and bilateral (P < 0.001) lesions, as well as moderate/severe vascular stenosis and atrial fibrillation; accordingly, large artery disease (LAD) and cardioembolism were more common stroke mechanisms (P < 0.001). In IPI patients, high initial National Institutes of Health Stroke Scale (NIHSS) score (adjusted odds ratio [aOR] = 1.38; P = 0.001) and old age (aOR = 1.05; P = 0.049) were associated with poor functional outcome, whereas moderate/severe stenosis (reference: no stenosis, aOR = 7.17; P = 0.014) and high initial NIHSS score (aOR = 1.39; P = 0.006) were related to unfavorable outcomes in PPI patients.
Conclusions: PPI patients more often had extensive pontine lesions, LAD and cardioembolism, and their outcome was more often influenced by underlying severe vascular disease. These differences need to be considered in preventive and therapeutic strategies. abstract_id: PUBMED:32101311 Outcomes Associated With Oral Anticoagulants Plus Antiplatelets in Patients With Newly Diagnosed Atrial Fibrillation. Importance: Patients with nonvalvular atrial fibrillation at risk of stroke should receive oral anticoagulants (OAC). However, approximately 1 in 8 patients in the Global Anticoagulant Registry in the Field (GARFIELD-AF) registry are treated with antiplatelet (AP) drugs in addition to OAC, with or without documented vascular disease or other indications for AP therapy. Objective: To investigate baseline characteristics and outcomes of patients who were prescribed OAC plus AP therapy vs OAC alone. Design, Setting, And Participants: Prospective cohort study of the GARFIELD-AF registry, an international, multicenter, observational study of adults aged 18 years and older with recently diagnosed nonvalvular atrial fibrillation and at least 1 risk factor for stroke, enrolled between March 2010 and August 2016. Data were extracted for analysis in October 2017 and analyzed from April 2018 to June 2019. Exposure: Participants received either OAC plus AP or OAC alone. Main Outcomes And Measures: Clinical outcomes were measured over 3 and 12 months. Outcomes were adjusted for 40 covariates, including baseline conditions and medications. Results: A total of 24 436 patients (13 438 [55.0%] male; median [interquartile range] age, 71 [64-78] years) were analyzed. Among eligible patients, those receiving OAC plus AP therapy had a greater prevalence of cardiovascular indications for AP, including acute coronary syndromes (22.0% vs 4.3%), coronary artery disease (39.1% vs 9.8%), and carotid occlusive disease (4.8% vs 2.0%). Over 1 year, patients treated with OAC plus AP had significantly higher incidence rates of stroke (adjusted hazard ratio [aHR], 1.49; 95% CI, 1.01-2.20) and any bleeding event (aHR, 1.41; 95% CI, 1.17-1.70) than those treated with OAC alone. These patients did not show evidence of reduced all-cause mortality (aHR, 1.22; 95% CI, 0.98-1.51). Risk of acute coronary syndrome was not reduced in patients taking OAC plus AP compared with OAC alone (aHR, 1.16; 95% CI, 0.70-1.94). Patients treated with OAC plus AP also had higher rates of all clinical outcomes than those treated with OAC alone over the short term (3 months). Conclusions And Relevance: This study challenges the practice of coprescribing OAC plus AP unless there is a clear indication for adding AP to OAC therapy in newly diagnosed atrial fibrillation. abstract_id: PUBMED:37332266 Should we be screening people with diabetes for atrial fibrillation? Exploring patients' views. Atrial fibrillation (AF) and diabetes are increasingly prevalent worldwide, both increasing stroke risk. AF can be detected by patient-led electrocardiogram (ECG) screening applications. Understanding patients' views around AF screening is important when considering recommendations, and this study explores these views where there is an existing diagnosis of diabetes. Nine semi-structured qualitative interviews were conducted with participants from a previous screening study (using a mobile ECG device) who were identified with AF. Thematic analysis was completed using NVivo 12 Plus software and themes were identified within each research question for clarity.
Themes were identified in four groups: 1. patients' understanding of AF - the 'concept of irregularity' and 'consideration of consequence'; 2. views on screening - 'screening as a resource-intensive initiative', 'fear of outcomes from screening' and 'expectations of screening reliability'; 3. views on incorporating screening into routine care - 'importance of screening convenience'; and 4. views on the screening tool - 'technology as a barrier' and 'feasibility of the mobile ECG recording device for screening'. In conclusion, eliciting patients' views has demonstrated the need for clear and concise information around the delivery of an AF diagnosis. Screening initiatives should factor in location, convenience, personnel, and cost, all of which were important for promoting screening inclusion. abstract_id: PUBMED:22221944 Cost-effectiveness of clopidogrel plus aspirin for stroke prevention in patients with atrial fibrillation in whom warfarin is unsuitable. Guidelines for atrial fibrillation (AF) recommend clopidogrel plus aspirin as an alternative stroke prevention strategy in patients in whom warfarin is unsuitable. A Markov model was constructed from a Medicare perspective using data from the Atrial Fibrillation Clopidogrel Trial with Irbesartan for Prevention of Vascular Events-A (ACTIVE-A) trial and other published studies. Base-case analysis evaluated patients 65 years old with AF, a CHADS2 score of 2 (congestive heart failure, 1 point; hypertension defined as blood pressure consistently >140/90 mm Hg or antihypertension medication, 1 point; age ≥75 years, 1 point; diabetes mellitus, 1 point; previous stroke or transient ischemic attack, 2 points), and a lower risk for major bleeding. Patients received clopidogrel 75 mg/day plus aspirin or aspirin alone. Patients were followed for up to 35 years. Outcomes included quality-adjusted life-years (QALYs), costs (in 2011 American dollars), and incremental cost-effectiveness ratios. Quality-adjusted life expectancy and costs were 9.37 QALYs and $88,751 with clopidogrel plus aspirin and 9.01 QALYs and $79,057 with aspirin alone. The incremental cost-effectiveness ratio for clopidogrel plus aspirin was $26,928/QALY. In 1-way sensitivity analysis using a willingness-to-pay threshold of $50,000/QALY, clopidogrel plus aspirin was no longer cost effective when the CHADS2 score was ≤1, major bleeding risk with aspirin was ≥2.50%/patient-year, the relative risk decrease for ischemic stroke with clopidogrel plus aspirin versus aspirin alone was <25%, or the utility of being healthy with AF on combination therapy decreased to 0.95. Monte Carlo simulation demonstrated that clopidogrel plus aspirin was cost effective in 55% and 73% of 10,000 iterations assuming willingness-to-pay thresholds of $50,000 and $100,000/QALY, respectively. In conclusion, clopidogrel plus aspirin appears cost-effective compared to aspirin alone for stroke prevention in patients with AF with a CHADS2 score of ≥2 and a lower risk of bleeding. abstract_id: PUBMED:34401468 Optimisation of oral anticoagulants for patients with atrial fibrillation within 12 months after percutaneous coronary intervention: A meta-analysis and systematic review. Background: The optimal antithrombotic strategy, especially regarding oral anticoagulants (OACs), for atrial fibrillation (AF) patients with bleeding and thrombosis risk after percutaneous coronary intervention (PCI) remains unknown. This study explored the optimal oral anticoagulants for AF patients after PCI using a meta-analysis.
Methods: Randomised controlled trials were identified from PubMed, Embase, and the Cochrane Library through December 2020. Risk ratios, 95% confidence intervals, and random-effects models were used to compare different antithrombotic strategies through network meta-analysis, and the combinations of antithrombotic agents were ranked according to the surface under the cumulative ranking curve and rankograms. Interval plots were drawn to observe pairwise comparisons between the different strategies. Results: Five studies of 11,532 patients were included. Factor IIa inhibitor 110 mg bid plus a P2Y12 inhibitor had the greatest advantage for reducing Thrombolysis In Myocardial Infarction (TIMI) major or minor bleeding; factor Xa inhibitor plus a P2Y12 inhibitor had the greatest advantage for reducing International Society on Thrombosis and Haemostasis major bleeding. For patients at risk of stroke plus all-cause death, factor IIa inhibitor 150 mg bid plus a P2Y12 inhibitor should be prioritised, and for those at risk of myocardial infarction and stent thrombosis, vitamin K antagonists plus a P2Y12 inhibitor were preferred. Conclusion: Factor IIa inhibitor 110 mg, factor IIa inhibitor 150 mg, factor Xa inhibitor and vitamin K antagonists should be selected in different situations. abstract_id: PUBMED:34446528 Ischaemic stroke in anticoagulated patients with atrial fibrillation. Anticoagulation substantially reduces the risk of stroke in patients with atrial fibrillation (AF). However, recent studies have shown that 22%-36% of patients with AF who suffer an ischaemic stroke (IS) were already on anticoagulation. In this narrative review, we provide an overview of risk factors, mechanisms, management of acute IS and strategies for secondary prevention for patients with AF with stroke despite oral anticoagulation. For this paper, we reviewed the available literature from important studies (randomised clinical trials, meta-analyses, reviews and case series) on patients with IS despite anticoagulation. We focused on recent studies that examined the safety and efficacy of acute stroke treatments and evaluation and management strategies for secondary prevention. The literature review suggests that patients with AF with IS despite anticoagulation are a heterogeneous group with several possible mechanisms, which may include reduced adherence or non-adherence to anticoagulation, competing non-cardioembolic stroke aetiologies, or cardioembolic mechanisms separate from AF. The identification of one or more possible mechanisms of stroke despite anticoagulation may allow for a more targeted and individualised approach for secondary prevention. There are limited data to guide management in such patients, and strategies to prevent recurrent strokes include strict risk factor control and therapies targeting the most likely stroke mechanism. In cases where AF is suspected to be the culprit, clinical trials are needed to test the safety and efficacy of left atrial appendage occlusion plus anticoagulation versus continued anticoagulation alone.
Background: Dual antiplatelet therapy (DAPT) with aspirin and a P2Y12 inhibitor is recommended for patients with coronary heart disease (CHD) undergoing percutaneous coronary intervention (PCI) to prevent thrombosis, but it also increases the risk of gastrointestinal bleeding. Rivaroxaban, a novel oral anticoagulant, combined with a P2Y12 receptor inhibitor reduces adverse events in patients with CHD and atrial fibrillation who have undergone PCI. The effect of rivaroxaban plus a P2Y12 inhibitor on reducing bleeding events in patients with CHD and gastrointestinal disease (GID) undergoing PCI remains unclear. Method: The study is a prospective, single-center, randomized controlled trial. A total of 1020 patients with CHD and GID undergoing PCI will be enrolled. Patients are randomized (1:1) to receive either rivaroxaban 10 mg plus clopidogrel 75 mg daily or aspirin 100 mg plus clopidogrel 75 mg daily; both treatments will last 6 months. The primary endpoint is Bleeding Academic Research Consortium (BARC) type 2-5 bleeding requiring medical intervention. The secondary endpoint is a composite of major adverse cardiovascular and cerebrovascular events (MACCE), including all-cause death, cardiac death, nonfatal myocardial infarction, stent thrombosis, ischemia-driven target vessel revascularization, and stroke. Discussion: The objective of this study is to evaluate the efficacy and safety of rivaroxaban plus clopidogrel versus aspirin plus clopidogrel in patients with CHD and GID undergoing PCI. We aim to explore an optimized antithrombotic strategy that achieves the same anti-ischemic effect as standard DAPT without increasing the risk of gastrointestinal bleeding for patients with CHD and GID undergoing PCI. Trial Registration: This protocol is registered at the Chinese Clinical Trial Registry under the number ChiCTR2100044319. This publication is based on version 1.4 of the trial protocol, dated Sep 6, 2021. abstract_id: PUBMED:33746024 Efficacy and safety of rivaroxaban and ticagrelor in elderly patients with atrial fibrillation undergoing percutaneous coronary intervention. Aims: To compare the efficacy and safety of a dual therapy (rivaroxaban and ticagrelor) with a triple therapy (aspirin, clopidogrel and warfarin) in elderly Chinese patients with nonvalvular atrial fibrillation (NVAF) undergoing percutaneous coronary intervention (PCI). Methods: A total of 106 elderly Chinese patients with NVAF after PCI were randomly divided into a dual therapy group treated with ticagrelor 90 mg twice daily and rivaroxaban 15 mg once daily after PCI, and a triple therapy group treated with aspirin 100 mg and clopidogrel 75 mg once daily combined with the dose-adjusted vitamin K antagonist warfarin once daily. The mean follow-up time was 1 year. The primary endpoint was the composite rate of death from cardiovascular causes, myocardial infarction, stroke or stent thrombosis. The safety endpoint was clinically significant bleeding (a composite of major, minor and minimal bleeding). Results: There were no significant differences between the 2 groups regarding the basic characteristics of the patients. The primary composite endpoint of the dual therapy group after 1 year was not significantly different from that of the triple therapy group (16.7% vs 15.2%, P = 0.86; HR 1.02; 95% CI: 0.82-1.24), but there was a significant difference in the incidence of hemorrhage (7.4% vs 26.9%, P = 0.01; HR 0.71; 95% CI: 0.62-0.83) between the 2 groups.
Conclusions: In elderly Chinese patients with NVAF undergoing PCI, the efficacy of the dual regimen (ticagrelor plus rivaroxaban) was comparable to that of the triple antithrombotic regimen (warfarin plus dual antiplatelet therapy). The overall incidence of bleeding was significantly reduced with the dual regimen compared to the triple regimen. Answer: No, atrial fibrillation (AF) is not always the culprit of stroke in patients with AF plus ischemic stroke; approximately one sixth of these cases are unrelated to AF and have distinct characteristics compared to AF-related stroke (PUBMED:24217395). The study found that 17.2% of strokes in patients with AF were classified as AF-unrelated, with possible large artery atherosclerosis and possible small artery occlusion being the most common causes. AF-related and AF-unrelated strokes had similar CHADS2 and CHA2DS2-VASc scores, but AF-related stroke was independently associated with female sex, sustained AF, inadequate anticoagulation at stroke onset, and left ventricular dysfunction on transthoracic echocardiography (TTE). The initial stroke subtype was also a strong predictor of the recurrent stroke mechanism.
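Two quantities in this record are mechanical enough to verify directly. The Python sketch below is our illustration rather than code from the cited studies: it scores CHADS2 exactly as the point definitions are spelled out in PUBMED:22221944 and re-derives that abstract's incremental cost-effectiveness ratio (ICER) from its reported costs and QALYs; the particular risk-factor combination in the example is hypothetical, since the abstract fixes only the total score of 2:

def chads2(chf: bool, hypertension: bool, age_ge_75: bool,
           diabetes: bool, prior_stroke_or_tia: bool) -> int:
    # One point each for CHF, hypertension, age >= 75 years and diabetes;
    # two points for previous stroke or TIA (definitions from PUBMED:22221944).
    return (int(chf) + int(hypertension) + int(age_ge_75) + int(diabetes)
            + 2 * int(prior_stroke_or_tia))

# A 65-year-old with, say, hypertension and diabetes scores 2 -- the base case.
print(chads2(chf=False, hypertension=True, age_ge_75=False,
             diabetes=True, prior_stroke_or_tia=False))  # prints 2

# ICER = (cost_new - cost_old) / (QALY_new - QALY_old), with the reported figures:
icer = (88_751 - 79_057) / (9.37 - 9.01)
print(f"ICER = ${icer:,.0f}/QALY")  # about $26,928/QALY, matching the abstract

Re-deriving the $26,928/QALY figure from $9,694 in extra cost over 0.36 extra QALYs shows that the abstract's reported numbers are internally consistent.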
Instruction: Are apparent associations between parental representations and psychosis risk mediated by early trauma? Abstracts: abstract_id: PUBMED:16223425 Are apparent associations between parental representations and psychosis risk mediated by early trauma? Objective: It was investigated whether the reported association between representations of parental rearing style and psychosis does not represent a main effect, but instead is a proxy indicator of the true underlying risk factor of early trauma. Method: In a general population sample of 4045 individuals aged 18-64 years, first-ever onset of positive psychotic symptoms at 3-year follow-up was assessed using the Composite International Diagnostic Interview and clinical interviews if indicated. Representations of parental rearing style were measured with the Parental Bonding Instrument (PBI). Results: A lower baseline level of PBI parental care predicted onset of psychotic symptoms 2 years later. However, when trauma was included in the equation, a strong main effect of trauma emerged at the expense of the effect size of PBI low care. Conclusion: The results suggest that associations between representations of parental rearing style and psychosis may be an indicator of the effect of earlier exposure to childhood trauma. abstract_id: PUBMED:36931184 Associations of neurodevelopmental risk factors with psychosis proneness: Findings from a non-clinical sample of young adults. Psychotic disorders often develop along a continuum of subclinical symptoms that include hallucination-like and delusion-like experiences, commonly referred to as psychotic-like experiences (PLEs). To date, a number of neurodevelopmental risk factors for psychosis have been detected, yet their mutual interplay remains unknown. Therefore, we aimed to investigate the additive association of childhood trauma history, reading disabilities and symptoms of attention-deficit/hyperactivity disorder (ADHD) with psychosis proneness. A total of 3000 young adults (58.3% females, aged 18-35 years) with a negative history of psychiatric treatment were recruited to the cross-sectional study through a computer-assisted web interview. Self-reports were administered to measure childhood trauma history, ADHD symptoms and reading disabilities. Linear regression analyses revealed significant main associations of childhood trauma history and reading disabilities with higher levels of PLEs. There were no significant main associations of ADHD with the level of PLEs. However, the associations of all possible interactions between neurodevelopmental risk factors with the level of PLEs were significant. Our findings suggest that childhood trauma history and reading disabilities may additively increase the risk of psychosis. The present findings bring new implications for early intervention strategies in psychosis and support recording the accumulation of neurodevelopmental vulnerabilities in clinical practice. abstract_id: PUBMED:29400347 Effects of early trauma on psychosis development in clinical high-risk individuals and stability of trauma assessment across studies: a review. Early trauma (ET), though broadly and inconsistently defined, has been repeatedly linked to numerous psychological disturbances, including various developmental stages of psychotic disorders. The prodromal phase of psychosis highlights a unique and relevant population that provides insight into the critical periods of psychosis development.
As such, a relatively recent research focus on individuals at clinical high risk (CHR) for psychosis reveals robust associations of early life trauma exposures with prodromal symptoms and function in these cohorts. While prevalence rates of ET in CHR cohorts remain consistently high, methodological measures of traumatic experiences vary across studies, presenting potential problems for the reliability and validity of results. This review aims to 1) highlight the existing evidence identifying associations of ET, of multiple forms, with both symptom severity and transition rates to psychosis in CHR individuals, 2) present data on the variability among trauma assessments and its implications for conclusions about its relationship with clinical variables, 3) describe cognitive deficits common in CHR cohorts, including perceptual and neurocognitive impairments, and their neural correlates, that may modify the relationship of ET to symptoms, and 4) propose future directions for standardization of trauma assessment in CHR cohorts to better understand its clinical and cognitive correlates. abstract_id: PUBMED:35866367 Cumulative environmental risk in early life is associated with mental disorders in childhood. Background: No single environmental factor is a necessary or sufficient cause of mental disorder; multifactorial and transdiagnostic approaches are needed to understand the impact of the environment on the development of mental disorders across the life course. Method: Using linked multi-agency administrative data for 71 932 children from the New South Wales Child Developmental Study, we used logistic regression to examine associations between 16 environmental risk factors in early life (prenatal period to <6 years of age) and later diagnoses of mental disorder recorded in health service data (from age 6 to 13 years), both individually and summed as an environmental risk score (ERS). Results: The ERS was associated with all types of mental disorder diagnoses in a dose-response fashion, such that 2.8% of children with no exposure to any of the environmental factors (ERS = 0), compared with 18.3% of children exposed to eight or more factors (ERS ≥ 8), had been diagnosed with any type of mental disorder up to age 13-14 years. Thirteen of the 16 environmental factors measured (including prenatal factors, neighbourhood characteristics and more proximal experiences of trauma or neglect) were positively associated with at least one category of mental disorder. Conclusion: Exposure to cumulative environmental risk factors in early life is associated with an increased likelihood of presenting to health services in childhood for any kind of mental disorder. In many instances, these factors are preventable or capable of mitigation by appropriate public policy settings. abstract_id: PUBMED:29352556 What causes psychosis? An umbrella review of risk and protective factors. Psychosis is a heterogeneous psychiatric condition for which a multitude of risk and protective factors have been suggested. This umbrella review aimed to classify the strength of evidence for the associations between each factor and psychotic disorders whilst controlling for several biases.
The Web of Knowledge database was searched to identify systematic reviews and meta-analyses of observational studies which examined associations between socio-demographic, parental, perinatal, later factors or antecedents and psychotic disorders, and which included a comparison group of healthy controls, published from 1965 to January 31, 2017. The literature search and data extraction followed PRISMA and MOOSE guidelines. The association between each factor and ICD or DSM diagnoses of non-organic psychotic disorders was graded as convincing, highly suggestive, suggestive, weak, or non-significant according to a standardized classification based on: number of psychotic cases, random-effects p value, largest study 95% confidence interval, heterogeneity between studies, 95% prediction interval, small study effect, and excess significance bias. In order to assess evidence for temporality of association, we also conducted sensitivity analyses restricted to data from prospective studies. Fifty-five meta-analyses or systematic reviews were included in the umbrella review, corresponding to 683 individual studies and 170 putative risk or protective factors for psychotic disorders. Only the ultra-high-risk state for psychosis (odds ratio, OR=9.32, 95% CI: 4.91-17.72) and Black-Caribbean ethnicity in England (OR=4.87, 95% CI: 3.96-6.00) showed convincing evidence of association. Six factors were highly suggestive (ethnic minority in low ethnic density area, second generation immigrants, trait anhedonia, premorbid IQ, minor physical anomalies, and olfactory identification ability), and nine were suggestive (urbanicity, ethnic minority in high ethnic density area, first generation immigrants, North-African immigrants in Europe, winter/spring season of birth in Northern hemisphere, childhood social withdrawal, childhood trauma, Toxoplasma gondii IgG, and non-right handedness). When only prospective studies were considered, the evidence was convincing for the ultra-high-risk state and suggestive for urbanicity only. In summary, this umbrella review found several factors to be associated with psychotic disorders with different levels of evidence. These risk or protective factors represent a starting point for further etiopathological research and for the improvement of the prediction of psychosis. abstract_id: PUBMED:32577908 Traumatic experiences in childhood and adolescence: a meta-analysis of prospective studies assessing risk for psychosis. Evidence of the association between traumatic experiences and psychosis is uncertain with respect to temporal order, clinical outcomes and the role of age and genetic liability. The aim of the present meta-analysis was to explore the temporal relationship between the development of psychosis and traumatic exposure using prospective studies and to examine the role of moderating factors on overall effect sizes. Studies were identified by searching the Embase-Ovid, PsycINFO (EBSCO), PubMed, Scopus and Web of Science databases, yielding an initial total of 9016 papers, of which 23 remained after the screening process. Three sets of meta-analyses estimated the risk of developing psychotic experiences or full clinical psychosis after having experienced maltreatment by an adult, bullying by peers or parental death, using the random-effects model. Bullying by peers (OR = 2.28 [1.64, 4.34]), maltreatment by an adult (OR = 2.20 [1.72, 2.81]) and parental death (OR = 1.24 [1.06, 1.44]) all increased the risk of psychosis.
Moderator analysis showed that negative effects of bullying were detected especially in those with genetic liability for psychosis and exposure to multiple trauma types; studies with a higher prevalence of males showed a stronger risk for those exposed to parental death. Meta-regression found no significant difference between the risk of developing full clinical psychosis and that of developing a psychotic experience. A lack of studies hampered conclusions about the age at which trauma occurred. The cumulative effect of being bullied by peers and experiencing other adversities during childhood and/or adolescence, together with genetic liability for psychosis, appears to confer the highest risk for developing psychotic symptoms later in life. abstract_id: PUBMED:22461484 Childhood adversities increase the risk of psychosis: a meta-analysis of patient-control, prospective- and cross-sectional cohort studies. Evidence suggests that adverse experiences in childhood are associated with psychosis. To examine the association between childhood adversity and trauma (sexual abuse, physical abuse, emotional/psychological abuse, neglect, parental death, and bullying) and psychosis outcome, MEDLINE, EMBASE, PsycINFO, and Web of Science were searched from January 1980 through November 2011. We included prospective cohort studies, large-scale cross-sectional studies investigating the association between childhood adversity and psychotic symptoms or illness, case-control studies comparing the prevalence of adverse events between psychotic patients and controls using dichotomous or continuous measures, and case-control studies comparing the prevalence of psychotic symptoms between exposed and nonexposed subjects using dichotomous or continuous measures of adversity and psychosis. The analysis included 18 case-control studies (n = 2048 psychotic patients and 1856 nonpsychiatric controls), 10 prospective and quasi-prospective studies (n = 41,803) and 8 population-based cross-sectional studies (n = 35,546). There were significant associations between adversity and psychosis across all research designs, with an overall effect of OR = 2.78 (95% CI = 2.34-3.31). The integration of the case-control studies indicated that patients with psychosis were 2.72 times more likely to have been exposed to childhood adversity than controls (95% CI = 1.90-3.88). The association between childhood adversity and psychosis was also significant in population-based cross-sectional studies (OR = 2.99 [95% CI = 2.12-4.20]) as well as in prospective and quasi-prospective studies (OR = 2.75 [95% CI = 2.17-3.47]). The estimated population attributable risk was 33% (16%-47%). These findings indicate that childhood adversity is strongly associated with increased risk for psychosis. abstract_id: PUBMED:37293402 The impact of early factors on persistent negative symptoms in youth at clinical high risk for psychosis. Introduction: Persistent negative symptoms (PNS) are described as continuing moderate negative symptoms. More severe negative symptoms have been associated with poor premorbid functioning in both chronic schizophrenia and first-episode psychosis patients. Furthermore, youth at clinical high risk (CHR) for developing psychosis may also present with negative symptoms and poor premorbid functioning. The aim of this current study was to: (1) define the relationship between PNS and premorbid functioning, life events, trauma and bullying, previous cannabis use, and resource utilization, and (2) examine what explanatory variables best predicted PNS.
Method: CHR participants (N = 709) were recruited from the North American Prodrome Longitudinal Study (NAPLS 2). Participants were divided into two groups: those with PNS (n = 67) versus those without PNS (n = 673). A K-means cluster analysis was conducted to distinguish patterns of premorbid functioning across the different developmental stages. The relationships between premorbid adjustment and other variables were examined using independent-samples t-tests, or chi-square tests for categorical variables. Results: There were significantly more males in the PNS group. Participants with PNS had significantly lower levels of premorbid adjustment in childhood, early adolescence, and late adolescence, compared to CHR participants without PNS. There were no differences between the groups in terms of trauma, bullying, and resource utilization. The non-PNS group had more cannabis use and more desirable and non-desirable life events. Conclusion: In terms of better understanding relationships between early factors and PNS, a prominent factor associated with PNS was premorbid functioning, in particular poor premorbid functioning in later adolescence. abstract_id: PUBMED:32616469 Relationship between childhood trauma and social adaptation in a sample of young people attending an early intervention service for psychosis. Introduction: Childhood trauma has been associated with an increased risk of psychosis, a greater severity of psychopathological symptoms, and a worse functional prognosis in patients with psychotic disorders. The current study aims to explore the relationship between childhood trauma, psychopathology and social adaptation in a sample of young people with first episode psychosis (FEP) or at-risk mental states (ARMS). Material And Methods: The sample included 114 young people (18-35 years old, 81 FEP and 33 ARMS) who were attending an Early Intervention Service for Psychosis. Positive, negative and depressive symptoms were assessed with the PANSS and the Calgary Depression Scale; history of childhood trauma was assessed with the Childhood Trauma Questionnaire; social adaptation was assessed with the Social Adaptation Self-evaluation Scale (SASS). Structural equation modeling (SEM) was used to explore the relationship between childhood trauma, psychopathology and SASS dimensions in the global sample (including FEP and ARMS). An exploratory SEM analysis was repeated in the subsample of FEP patients. Results: ARMS individuals reported more emotional neglect and worse social adaptation compared to FEP. SEM analysis showed that childhood trauma is associated with worse social adaptation, directly for domains involving interpersonal relationships, and mediated by depressive symptoms for domains involving leisure, work and socio-cultural interests. Conclusions: Childhood trauma has a negative effect on social adaptation in young people with early psychosis. Depressive symptoms play a mediating role in this association, especially in the domains of leisure and work. abstract_id: PUBMED:29696166 Design, rationale and feasibility of a multidimensional experimental protocol to study early life stress. There is a rapidly accumulating body of evidence regarding the influential role of early life stress (ELS) upon medical and psychiatric conditions. While self-report instruments, with their intrinsic limitations of recall, remain the primary means of detecting ELS in humans, biological measures are generally limited to a single biological system.
This paper describes the design, rationale and feasibility of a study to simultaneously measure neuroendocrine, immune and autonomic nervous system (ANS) responses to psychological and physiological stressors in relation to ELS. Five healthy university students were recruited by advertisement. Exclusion criteria included chronic medical conditions, psychotic disorders, needle phobia, inability to tolerate pain, and use of anti-inflammatory medications. Participants were clinically interviewed, and physiological recordings were made over a two-hour period before, during and after two acute stressors: the cold pressor test and recalling a distressing memory. The Childhood Trauma Questionnaire and the Parental Bonding Index were utilised to measure ELS. Other psychological measures of mood and personality were also administered. Measurements of heart rate, blood pressure, respiratory rate, skin conductance, skin blood flow and temporal plasma samples were successfully obtained before, during and after acute stress. Participants reported that the extensive psychological and multisystem physiological data collection and stress provocations were tolerable. Most (4/5) participants indicated a willingness to return to repeat the protocol, indicating acceptability. Our protocol is viable and safe in young physically healthy adults and allows us to assess simultaneously neuroendocrine, immune and autonomic nervous system responses to stressors in persons assessed for ELS. Answer: Yes, the apparent associations between parental representations and psychosis risk appear to be mediated by early trauma. A study of a general population sample of 4045 individuals aged 18-64 years found that lower baseline levels of parental care, as measured by the Parental Bonding Instrument (PBI), predicted the onset of psychotic symptoms 2 years later. However, when trauma was included in the analysis, a strong main effect of trauma emerged, reducing the effect size of PBI low care. This suggests that the association between representations of parental rearing style and psychosis may be an indicator of the effect of earlier exposure to childhood trauma (PUBMED:16223425). Furthermore, evidence from various studies supports the notion that early trauma is a significant risk factor for psychosis. A meta-analysis of patient-control, prospective, and cross-sectional cohort studies found that childhood adversities are strongly associated with an increased risk for psychosis, with an estimated population attributable risk of 33% (PUBMED:22461484). Another meta-analysis of prospective studies assessing the risk for psychosis after traumatic experiences in childhood and adolescence also confirmed that traumatic experiences such as bullying by peers, maltreatment by an adult, and parental death all increased the risk of psychosis (PUBMED:32577908). Additionally, a study on youth at clinical high risk for psychosis found that persistent negative symptoms, which are associated with poor premorbid functioning, were significantly related to poor premorbid adjustment in childhood and adolescence, although no differences were found in terms of trauma, bullying, and resource utilization between those with and without persistent negative symptoms (PUBMED:37293402). In summary, the evidence suggests that early trauma plays a mediating role in the association between parental representations and the risk of psychosis, indicating that interventions targeting early trauma may be crucial for preventing the onset of psychotic symptoms.
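A technical aside on the pooled odds ratios quoted throughout this record: they come from random-effects meta-analysis. Below is a minimal Python sketch of DerSimonian-Laird pooling. It reuses the three ORs reported above for bullying, maltreatment and parental death purely as plug-in values to show the mechanics; the published analyses pooled studies within each exposure separately, so the combined figure printed here is illustrative only.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling of odds ratios.
# The (OR, lower, upper) triples reuse numbers quoted above as plug-in values;
# the pooled result is illustrative, not a reproduction of any published analysis.
import math

studies = [
    (2.28, 1.64, 4.34),  # bullying by peers
    (2.20, 1.72, 2.81),  # maltreatment by an adult
    (1.24, 1.06, 1.44),  # parental death
]

# Work on the log scale; back out each standard error from the CI width.
y = [math.log(or_) for or_, lo, hi in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]
w = [1 / s**2 for s in se]  # fixed-effect (inverse-variance) weights

mu_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)  # between-study variance

w_re = [1 / (s**2 + tau2) for s in se]  # random-effects weights
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_mu = math.sqrt(1 / sum(w_re))
print(f"pooled OR = {math.exp(mu):.2f} "
      f"(95% CI {math.exp(mu - 1.96 * se_mu):.2f}-{math.exp(mu + 1.96 * se_mu):.2f}), "
      f"tau^2 = {tau2:.3f}")
```

The same log-scale machinery underlies the heterogeneity (Q, tau-squared) and prediction-interval statistics used to grade evidence in the umbrella review above.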
Instruction: Are eating and psychosocial characteristics in early teenage years useful predictors of eating characteristics in early adulthood? Abstracts: abstract_id: PUBMED:9813760 Are eating and psychosocial characteristics in early teenage years useful predictors of eating characteristics in early adulthood? A 7-year longitudinal study. Objectives: This longitudinal research aimed to determine the utility of psychosocial and eating characteristics in early teenage years in predicting eating attitudes and behaviors in early adulthood. Method: Self-esteem, perfectionism, family function, and eating attitudes were measured at 12 years and eating attitudes and behavior were assessed at 19 years. Sixty-three women (71.6%) remained in the study over the 7 years. Results: Bulimic attitudes in early teenage years were related to subsequent bulimic features, while restrictive attitudes were more closely related to later purging behaviors. Self-esteem and perfectionism were only weakly linked with subsequent eating. However, different perceived family characteristics at the first stage were associated with specific aspects of eating in early adulthood. Conclusions: There is consistency of eating characteristics across time, but psychosocial characteristics have a more limited predictive power. Future longitudinal studies should begin earlier in childhood if psychosocial factors are to be useful predictors of eating psychopathology in adulthood. abstract_id: PUBMED:34513451 Eating Behaviors in Early Childhood (1-5 Years) and Their Association With Sociodemographic Characteristics in Turkey. Eating problems are common in childhood and are seen in 25%-45% of healthy children. The period following the first year of life up to five years is when eating problems are most frequently reported and also when the child acquires eating habits. Several studies have shown that eating behavior is affected by the family environment, and by parental eating habits and modes of feeding their children. The purpose of this study was to investigate the eating behavior characteristics of children in early childhood, to determine the frequency of problematic eating behaviors, and to compare these with the family's sociodemographic characteristics. The study consisted of 255 children aged 1-5 years, with no chronic disease, presenting to İzmir Tepecik Education and Research Hospital Child Health and Diseases Department general pediatric clinics between February and April, 2017, together with their parents. In the study, parents were asked 39 questions related to their children's demographic characteristics and eating behaviour. Problematic eating behavior was present in 43.9% of the study group. Analysis identified feeding bottle use, feeding with the use of an assistant object, eating lasting longer than half an hour, and the child not feeding itself as the most frequent eating problems. Malnutrition was present in 104 (65.8%) of the children with problematic eating behavior. Examination of unhealthy eating behavior in the light of the study findings showed no significant association between eating behaviors and variables such as type and time of delivery, maternal education level, maternal employment status, maternal age, or the number of children in the family. In terms of nutrition status, Turkey appears to exhibit problems associated with both developed and developing countries.
Public nutrition in Turkey fluctuates significantly depending on the region, the season, socioeconomic levels, and differences between urban and rural settlements. Early onset of eating-related problems can have a negative effect on children's subsequent physical, emotional, and social development. Detecting eating problems in children early and investigating their causes is therefore important for identifying potential solutions. It is important to work with the family to that end, and for children to be followed up by pediatricians at regular intervals. abstract_id: PUBMED:36426567 A framework for conceptualising early intervention for eating disorders. Objective: This paper outlines the evidence base for early intervention for eating disorders; provides a global overview of how early intervention for eating disorders is provided in different regions and settings; and proposes policy, service, clinician and research recommendations to progress early intervention for eating disorders. Method And Results: Currently, access to eating disorder treatment often takes many years or does not occur at all. This is despite neurobiological, clinical and socioeconomic evidence showing that early intervention may improve outcomes and facilitate full sustained recovery from an eating disorder. There is also considerable variation worldwide in how eating disorder care is provided, with marked inequalities in treatment provision. Despite these barriers, there are existing evidence-based approaches to early intervention for eating disorders and progress is being made in scaling these. Conclusions: We propose action steps for the field that will transform eating disorder service provision and facilitate early detection, treatment and recovery for everyone affected by eating disorders, regardless of age, socioeconomic status and personal characteristics. abstract_id: PUBMED:27648856 Do depressive symptoms explain associations between binge eating symptoms and later psychosocial adjustment in young adulthood? Prospective associations between binge eating symptoms (i.e., objective overeating [OOE] and loss of control [LOC] eating) and psychosocial functioning during emerging adulthood were examined using data from the Longitudinal Study of Adolescent to Adult Health. We examined associations between OOE and LOC eating and psychosocial functioning variables with and without adjusting for concurrent depressive symptoms. Analyses revealed that OOE at Wave 3 (ages 18-28) was associated with depressive symptoms, social isolation, weight perception, and perceived attractiveness seven years later at Wave 4 (ages 25-35), and LOC eating at Wave 3 was associated with later depressive symptoms, suicidal thoughts, weight perception, social isolation, number of close friends, and sleep difficulty. Analyses adjusted for depressive symptoms at Wave 3 revealed that OOE at Wave 3 was associated with social isolation and perceived attractiveness at Wave 4, and LOC eating at Wave 3 was associated with later depressive symptoms, isolation, number of close friends, and sleep difficulty. Results show that binge eating symptoms are prospectively associated with psychosocial impairment during emerging adulthood even after controlling for depressive symptoms. Rather than simply screening for depressive symptoms, results highlight the utility of screening for binge eating symptoms as these symptoms are independently associated with psychosocial impairment in emerging adults.
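The Add Health analyses above report associations "with and without adjusting for concurrent depressive symptoms". As a rough illustration of what such covariate adjustment involves, here is a minimal sketch on synthetic data; the variable names and coefficients are hypothetical and the code does not reproduce the published models.

```python
# Rough sketch of covariate adjustment in logistic regression: the odds ratio
# for a binge-eating indicator on later social isolation, with and without
# controlling for baseline depressive symptoms. All data are synthetic and
# every name here is hypothetical, not taken from the Add Health files.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
depress = rng.normal(size=n)                                    # confounder
binge = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * depress - 1.5))))
p_isol = 1 / (1 + np.exp(-(-1.0 + 0.5 * binge + 0.6 * depress)))
isolation = rng.binomial(1, p_isol)

crude = sm.Logit(isolation, sm.add_constant(binge)).fit(disp=False)
adjusted = sm.Logit(isolation, sm.add_constant(
    np.column_stack([binge, depress]))).fit(disp=False)
print("crude OR:   ", np.exp(crude.params[1]).round(2))     # inflated by confounding
print("adjusted OR:", np.exp(adjusted.params[1]).round(2))  # nearer exp(0.5)
```

The gap between the two printed ORs is the point of the exercise: an association that survives adjustment, as binge eating did above, is not explained away by the covariate.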
abstract_id: PUBMED:37802219 Associations of positive childhood experiences with binge-eating disorder characteristics and intuitive eating among college students. Childhood experiences encompass both adverse and positive childhood events. Adverse childhood experiences are positively associated with binge-eating disorder characteristics and inversely associated with intuitive eating in adults. However, to what extent positive childhood experiences (PCEs) are associated with binge-eating disorder characteristics and intuitive eating remains unclear. This study examined the associations of cumulative and individual PCEs with binge-eating disorder characteristics and intuitive eating among college students. Data were collected from 828 college students in 2022 (54.5% female, Mage 20.9 ± 2.6 years). PCEs were assessed using the Benevolent Childhood Experiences scale. Binge-eating disorder characteristics were assessed with the Questionnaire on Eating and Weight Patterns-5. Intuitive eating was assessed with the Intuitive Eating Scale-2. Associations of PCEs with binge-eating disorder characteristics and intuitive eating were examined using modified Poisson regressions and linear regressions. PCEs were cumulatively associated with binge-eating disorder characteristics and intuitive eating (p for trend <.05). College students in the low PCEs category (0-4 PCEs) had 1.37-1.92 times the prevalence of binge-eating disorder characteristics and had an intuitive eating score 3.89 points lower than those in the high PCEs category (9-10 PCEs). Among the individual PCEs, intrapersonal PCE (i.e., feeling comfortable with oneself during childhood) was associated with both a lower prevalence of binge-eating disorder characteristics (aPR = 0.56-0.76) and a higher score of intuitive eating (adjusted β = 1.87, 95% CI = 1.28-2.46) after adjustment for sociodemographic variables. Findings from this study suggest that PCEs may play a role in eating behaviors of college students. Future studies with nationally representative samples should prospectively examine associations of PCEs with binge-eating disorder characteristics and intuitive eating and explore the underlying mechanisms of the associations between PCEs and eating behaviors. abstract_id: PUBMED:25808740 The influence of adolescent eating disorders or disordered eating behaviors on socioeconomic achievement in early adulthood. Objective: Much research documents the etiology and health consequences of adolescent eating disorders (ED), but very little is known about the long-term effects of EDs on the transition to adulthood. This study explores gender differences in the influence of EDs or disordered eating behaviors (DEB) on measures of socioeconomic independence in early adulthood. Method: Using the National Longitudinal Study of Adolescent Health (Add Health), this study compares individuals who self-identified as having been diagnosed with an ED or engaged in DEBs in late adolescence to those without ED or DEB on three measures of socioeconomic independence during early adulthood, including educational attainment, income, and likelihood of owning a home. This study uses multiple regression techniques and attempts to account for early-life conditions and health outcomes associated with EDs and DEBs.
Results: For females, ED or DEB in late adolescence had a statistically significant, negative association with educational attainment (coefficient = -0.20, p = .05), personal income (coefficient = -0.12, p < .05) and odds of owning a home (odds ratio = 0.73, p < .02) in early adulthood. For males, ED or DEB was not associated with socioeconomic attainment. Discussion: Our findings suggest that EDs and DEBs are negatively associated with socioeconomic achievement during early adulthood for females, but not for males. EDs or DEBs during adolescence may set individuals on a different trajectory whereby they do not have the same life chances or opportunities for success in adulthood. These results confirm the public health message that EDs or DEBs have lasting negative consequences for women. abstract_id: PUBMED:37654072 What role do early life experiences play in eating disorders? The impact of parenting style, temperament and early maladaptive schemas. Objective: Given that most eating disorders develop in adolescence and early adulthood, early life experiences are said to play a key role in the aetiology of eating disorders. There are well-documented relationships between early maladaptive schemas and eating disorders, between early maladaptive schemas and temperament, and between temperament and perceived parenting style. The present study aimed to test a hypothesis that perceived parenting style predicts temperament, which predicts early maladaptive schemas, which predict eating disorder symptoms in young people. Method: An online survey measured perceived parenting style, temperament, early maladaptive schemas and eating disorder symptoms in 397 people with disordered eating between the ages of 18 and 29. Path analysis was used to investigate the relationship between these elements. Results: The results supported this hypothesis. Perceived maladaptive parenting ratings for mothers were a stronger predictor of temperament, and only two temperament factors were adequate predictors of early maladaptive schemas. Conclusions: Overall, the present study found preliminary support for a linear relationship where perceived parenting style predicts temperament, which predicts early maladaptive schema levels, which predicts eating disorder symptoms. The present study was the first to propose and test this model; however, further research is required to confirm the nature and extent of this relationship. abstract_id: PUBMED:29101919 Early detection of eating disorders in general practice. Background: General practitioners (GPs) are often the first health professional consulted in regard to eating disorders and their varied presentations. Given the prognostic significance of early detection of, and intervention for, such conditions, it is important that GPs feel confident to do so. Objective: The aim of this article was to heighten awareness of the role of early identification and diagnosis of eating disorders, especially anorexia nervosa and bulimia nervosa, in the primary care setting. The focus will be on their presentations and diagnosis, including changes to the Diagnostic and statistical manual of mental disorders, 5th edition (DSM-5), with a brief overview of management recommendations and admission criteria. Discussion: Eating disorders are complex, potentially life-threatening illnesses with significant medical and psychosocial consequences. Early detection and intervention can significantly contribute to better outcomes, and GPs are ideally placed to effect this.
abstract_id: PUBMED:36238228 Psychosocial predictors of problematic eating in young adults who underwent adolescent bariatric surgery. Introduction: This study examined problematic eating and eating-related psychopathology among young adults who underwent adolescent bariatric surgery including concurrent and prospective associations with psychosocial factors and weight change. Methods: VIEW point is a 6-year follow-up study within a prospective observational study series observing adolescents with severe obesity who had bariatric surgery (n = 139) or who presented to nonsurgical lifestyle modification programs (n = 83). Participants completed height/weight measurements, questionnaires, and diagnostic interviews. Regression analyses compared problematic eating across groups and examined Year 6 correlates (i.e., psychosocial factors and weight change) and baseline predictors (i.e., psychosocial factors) of eating-related psychopathology. Results: Compared to the nonsurgical group, the surgical group reported lower eating-related psychopathology, objective binge eating, and grazing at Year 6. While chewing/spitting out and vomiting for weight/shape-related reasons were very infrequent for the surgical group, self-induced vomiting for other reasons (e.g., avoid plugging) was more common. For the surgical group, lower self-worth, greater internalizing symptoms, and higher weight-related teasing in adolescence predicted increased eating-related psychopathology in young adulthood. Year 6 eating-related psychopathology was concurrently associated with lower percent weight loss for the surgical group and greater percent weight gain for the nonsurgical group. Conclusion: Undergoing adolescent bariatric surgery appears to afford benefit for problematic eating and eating-related psychopathology. Current findings suggest that the clinical intervention related to problematic eating and associated psychosocial concerns may be needed for young adults with obesity, regardless of surgical status. abstract_id: PUBMED:12474194 Child and adolescent predictors for eating disorders in a community population of young adult women. Objective: This study investigated early predictors for developing eating disorders by young adulthood in a community sample of women participating in a 22-year longitudinal study. Method: Twenty-one women were identified at age 27 with lifetime full or partial eating disorders. These women were compared with 47 women with no history of eating disorders on predictive factors from three broad domains. Results: The women with eating disorders had more serious health problems before age 5 and mother-reported anxiety-depression at age 9. At 15, mothers described them as having more behavior problems. Before age 15, families of the eating disorder group had more histories of depression, eating problems and changes in family financial circumstances. Discussion: This study identifies early predictors distinguishing girls who develop eating disorders. Findings point to the need for continued research in the area of early health to comprehensively examine the biologic, behavioral, and environmental risks for eating disorders. Answer: Yes, eating and psychosocial characteristics in early teenage years can be useful predictors of eating characteristics in early adulthood. A longitudinal study found that bulimic attitudes in early teenage years were related to subsequent bulimic features, and restrictive attitudes were more closely related to later purging behaviors. 
However, self-esteem and perfectionism were only weakly linked with subsequent eating. Different perceived family characteristics at the first stage were associated with specific aspects of eating in early adulthood, suggesting that there is consistency of eating characteristics across time, but psychosocial characteristics have a more limited predictive power (PUBMED:9813760). Additionally, other studies have shown that early life experiences, such as parenting style, temperament, and early maladaptive schemas, play a key role in the aetiology of eating disorders, with perceived maladaptive parenting ratings for mothers being a strong predictor of temperament, which in turn predicts early maladaptive schema levels, and ultimately predicts eating disorder symptoms in young people (PUBMED:37654072). Furthermore, early health problems, mother-reported anxiety-depression, behavior problems, and family histories of depression and eating problems have been identified as early predictors distinguishing girls who develop eating disorders (PUBMED:12474194). These findings underscore the importance of early detection and intervention for eating disorders, as general practitioners are often the first health professionals consulted and early intervention may improve outcomes and facilitate full sustained recovery from an eating disorder (PUBMED:29101919). Therefore, while not all psychosocial characteristics may have strong predictive power, certain early eating and psychosocial characteristics can indeed be useful in predicting later eating characteristics and potential eating disorders.
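Several of the abstracts above summarize effects as odds ratios with confidence intervals (for example, odds of owning a home, OR = 0.73). For reference, here is a minimal sketch of how an OR and a Woolf-type 95% CI fall out of a 2x2 table; the counts are invented and tied to no cited study.

```python
# Minimal sketch: odds ratio and Woolf 95% CI from a 2x2 table.
# Counts are invented for illustration only.
import math

a, b = 40, 160   # exposed:   cases, non-cases
c, d = 25, 275   # unexposed: cases, non-cases

or_ = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
lo, hi = (math.exp(math.log(or_) + z * 1.96 * se) for z in (-1, 1))
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The CI is symmetric on the log scale, which is why published intervals like 0.30-0.71 look skewed around the point estimate on the natural scale.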
Instruction: Is CT-guided biopsy of aggressive sacral tumors always able to reach a diagnosis? Abstracts: abstract_id: PUBMED:20416912 Is CT-guided biopsy of aggressive sacral tumors always able to reach a diagnosis? Objectives: To present a series of eight cases of rare sacral tumors that represent a wide spectrum of the aggressive disease involving the sacrum and to discuss the efficacy of CT-guided biopsy in these cases. Material And Methods: We reviewed all cases of sacral tumors biopsied under CT guidance in the last two years. Prior CT and MR images were analyzed before biopsy, and 3 mm CT slices of the lesion were obtained to determine the location of the target area and the path of the needle. All biopsies were performed using 11 G needles and 15 G coaxial needles. Results: Eight sacral lesions were biopsied under CT guidance, and a diagnosis was reached in four. In three of these, the diagnosis (chordoma, plasmocytoma, and metastasis from melanoma) was reached in a single session. In the fourth case, three sessions were necessary to reach the diagnosis of Ewing's sarcoma. For various reasons, the remaining four cases required surgical biopsy to reach the diagnoses of chondrosarcoma, eosinophilic granuloma, and primary bone lymphoma in two patients. The diagnostic efficacy of CT-guided biopsy was 36%. Conclusion: Our results suggest that CT-guided biopsy is less useful in the characterization of primary tumors than in metastases. The diagnosis of primary bone lymphomas is especially complicated. Repeating the procedure after inconclusive results probably will not provide additional information, and it is recommendable to perform surgical biopsy in these cases. abstract_id: PUBMED:28041320 Image-guided resection of aggressive sacral tumors. OBJECTIVE The aim of this study was to identify and discuss operative nuances utilizing image guidance in the surgical management of aggressive sacral tumors. METHODS The authors report on their single-institution, multi-surgeon, retrospective case series involving patients with pathology-proven aggressive sacral tumors treated between 2009 and 2016. They also reviewed the literature to identify articles related to aggressive sacral tumors, their diagnosis, and their surgical treatment and discuss the results together with their own experience. Information, including background, imaging, treatment, and surgical pearls, is organized by tumor type. RESULTS Review of the institutional records identified 6 patients with sacral tumors who underwent surgery between 2009 and 2016. All 6 patients were treated with image-guided surgery using cone-beam CT technology (O-arm). The surgical technique used is described in detail, and 2 illustrative cases are presented. From the literature, the authors compiled information about chordomas, chondrosarcomas, giant cell tumors, and osteosarcomas and organized it by tumor type, providing a detailed discussion of background, imaging, and treatment as well as surgical pearls for each tumor type. CONCLUSIONS Aggressive sacral tumors can be an extremely difficult challenge for both the patient and the treating physician. The selected surgical intervention varies depending on the type of tumor, size, and location. Surgery can have profound risks including neural compression, lumbopelvic instability, and suboptimal oncological resection. Focusing on the operative nuances for each type can help prevent many of these complications. 
Anecdotal evidence is provided that utilization of image-guided surgery to aid in tumor resection at our institution has helped reduce blood loss and the local recurrence rate while preserving function in both malignant and aggressive benign tumors affecting the sacrum. abstract_id: PUBMED:32222551 Diagnostic Challenges in Primary Sacral Tumors and the Yield of Computed Tomography-Guided Needle Biopsy in the Modern Era. Objective: Primary sacral tumors pose unique challenges because of their complex radiographic appearances, diverse pathologic entities, and dramatically different treatment paradigms based on tumor type. Magnetic resonance imaging and computed tomography (CT) can provide valuable information; however, sacral lesions can possess unique radiographic features and pose diagnostic dilemmas. CT-guided percutaneous needle biopsy is a critical component of the diagnostic workup. However, limited data are available on its efficacy for primary sacral tumors. Methods: The data from patients with newly diagnosed primary sacral lesions during a 12-year period at our hospital were analyzed. The preoperative magnetic resonance imaging findings, biopsy results, and pathological data for patients who required surgery were analyzed. Unique cases in which the final pathologic result was unexpected from the preoperative imaging findings have been highlighted. Results: Of 38 patients who underwent percutaneous needle biopsy, diagnostic tissue was obtained on the first attempt for 31 (82%). Five of the remaining 7 obtained diagnostic tissue on the second attempt, yielding 95% diagnosis, with only two requiring open biopsies. In 2 patients with diagnostic tissue on CT-guided biopsy, an open biopsy was still recommended because of the clinical scenario. In both patients, the open biopsy results matched those of the CT-guided biopsy. For the 18 patients who required surgery, we found 100% correlation between the percutaneous needle biopsy findings and the final pathological diagnosis. No biopsy-induced complications or extraspinal tumor seeding occurred. Conclusions: CT-guided biopsy is a safe and effective technique. It represents a critical component of the diagnostic algorithm, given the diverse pathological findings of primary sacral lesions and dramatic differences in treatment. abstract_id: PUBMED:2374755 CT-guided biopsy of metastatic sacral tumors. Between June 8, 1985 and January 29, 1986, the authors performed five CT-guided needle biopsies of suspicious sacral lesions. In each of the five cases, these biopsies allowed the acquisition of specimens suitable for definitive pathologic diagnosis. Three cases of metastatic adenocarcinoma, one case of desmoplastic fibroma, and one case of chronic inflammatory changes were noted. In the case involving chronic inflammatory changes, subsequent open biopsy was performed, confirming the inflammatory changes and absence of neoplasm. The authors conclude that percutaneous CT-guided biopsy of sacral lesion can be performed safely and rapidly, and allows accurate diagnosis and treatment of sacral lesions. abstract_id: PUBMED:32811758 Percutaneous CT-guided lung biopsy for the diagnosis of persistent pulmonary consolidation. Purpose: The primary objective of this study was to determine the diagnostic accuracy of percutaneous computed tomography (CT)-guided biopsy of persistent pulmonary consolidations. The secondary objective was to determine the complication rate and identify factors affecting diagnostic yield. 
Materials And Methods: Two radiologists retrospectively reviewed 98 percutaneous CT-guided biopsies performed in 93 patients (60 men, 33 women; mean age, 62±14.0 (SD) years; range: 18-88 years) with persistent pulmonary consolidations. Final diagnoses were based on surgical outcomes or 12 months of clinical follow-up findings. Biopsy results were compared to the final diagnosis to estimate diagnostic yield. Results: A final diagnosis was obtained for all patients: 51/93 (54.8%) had malignant lesions, 12/93 (12.9%) specific definite benign lesions (including 9 infections, two pneumoconiosis and one lipoid pneumonia) and 30/93 (32.3%) non-specific benign lesions. CT-guided biopsy had an overall diagnostic yield of 60% (59/98) with a correct diagnosis for 50/51 malignant lesions (diagnostic yield of 98% for malignancy) and for 9/47 benign lesions (diagnostic yield of 19% for benign conditions). Major complications occurred in 4/98 (4%) of lung biopsies (four pneumothoraxes requiring chest tube placement). Conclusion: Percutaneous CT-guided biopsy is an alternative to endoscopic or surgical biopsy for the diagnosis of persistent consolidation with a low risk of severe complication. abstract_id: PUBMED:37961959 Role of percutaneous CT-guided biopsy in the characterisation of pleural diseases. Background: Computed tomography (CT)-guided biopsy is emerging as a preferred and safe method for obtaining tissue samples in pleural diseases. Objective: This study aimed to evaluate the diagnostic yield and safety of percutaneous CT-guided biopsy in pleural diseases and to find CT findings predictive of malignant neoplastic pleural disease. Material And Methods: This retrospective study included 77 patients with pleural disease who underwent CT-guided pleural biopsies from July 2013 to May 2020. All procedures were performed with a coaxial semi-automatic biopsy device. Histopathology was performed in all cases, and additional tests such as immunohistochemistry (IHC) or microbiological analysis were carried out depending on clinical suspicion. The correlation of CT findings with final diagnosis was performed by Chi-square, Fisher's exact test and logistic regression analysis. Results: The overall technical success rate of CT-guided pleural biopsy was 100% with a diagnostic yield of 96.1%. No major complications were encountered; minor complications took the form of minimal pneumothorax and chest pain. Malignant pleural conditions constituted the largest group including metastatic adenocarcinoma as the most common (31.2%), followed by metastatic squamous cell carcinoma and mesothelioma. Tubercular pleural involvement was the second most common category (16.9%). The cartridge-based nucleic acid amplification test (CB-NAAT) assay had 90% sensitivity on pleural tissue in tubercular cases. CT features predictive of malignancy were irregular and nodular pleural thickening, mediastinal and diaphragmatic pleural involvement and mediastinal/chest wall invasion. There was a good correlation between higher pleural thickness and malignant outcome. Conclusion: Percutaneous CT-guided biopsy is a safe method for obtaining pleural tissue samples with high diagnostic yield. CT findings provide clues that favour malignant pleural involvement.
Purpose: To compare the diagnostic performance of cone-beam CT (CBCT)-guided and CT fluoroscopy (fluoro-CT)-guided technique for transthoracic needle biopsy (TNB) of lung nodules. Methods: The hospital records of 319 consecutive patients undergoing 324 TNBs of lung nodules in a single radiology unit in 2009-2013 were retrospectively evaluated. The newly introduced CBCT technology was used to biopsy 123 nodules; 201 nodules were biopsied by conventional fluoro-CT-guided technique. We assessed the performance of the two biopsy systems for diagnosis of malignancy and the radiation exposure. Results: Nodules biopsied by CBCT-guided and by fluoro-CT-guided technique had similar characteristics: size, 20 ± 6.5 mm (mean ± standard deviation) vs. 20 ± 6.8 mm (p = 0.845); depth from pleura, 15 ± 15 mm vs. 15 ± 16 mm (p = 0.595); malignant, 60% vs. 66% (p = 0.378). After a learning period, the newly introduced CBCT-guided biopsy system and the conventional fluoro-CT-guided system showed similar sensitivity (95% and 92%), specificity (100% and 100%), accuracy for diagnosis of malignancy (96% and 94%), and delivered non-significantly different median effective doses [11.1 mSv (95 % CI 8.9-16.0) vs. 14.5 mSv (95% CI 9.5-18.1); p = 0.330]. Conclusion: The CBCT-guided and fluoro-CT-guided systems for lung nodule biopsy are similar in terms of diagnostic performance and effective dose, and may be alternatively used to optimize the available technological resources. Key Points: • CBCT-guided and fluoro-CT-guided lung nodule biopsy provided high and similar diagnostic accuracy. • Effective dose from CBCT-guided and fluoro-CT-guided lung nodule biopsy was similar. • To optimize resources, CBCT-guided lung nodule biopsy may be an alternative to fluoro-CT-guided. abstract_id: PUBMED:36187941 Supplementary benefits of CT-guided transthoracic lung aspiration biopsy for core needle biopsy. Objective: This study aimed to investigate the diagnostic efficacy of computed tomography (CT)-guided transthoracic lung core needle biopsy combined with aspiration biopsy and the clinical value of this combined routine microbial detection. Materials And Methods: We retrospectively collected the electronic medical records, CT images, pathology, and other data of 1085 patients with sequential core needle biopsy and aspiration biopsy of the same lung lesion under CT guidance in the First Affiliated Hospital of Wenzhou Medical University from January 2016 to January 2021. GenXpert MTB/RIF detection and BD BACTEC™ Mycobacterium/fungus culture were applied to identifying the microbiological results of these patients. We then compared the positive diagnostic rate, false negative rate, and diagnostic sensitivity rate of three methods including core needle biopsy alone, aspiration biopsy alone, and both core needle biopsy and aspiration biopsy. Results: The pathological results of cutting histopathology and aspiration of cell wax were examined for 1085 patients. The diagnostic rates of cutting and aspiration pathology were 90.1% (978/1085) and 86.3% (937/1085), respectively, with no significant difference (P > 0.05). Considering both cutting and aspiration pathologies, the diagnostic rate was significantly improved, up to 98% (1063/1085) (P < 0.001). A total of 803 malignant lesions were finally diagnosed (803/1085, 74.0%). The false negative rate by cutting pathology was 11.8% (95/803), which was significantly lower than that by aspiration biopsy [31.1% (250/803), P < 0.001]. 
Compared with core needle biopsy alone, the false negative rate of malignant lesions decreased to 5.6% (45/803) (P < 0.05). Next, the aspirates of the malignant lesions highly suspected of corresponding infection were cultured. The results showed that 16 cases (3.1%, 16/511) were infected with Mycobacterium tuberculosis complex, Aspergillus niger, and Acinetobacter baumannii, which required clinical treatment. After the 803 malignant tumors were excluded, 282 cases of benign lesions were diagnosed, including 232 cases of infectious lesions (82.3%, 232/282). The diagnostic rate of Mycobacterium/fungus culture for infectious lesions by aspiration biopsy (47.4%) was significantly higher than that by lung core needle biopsy (22.8%; P < 0.001). The diagnostic rate of aspiration biopsy combined with core needle biopsy was 56% (130/232). The parallel diagnostic rate of aspiration biopsy for GenXpert detection and Mycobacterium/fungal culture combined with core needle biopsy was 64.7% (150/232), which was significantly higher than that of lung core needle biopsy alone (P < 0.001). Finally, pulmonary tuberculosis was diagnosed in 90 cases (38.8%) of infectious lesions. Compared with the sensitivity of core needle biopsy to detect tuberculosis (27.8%, 25/90), the sensitivity of aspiration biopsy for GenXpert detection and Mycobacterium/fungal culture was significantly higher, at 70% (63/90) and 56.7% (51/90), respectively. Although there was no significant difference in the sensitivity of aspiration biopsy for GenXpert and Mycobacterium/fungal culture to detect pulmonary tuberculosis, the sensitivity was significantly increased to 83.3% (P < 0.05) when the two tests were combined. Moreover, when aspiration biopsy was combined with GenXpert detection, Mycobacterium/fungus culture, and core needle biopsy, the sensitivity was as high as 90% (81/90). Conclusion: CT-guided lung aspiration biopsy has a significant supplementary effect on core needle biopsies, which is indispensable in clinical application. Additionally, the combination of aspiration biopsy and core needle biopsy can significantly improve the diagnostic rate of benign and malignant lesions. Aspiration biopsy showed that pulmonary malignant lesions are complicated with pulmonary tuberculosis, aspergillus, and other infections. Finally, the diagnostic ability of lung puncture core needle biopsy and aspiration biopsy combined with routine microbial detection under CT positioning in the diagnosis of pulmonary infectious diseases was significantly improved. abstract_id: PUBMED:27852642 Percutaneous CT-guided sternal biopsy: factors affecting the diagnostic yield. Background Sternal lesions are occasionally seen in clinical practice and their diagnosis can be important, especially for oncologic patients. However, percutaneous computed tomography (CT)-guided biopsy of sternal lesions is rarely performed. Purpose To assess the diagnostic yield of percutaneous CT-guided sternal biopsies and to analyze the factors that affect diagnostic yield. Material and Methods A retrospective review of 34 patients who underwent CT-guided sternal biopsy was carried out at a single institution. Pre-biopsy CT density, location, penetration length of biopsy needle, number of biopsy attempts, angle of needle approach, final diagnosis, and operator experience level were recorded. A biopsy was considered diagnostic if it provided a confident pathologic result. All variables were compared using Chi-square tests.
Results Twenty-two of the 34 (64.7%) biopsy procedures yielded a diagnostic sample and 12 (35.3%) were non-diagnostic. Eight participants in the non-diagnostic group were clinically diagnosed with inflammatory arthritis of the manubriosternal or costosternal joints. Longer penetration distance of the tumor by the biopsy needle showed higher diagnostic yield (P = 0.031). Osteoblastic lesions (P < 0.001), lesions in the manubriosternal joint (P = 0.018) and approaches using more obtuse angles (P = 0.009) were associated with significantly lower diagnostic yields. Malignancy in the final diagnosis led to a higher diagnostic yield than benign lesions (P < 0.001). Conclusion CT-guided percutaneous sternal biopsy has a relatively lower diagnostic yield. However, acute angle of needle approach may help increase biopsy success rate. Osteoblastic lesions and lesions in the manubriosternal joint tend to have lower diagnostic yield. abstract_id: PUBMED:35964027 Improving CT-guided transthoracic biopsy diagnostic yield of lung masses using intraprocedural CT and prior PET/CT fusion imaging. Objective: The purpose of this study was to evaluate the usefulness of intraprocedural CT and prior PET/CT fusion imaging in improving the diagnostic yield of CT-guided transthoracic core-needle biopsy (CNB) in lung masses. Methods: In total, 145 subjects with lung masses suspicious for malignancy underwent image-guided transthoracic CNB. According to imaging modality the subjects were divided into two groups. PET/CT images obtained no more than 14 days before the biopsy were integrated with intraprocedural CT images. The integrated or fused images were then used to plan the puncture sites. The clinical characteristics, diagnostic yield of CNB, diagnostic accuracy rate, procedure-related complications and procedure duration were recorded and compared between the two groups. Final clinical diagnosis was determined by surgical pathology or at least 6 months of follow-up. The diagnostic accuracy of CNB was obtained by comparing with final clinical diagnosis. Results: 145 subjects underwent CNB with adequate samples, including 76 in the fusion imaging group and 69 in the routine group. The overall diagnostic yield and diagnostic accuracy rate were 80.3% (53/66), 82.9% (63/76) for the fusion imaging group, 70.7% (41/58), 75.4% (52/69) for the routine group, respectively. In addition, the diagnostic yield for malignancy in the fusion imaging group (98.1%, 52/53) was higher than that in the routine group (81.3%, 39/48). No serious procedure-related complications occurred in either group. Conclusion: CNB with prior PET/CT fusion imaging is particularly helpful in improving the diagnostic yield and accuracy rate of biopsy in lung masses, especially in heterogeneous ones, thus providing greater potential benefit for patients. Answer: No, CT-guided biopsy of aggressive sacral tumors is not always able to reach a diagnosis. In a series of eight cases of rare sacral tumors, CT-guided biopsy reached a diagnosis in only four cases. In three of these cases, the diagnosis was reached in a single session, while in the fourth case, three sessions were necessary. For the remaining four cases, surgical biopsy was required to reach the diagnoses. The diagnostic efficacy of CT-guided biopsy in this series was reported to be 36% (PUBMED:20416912). This suggests that CT-guided biopsy may be less useful in characterizing primary tumors than metastases, and the diagnosis of primary bone lymphomas is especially complicated.
It is recommended to perform surgical biopsy in cases where CT-guided biopsy results are inconclusive (PUBMED:20416912). Additionally, another study reported that of 38 patients who underwent percutaneous needle biopsy for primary sacral lesions, diagnostic tissue was obtained on the first attempt for 31 patients (82%). Five of the remaining seven obtained diagnostic tissue on the second attempt, yielding a 95% diagnosis rate, with only two requiring open biopsies. This indicates that while CT-guided biopsy is a critical component of the diagnostic workup for primary sacral tumors, it may not always be successful on the first attempt, and a small percentage of cases may still require open biopsy (PUBMED:32222551). Therefore, while CT-guided biopsy is a valuable diagnostic tool, it does not guarantee a diagnosis in all cases of aggressive sacral tumors, and alternative methods such as surgical biopsy may be necessary when CT-guided biopsy is inconclusive.
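The percentages in this answer are simple proportions. A small sketch of the arithmetic follows: the first two calls reproduce the first- and second-attempt yields stated for PUBMED:32222551, while the sensitivity/specificity counts further down are invented for illustration.

```python
# Diagnostic yield is diagnostic samples over procedures; sensitivity and
# specificity compare biopsy calls against the final (reference) diagnosis.
def diagnostic_yield(diagnostic: int, total: int) -> float:
    return diagnostic / total

print(f"first attempt: {diagnostic_yield(31, 38):.0%}")        # 82% (PUBMED:32222551)
print(f"within two attempts: {diagnostic_yield(36, 38):.0%}")  # 95%

def sensitivity_specificity(tp, fn, tn, fp):
    """Biopsy result versus final diagnosis; counts below are hypothetical."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=48, fn=2, tn=30, fp=0)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```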
Instruction: Can dentures improve the quality of life of those who have experienced considerable tooth loss? Abstracts: abstract_id: PUBMED:11525225 Can dentures improve the quality of life of those who have experienced considerable tooth loss? Objectives: The aim of this study was to identify variations in the impact of oral health on quality of life (OHQOL) among UK residents in relation to self-reported number of teeth possessed and denture status. In addition, to determine whether recourse to a removable prosthesis for those who claimed that they had experienced considerable tooth loss (having <20 teeth) was associated with quality of life. Methods: The vehicle for this was the Office for National Statistics Omnibus survey in Great Britain. A random probability sample of 2667 addresses was selected in a multistage sampling process. Participants were interviewed about their oral health status. The impact of oral health on quality of life was measured utilising the OHQoL-UK(W) measure. Results: The response rate was 68%. Variations in OHQoL-UK(W) scores were apparent in relation to self-reported number of teeth possessed (P < 0.001) and denture status (P < 0.001). Moreover, disparities in OHQOL were apparent among those who experienced considerable tooth loss who didn't have recourse to a denture (P < 0.001). In regression analysis, those who claimed that they had <20 natural teeth but had no recourse to a denture were less than half as likely to enjoy enhanced oral health related quality of life compared to others in the population (OR = 0.46, 95% CI 0.30, 0.71), controlling for socio-demographic factors. Conclusions: Experience of considerable tooth loss without recourse to a removable dental prosthesis is an important predictor of oral health related quality of life, as captured by OHQoL-UK(W), and associated with reduced quality of life. abstract_id: PUBMED:25571803 Do dentures improve quality of life? N/A abstract_id: PUBMED:38247018 Exploring experiences of living with removable dentures-A scoping review of qualitative literature. Objective: Examine the literature on the experiences of living with removable dentures (complete or partial) to identify any gaps and provide a map for future research. Background: Increasing proportions of society are living partially dentate with some form of restoration, including removable dentures. Previous studies have reported on the location, materials and usage of these prostheses, along with effects on oral-health-related quality of life (OHRQoL). However, less is known about experiences with removable dentures from a patient-centred perspective. Methods: A scoping review of the qualitative literature was undertaken using the framework of Arksey and O'Malley, updated by Levac et al. Literature searches were carried out using Medline and Web of Science. Papers were screened by title and abstract using inclusion and exclusion criteria. Remaining papers were read in full and excluded if they did not meet the required criteria. Nine papers were included in the final review.
Findings: Key themes from these papers were: impact of tooth loss and living without teeth, and its impacts in relation to social position, appearance, confidence and function (chewing and speaking); social norms and tooth loss, including attitudes to tooth retention and treatment costs, and changes in intergenerational norms towards dentures; expectations of treatment, including patients being more involved in decision making, viewing the denture as a "gift" and dentures helping to achieve "an ideal"; living with a removable denture (complete or partial), including patient preparedness for a denture, adaptation and impacts on activities and participation; and the dentist-patient relationship, including issues with information and communication, and differing priorities between patients and dentists. Conclusion: Little qualitative research exists on experiences of living with a removable denture. Existing literature demonstrates the importance of dispersed activities in differing social, spatial and temporal contexts when wearing removable dentures. Focusing on processes of positive adaptation to dentures and OHRQoL, rather than deficits, is also required to fully understand patients' experiences. Additionally, more complex technological advances may not always be in the best interest of every patient. abstract_id: PUBMED:38376436 A New Scale for Rating Oral Health-Related Quality of Life in Denture Wearers. Purpose: When carrying out prosthetic rehabilitation of edentulous and partially edentulous patients, great attention is paid to the personal attitude of the patients, their satisfaction with oral health and psychosocial interaction due to tooth loss, as well as the treatment of the resulting disorders. This attention has led to the development of various instruments for examining the quality of life related to oral health. The aim of this study was to develop and validate a reliable instrument in the Serbian language suitable for measuring oral health-related quality of life in patients who have been rehabilitated with complete or partial dentures. Materials and Methods: The study was unicentric and cross-sectional, and assessed the reliability and validity of a newly developed instrument for measuring the oral health-related quality of life in denture wearers (OHRQoL-DW). It was conducted on a sample of 200 adults from Serbia, wearers of various types of dentures, with a mean age of 66.9 ± 10.3 years and male/female ratio of 86/114 (43%/57%). Results: The definitive version of the OHRQoL-DW scale with 28 items showed very good reliability, with Cronbach's alpha = 0.938. Good temporal stability of the questionnaire was demonstrated, and satisfactory results were obtained for divergent and convergent validity tests. Exploratory factorial analysis revealed four domains of oral health-related quality of life in denture wearers: physical, psychosocial, environmental and aesthetic. Conclusions: The OHRQoL-DW scale is a reliable and valid generic instrument for measuring the oral health-related quality of life in patients wearing dentures, which is one of the most important outcomes of oral health in prosthetic treatment. abstract_id: PUBMED:17024983 Assessment of the oral health related quality of life of patients awaiting initial prosthetic assessment. The aim of the study was to quantify the impact of oral health problems on the quality of life of patients who were experiencing difficulties with dentures.
Two hundred and ten patients who were awaiting an initial specialist assessment appointment were asked to complete the short form Oral Health Impact Profile (OHIP-14). One hundred and sixty-three (78%) questionnaires were completed correctly and were analyzed. Overall, the prevalence of impacts was high in all sub-scales, indicating that denture related problems had a negative impact on quality of life of both partially dentate and elderly patients. abstract_id: PUBMED:32940586 Removable Partial Dentures for Older Adults. Improvements in oral health including increased retention of natural teeth have given rise to a partially dentate older population. Replacement of missing natural teeth is important to improve function, aesthetics and quality of life for this patient group. A variety of options are available to replace missing teeth in partially dentate older adults, including fixed, removable and implant retained prostheses. This article will discuss the provision of removable partial dentures including treatment planning and denture design. When planning removable partial dentures, careful attention must be paid to stabilising the patient prior to delivering any prosthesis. Partial dentures should be designed to minimise the potential for plaque accumulation with carefully designed metal based frameworks. Acrylic resin can also be utilised with attention to detail to minimise the risk of damage to delicate supporting tissues. Removable dentures have the advantage that they can be readily added to in the event of further tooth loss which may be particularly relevant for older adults. Partial dentures which optimise support, retention and stability can function very successfully and significantly improve patients' oral health related quality of life. abstract_id: PUBMED:26040175 Oral health-related quality of life of edentulous patients after complete dentures relining. Background/aim: Tooth loss affects oral health-related life quality. More than a third of edentulous patients are not fully satisfied with their complete dentures and mainly complain of insufficient stability, retention, and pain during mastication. Solving the problem may include relining by materials that are based on silicone or acrylic. The aim of this study was to determine the level of patients' satisfaction before and after relining upper dentures with soft and rigid liners. Methods: The patients (n=24) were divided into two study groups. Maxillary denture relining of the first group of patients was performed with hard acrylic based resins while in the second group of patients complete denture was relined with a silicone-based soft liner. They were asked the questions from the specifically adapted Oral Health Impact Profile Questionnaire for edentulous patients before and three months after relining dentures. Results: After relining the patients showed a higher degree of satisfaction with their dentures in all the tested domains (masticatory function, psychological discomfort, social disability and retention and hygiene). The patients with soft denture relines were more satisfied. Conclusion: Relining of maxillary complete dentures significantly positively impacts the quality of life of patients in all the tested domains (masticatory function, psychological discomfort, social disability, pain and oral hygiene). Better results were achieved using a silicone-based soft liner, which recommends it as the material of choice for relining dentures.
abstract_id: PUBMED:16161879 Tooth loss, denture wearing and oral health-related quality of life in elderly Chinese people. Objectives: To investigate the relationship between tooth loss, denture wearing and oral health-related quality of life (OHQoL) among community-dwelling elderly people in Hong Kong. Methods: A questionnaire study of elderly people aged 60-80 years who were recruited at neighborhood social centres for the elderly. The Chinese version of the General Oral Health Assessment Index (GOHAI) was used, and information about natural tooth number and denture wearing was obtained. Results: 233 elderly subjects were recruited and interviewed. Around 20% of the partially dentate subjects had their last tooth loss within the previous year and a quarter within the last five years. Significantly fewer edentulous subjects had their last tooth loss within the previous year (3%) and within the last five years (12%, p < 0.001). Twenty-two percent of the subjects had difficulty in accepting tooth loss. More edentulous subjects (69%) were satisfied with their dentures than partially dentate denture wearers (37%, p < 0.001). Edentulous elderly subjects had a higher mean GOHAI score (53.0) than partially dentate denture wearers (49.1, p < 0.001). Results from a multiple factor ANOVA revealed that elderly subjects who had loose teeth, difficulty in accepting tooth loss and were not satisfied with their removable dentures had a lower mean GOHAI score. Conclusions: In general, tooth loss and denture wearing did not have a major impact on OHQoL in elderly Chinese people. However, partially dentate denture wearers experienced a greater adverse impact on OHQoL than edentulous subjects, most probably due to less satisfaction with their dentures and discomfort associated with loose teeth. abstract_id: PUBMED:25327636 Rehabilitation of oral function with removable dentures--still an option? Tooth loss is a chronic disability, which makes it difficult for patients to perform essential tasks such as eating, communicating with others and socialising. Numerous studies have revealed and addressed the recent rapid development of various prosthodontic materials and treatment patterns. Oral rehabilitation with dentures exerts a great influence on people's daily life and has tremendous social implications. Dentures help to restore an individual's sense of normality and ability to interact normally. With the introduction and progression of implant technology, many troublesome issues can now be solved simply. Nowadays, more and more attention has been paid to new trends (implant-assisted restoration and fixed prostheses). However, removable dentures may be a more appropriate solution under some circumstances, such as if they are a patient's preferred option, if remaining oral tissues are in poor condition, or if they provide the most cost-effective form of treatment. Thus, removable dentures are still an option for the rehabilitation of oral function. The purpose of this article was to retrospectively review the applications of removable dentures and to emphasise their indispensable status. abstract_id: PUBMED:15473385 Reference values in oral health-related quality of life for the abbreviated version of the Oral Health Impact Profile. Questionnaires measuring oral health-related quality of life (OHRQoL) need reference values for their interpretation.
Population-based norms for a questionnaire allow a comparison of the level of OHRQoL observed for an individual or a group of subjects with the level of that construct in the general population. The aims of this study were to establish normative values for abbreviated instruments of the German version of the Oral Health Impact Profile (OHIP-G). Data of the long OHIP-G (53 items) were collected in a national survey (N=2050, age: 16-79 years). Three short instruments with 5 (OHIP-G5), 14 (OHIP-G14) and 21 items (OHIP-G21) were derived from the long OHIP. Norms were computed for the OHIP-G summary score (simple sum of all item responses). The OHIP-G summary score distribution was described by empirical cumulative distribution functions. Because denture usage was strongly associated with OHIP-G summary scores, percentile-based norms were stratified by this variable. Among subjects without dentures, 50% of the population had OHIP-G5 or OHIP-G14 scores of 0 and OHIP-G21 scores ≤ 2. Summary scores for the three abbreviated OHIPs were ≤ 2, ≤ 4, and ≤ 7 for subjects with removable dentures and ≤ 3, ≤ 6, and ≤ 8 for subjects with complete dentures. Among subjects without dentures, 90% of the population had ≤ 4 OHIP-G5, ≤ 11 OHIP-G14, and ≤ 16 OHIP-G21 points. For subjects with removable dentures, the figures were ≤ 7, ≤ 17, and ≤ 26 points for OHIP-G5, OHIP-G14 and OHIP-G21, respectively; and for subjects with complete dentures, ≤ 10, ≤ 25, and ≤ 38 points, respectively. The presented norms will provide a frame of reference in future cross-sectional studies in the general population or samples with specific oral conditions when OHRQoL is measured by OHIP-G5, OHIP-G14 or OHIP-G21. Answer: Yes, dentures can improve the quality of life for individuals who have experienced considerable tooth loss. A study identified variations in the impact of oral health on quality of life (OHQoL) among UK residents and found that those with fewer than 20 natural teeth but without recourse to a denture were less likely to enjoy enhanced oral health-related quality of life compared to others in the population, controlling for socio-demographic factors (PUBMED:11525225). This suggests that the use of dentures can be an important predictor of oral health-related quality of life for those with significant tooth loss. Additionally, a scoping review of qualitative literature on the experiences of living with removable dentures highlighted the impact of tooth loss on aspects such as social position, appearance, confidence, and function, and indicated that living with a removable denture can have significant effects on activities and participation (PUBMED:38247018). This review also emphasized the importance of focusing on positive adaptation to dentures and oral health-related quality of life (OHRQoL) to fully understand patients' experiences. Furthermore, a study on the development and validation of a new scale for rating oral health-related quality of life in denture wearers found that dentures can significantly impact the physical, psychosocial, environmental, and aesthetic domains of oral health-related quality of life (PUBMED:38376436). This supports the notion that dentures can play a crucial role in improving the quality of life for those who have lost their natural teeth.
In summary, the evidence from these studies indicates that dentures can indeed improve the quality of life for individuals who have experienced considerable tooth loss by enhancing their oral health-related quality of life and positively impacting various aspects of their daily lives.
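The OHIP-G norms quoted above (PUBMED:15473385) are percentile-based cut-offs taken from empirical cumulative distribution functions, stratified by denture status. A small sketch of that procedure on hypothetical summary scores (the group sizes and Poisson score distributions below are invented, not the German survey data):

```python
import numpy as np

def percentile_norms(scores, groups, percentiles=(50, 90)):
    """Percentile-based norms of a summary score, stratified by group."""
    norms = {}
    for g in sorted(set(groups)):
        vals = np.asarray([s for s, gg in zip(scores, groups) if gg == g])
        norms[g] = {p: float(np.percentile(vals, p)) for p in percentiles}
    return norms

# Hypothetical OHIP-G14 summary scores (0-56); higher means worse OHRQoL.
rng = np.random.default_rng(1)
scores = np.concatenate([
    rng.poisson(2, 300),   # subjects without dentures
    rng.poisson(8, 150),   # removable dentures
    rng.poisson(12, 100),  # complete dentures
])
groups = ["none"] * 300 + ["removable"] * 150 + ["complete"] * 100
print(percentile_norms(scores, groups))
```

Stratifying before taking percentiles is why the published 50% and 90% cut-offs differ between subjects with no, removable, and complete dentures.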
Instruction: Chronic inflammatory changes in the anal transition zone after stapled ileal pouch-anal anastomosis: is mucosectomy a superior alternative? Abstracts: abstract_id: PUBMED:36591401 Dysplasia at the Anal Transition Zone after IPAA. Ileal pouch-anal anastomosis is a popular way of reconstructing the gastrointestinal tract after total proctocolectomy for ulcerative colitis. The pouch-anal anastomosis is usually stapled, which requires the preservation of a small amount of upper anal canal and lower rectum. This includes the anal transition zone (ATZ), a surprisingly small and irregular ring of tissue at and just above the dentate line. The ATZ and rectal cuff are prone to inflammation and neoplasia, particularly in patients who had a colon cancer or dysplasia at the time their large bowel was removed. This high-risk group needs ATZ/rectal cuff surveillance before and after the surgery. Those without colorectal dysplasia preoperatively are at low risk of developing ATZ/rectal cuff dysplasia postoperatively, and follow-up can be more relaxed. Treatment of ATZ dysplasia is difficult and may mean mucosectomy, pouch advancement, pouch removal, or a redo pelvic pouch. abstract_id: PUBMED:18847636 Chronic inflammatory changes in the anal transition zone after stapled ileal pouch-anal anastomosis: is mucosectomy a superior alternative? Background: Chronic inflammation (CI) is commonly found in the anal transition zone (ATZ) after stapled ileal pouch-anal anastomosis (IPAA) for ulcerative colitis (UC). Yet, its impact on defecatory function and the need for a complete mucosectomy have not been completely elucidated. This study aims to evaluate the long-term functional outcomes of patients with CI of the ATZ after stapled IPAA in comparison with mucosectomy patients. Methods: Between June 1987 and November 2007, 66 UC patients were found to have CI of the ATZ after stapled IPAA and were compared with 228 UC patients who underwent mucosectomy with hand-sewn (HS) IPAA. Patients were mailed a questionnaire to assess defecatory function and quality of life. Data were analyzed prospectively. Results: No differences were observed in age, sex, number, or consistency of bowel movements (BMs) between groups. Complete continence was reported by 90.3% of CI and 66.8% of HS patients (P < .001). The CI group also had a significantly lower rate of major incontinence (P < .001). Functional parameters in favor of the CI group included the ability to discriminate between gas and stool (P < .001), the use of protective pads during both the day and the night (P < .001), dietary modifications in the timing of meals (P < .001) and type of food (P = .005), and the presence of perianal rash (P = .019). In the CI group, more patients rated their quality of life as improved from before the operation (P < .001). Conclusions: Preservation of the ATZ, even in the presence of persistent inflammation, confers improved continence, better functional outcomes, and superior quality of life. abstract_id: PUBMED:10223755 "Cuffitis" and inflammatory changes in the columnar cuff, anal transitional zone, and ileal reservoir after stapled pouch-anal anastomosis. Purpose: During the past eight to ten years most surgeons have adopted the double-stapled technique to accomplish the pouch-anal anastomosis in restorative proctocolectomy for ulcerative colitis. Little attention has been focused on the functional implications of retaining a segment of diseased columnar mucosa in the upper anal canal.
The aim of this study was to investigate clinically significant inflammation in the columnar cuff. Method: In all, 113 patients were studied and 715 biopsies were performed during a 2.5-year period. Biopsy specimens were taken from two or three sites, including the columnar cuff, ileal pouch, and anal transitional zone. Acute and chronic inflammation was scored for biopsy specimens from all three sites and compared with endoscopic assessment and pouch function. Results: In the columnar cuff, acute histologic inflammation was found in 13 percent of patients, and in 9 percent this was symptomatic during follow-up and was accompanied by evidence of endoscopic inflammation. Most patients had mild inflammation in the cuff that persisted over time. Inflammation in the pouch, pouch frequency, and anastomotic height were not related to columnar cuff inflammation. Conclusions: Cuffitis is a cause of pouch dysfunction after a double-stapled restorative proctocolectomy. We propose a triad of diagnostic criteria, including symptoms and endoscopic and histologic inflammation. abstract_id: PUBMED:32972106 Ileal Pouch-Anal Anastomosis for Ulcerative Colitis: An Australian Institution's Experience. Purpose: We report outcomes and evaluate patient factors and the impact of surgical evolution on outcomes in consecutive ulcerative colitis patients who had restorative proctocolectomy with ileal pouch-anal anastomosis (IPAA) at an Australian institution over 26 years. Methods: Data including clinical characteristics, preoperative medical therapy, and surgical outcomes were collected. We divided eligible patients into 3 period arms (period 1, 1990 to 1999; period 2, 2000 to 2009; period 3, 2010 to 2016). Outcomes of interest were IPAA leak and pouch failure. Results: A total of 212 patients were included. Median follow-up was 50 (interquartile range, 17 to 120) months. Rates of early and late complications were 34.9% and 52.0%, respectively. Early complications included wound infection (9.4%), pelvic sepsis (8.0%), and small bowel obstruction (6.6%) while late complications included small bowel obstruction (18.9%), anal stenosis (16.8%), and pouch fistula (13.3%). Overall, IPAA leak rate was 6.1% and pouch failure rate was 4.8%. Eighty-three patients (42.3%) experienced pouchitis. Over time, we observed an increase in patient exposure to thiopurine (P=0.0025), cyclosporin (P=0.0002), and anti-tumor necrosis factor (P<0.00001) coupled with a shift to laparoscopic technique (P<0.00001), stapled IPAA (P<0.00001), J pouch configuration (P<0.00001), a modified 2-stage procedure (P=0.00012), and a decline in defunctioning ileostomy rate at time of IPAA (P=0.00002). Apart from pouchitis, there was no significant difference in surgical and chronic inflammatory pouch outcomes with time. Conclusion: Despite greater patient exposure to immunomodulatory and biologic therapy before surgery coupled with a significant change in surgical techniques, surgical and chronic inflammatory pouch outcome rates have remained stable. abstract_id: PUBMED:34982288 Two-time perforation of the ileal J-pouch 6 and 18 years after restorative proctocolectomy and ileal pouch-anal anastomosis for familial adenomatous polyposis: a case report. Background: Perforation of the ileal J-pouch after restorative proctocolectomy and ileal pouch-anal anastomosis is extremely rare. There has been no report of perforation of the ileal J-pouch occurring twice over several years.
We report the first case of perforation at 6 and 18 years following restorative proctocolectomy. Case Presentation: The patient was a 52-year-old man who underwent a two-stage restorative proctocolectomy with a hand-sewn ileal J-pouch anal anastomosis due to familial adenomatous polyposis and sigmoid colon cancer at 34 years of age. At the age of 40, he underwent ileal pouch resection at its blind end, abdominal drainage, and anastomotic dilatation. The patient had a perforation of the blind end of the ileal J-pouch from increased intraluminal pressure, with anastomotic stricture and pervasive peritonitis. The patient had no symptoms for a few years; however, 18 years after the initial surgery and 12 years after the first perforation, the patient presented with severe abdominal pain. Computed tomography demonstrated pneumoperitoneum; accordingly, laparotomy was performed. Upon opening the abdominal cavity, contaminated ascites and inflammatory changes were documented involving the ileum. A 2-mm perforation involving the blind end of the ileal J-pouch was also observed and repaired, followed by temporary loop ileostomy creation. Postoperative endoscopy revealed an ulcer in the ileal J-pouch and a stricture located directly at the anastomosis. Conclusions: The blind end of the J-pouch repeatedly perforated over the years due to recurrent anastomotic stricture. Regular surveillance is, therefore, considered necessary for the release of stricture, maintenance of anastomotic patency, and prevention of ileal J-pouch perforation. abstract_id: PUBMED:19230038 Anal transition zone in the surgical management of ulcerative colitis. Preservation of the anal transition zone has long been a significant source of controversy in the surgical management of ulcerative colitis. The two techniques for restorative proctocolectomy and ileal pouch anal anastomosis (RPC IPAA) in common practice are a stapled anastomosis and a handsewn anastomosis; these techniques differ in the amount of remaining rectal mucosa and therefore the presence of the anal transition zone following surgery. Each technique has advantages and disadvantages in long-term functional outcomes, operative and postoperative complications, and risk of neoplasia. Therefore, we propose a selective approach to performing a stapled RPC IPAA based on the presence of dysplasia in the preoperative endoscopic evaluation. abstract_id: PUBMED:36755253 Impact of the body mass index on the retention of the anorectal mucosa after double-stapled ileal pouch-anal anastomosis for ulcerative colitis. Background: Double-stapled ileal pouch-anal anastomosis (DS-IPAA) is easy to construct and has a good functional outcome in patients with ulcerative colitis (UC). However, retention of the anorectal mucosa may lead to a subsequent risk of inflammation and neoplasia. This study aimed to identify factors associated with the retention of a large amount of anorectal mucosa after DS-IPAA. Methods: The medical records of 163 patients who had undergone one-stage total proctocolectomy and DS-IPAA for UC between 2007 and 2020 were retrospectively reviewed. The patients were divided into two groups according to the length of the retained mucosa. The high anastomosis group was defined as having a retained mucosal length of ≥ 30 mm in the anterior or posterior wall. Clinical factors were compared between the high and low anastomosis groups. Results: The high anastomosis group showed a significantly higher body mass index (BMI) (high vs. low: 23.2 vs. 
19.0), longer operation time (304 vs. 263) and greater blood loss (357 vs. 240). In the multivariate analysis, high BMI was the only factor significantly associated with high anastomosis (odds ratio 1.32). There was a positive correlation between BMI and the length of the retained mucosa. Conclusions: In DS-IPAA, BMI showed the strongest association with the retention of a large amount of the anorectal mucosa. In high-BMI patients, although the risk of being unable to complete the anastomosis is lower than with IPAA with mucosectomy, the possible retention of a large amount of mucosa should be considered. abstract_id: PUBMED:31587035 Inflammatory Pouch Conditions Are Common After Ileal Pouch Anal Anastomosis in Ulcerative Colitis Patients. Background: Total proctocolectomy (TPC) with ileal pouch anal anastomosis (IPAA) is the gold standard surgery for ulcerative colitis (UC) patients with medically refractory disease. The aim of this study was to report the rates and risk factors of inflammatory pouch conditions. Methods: This was a retrospective review of UC or IBD unspecified (IBDU) patients who underwent TPC with IPAA for refractory disease or dysplasia between 2008 and 2017. Pouchoscopy data were used to calculate rates of inflammatory pouch conditions. Factors associated with outcomes in univariable analysis were investigated in multivariable analysis. Results: Of the 621 patients more than 18 years of age who underwent TPC with IPAA between January 2008 and December 2017, pouchoscopy data were available for 386 patients during a median follow-up period of 4 years. Acute pouchitis occurred in 205 patients (53%), 60 of whom (30%) progressed to chronic pouchitis. Cuffitis and Crohn's disease-like condition (CDLC) of the pouch occurred in 119 (30%) patients and 46 (12%) patients, respectively. In multivariable analysis, female sex was associated with a decreased risk of acute pouchitis, and pre-operative steroid use and medically refractory disease were associated with an increased risk; IBDU was associated with chronic pouchitis; rectal cuff length ≥2 cm and medically refractory disease were associated with cuffitis; age 45-54 at colectomy was associated with CDLC. Rates of pouch failure were similar in chronic pouchitis and CDLC patients treated with biologics and those who were not. Conclusions: Inflammatory pouch conditions are common. Biologic use for chronic pouchitis and CDLC does not impact the rate of pouch failure. abstract_id: PUBMED:36777759 Adjunctive Pharmacotherapy Use in Patients With Ileal Pouch-Anal Anastomosis (IPAA). Background: This study evaluated the prevalence of adjunctive pharmacotherapy use among ileal pouch-anal anastomosis (IPAA) patients. Methods: The IBD Partners database was queried to compare IPAA patients with and without pouch-related symptoms (PRS). Within the cohort of patients with PRS, patient-reported outcomes were compared among opioid, nonsteroidal anti-inflammatory drug (NSAID), and probiotic users. Results: There were no differences in patient-reported outcomes based on NSAID or probiotic usage. Opioid users reported increased bowel frequency, urgency, poor general well-being, abdominal pain, and depression (P < 0.05 for all variables). Conclusions: In IPAA patients with PRS, opioid use, but not NSAIDs or probiotics, was associated with a higher burden of PRS. abstract_id: PUBMED:36853391 The anatomy and physiology of the ileal pouch and its relevance to pouch dysfunction.
For the last 40 years, the ileal pouch-anal anastomosis has been used in patients with ulcerative colitis, familial adenomatous polyposis, and occasionally severe constipation to reconstruct the gastrointestinal tract after proctocolectomy. Although the procedure has generally been successful in helping patients avoid an ileostomy, it has come with its own set of problems. These include complications of the surgery such as fistulas and bowel obstruction, persistent inflammation of the pouch known as pouchitis, and functional problems related to the lack of expulsive peristalsis in the pouch. It is this last group of problems that is exacerbated by a poor diet, ill-advised anti-diarrheal medications, anal stenosis and pouch twists. As a consequence, patients with pouch problems are frequently referred for radiologic evaluation, with pouchography, defecation studies, and small bowel imaging commonly requested. In this review, the basic anatomy and physiology of the ileal pouch are discussed to provide a logical baseline against which to measure the anatomy of pouches and its relationship to the symptoms of pouch dysfunction. Answer: The question of whether mucosectomy is a superior alternative to address chronic inflammatory changes in the anal transition zone (ATZ) after stapled ileal pouch-anal anastomosis (IPAA) is a matter of ongoing debate. Chronic inflammation (CI) is commonly found in the ATZ after stapled IPAA for ulcerative colitis (UC), but its impact on defecatory function and the need for complete mucosectomy have not been fully elucidated (PUBMED:18847636). A study comparing 66 UC patients with CI of the ATZ after stapled IPAA with 228 UC patients who underwent mucosectomy with hand-sewn (HS) IPAA found that the CI group had better functional outcomes, including a higher rate of complete continence (90.3% vs 66.8%), better ability to discriminate between gas and stool, and fewer dietary modifications. More patients in the CI group also rated their quality of life as improved relative to before the operation. These findings suggest that preservation of the ATZ, even in the presence of persistent inflammation, may confer improved continence and quality of life compared to mucosectomy (PUBMED:18847636). However, it is important to note that the ATZ and rectal cuff are prone to inflammation and neoplasia, particularly in patients who had colon cancer or dysplasia at the time their large bowel was removed. High-risk patients need ATZ/rectal cuff surveillance before and after surgery, and treatment of ATZ dysplasia can be challenging, potentially requiring mucosectomy, pouch advancement, pouch removal, or a redo pelvic pouch (PUBMED:36591401).
Instruction: Do residents receive the same OR guidance as surgeons report? Abstracts: abstract_id: PUBMED:24931416 Do residents receive the same OR guidance as surgeons report? Difference between residents' and surgeons' perceptions of OR guidance. Purpose: Operating room (OR) guidance is important for surgical residents' performance and, ultimately, for the development of independence and autonomy. This study explores the differences in surgical residents' and attending surgeons' perceptions of OR guidance in prerecorded surgical cases. Methods: A total of 9 attending surgeons and 8 surgical residents observed 8 prerecorded surgical cases and were asked to identify both the presence and the type of attending surgeons' OR guidance. Each recorded case was observed by 2 attending surgeons and 1 resident. A previously developed taxonomy for types of OR guidance was applied to analyze the data to explore the difference. Agreement by both attending surgeons on the presence and the type of OR guidance served as the concordant guidance behaviors to which the responses of the residents were compared. Results: Overall, 116 OR guidance events were identified. Attending surgeons agreed on the presence of guidance in 80 of 116 (69.8%) events and consistently identified the type of OR guidance in 91.4% (73/80, Cohen κ = 0.874) of them. However, surgical residents only agreed with attending surgeons on the presence of guidance in 61.25% (49/80) of the events. In addition, there was significant disagreement (Cohen κ = 0.319) between surgical residents and attending surgeons in the type of OR guidance; the residents only identified 54.8% (40/73) of concordant guidance behaviors in the same guidance category as both the surgeons. Among the types of OR guidance, residents and attending surgeons were most likely to agree on the teaching guidance (66.67%) and least likely to agree on the assisting guidance (36.84%). Conclusions: Surgical residents and attending surgeons have different perceptions of both the presence and the type of OR guidance. This difference in perception of OR guidance has important implications for the efficiency of training surgical residents in the OR, and, ultimately on residents' development of independence and autonomy. abstract_id: PUBMED:33117045 Assessment of Occupational Stress Among Oral and Maxillofacial Surgeons and Residents in Saudi Arabia: A Cross-Sectional Study. Purpose: Considering the important role of oral and maxillofacial surgeons in healthcare services and the stressful nature of their job, this study aimed to assess the occupational stress among oral and maxillofacial surgeons and residents in Saudi Arabia. Materials And Methods: Cross-sectional questionnaires were developed, and they included the perceived stress scale and questions about potential sources of stress. A sample size of 180 was determined using a 0.05 level of significance and a precision of ±8%. The survey was distributed using a consecutive non-random sampling method to all oral and maxillofacial surgeons and residents in all regions in Saudi Arabia from May to December 2019. Results: One hundred and seventy-two responses were received. Males were the predominant gender. The perceived stress scale revealed a moderate stress level among surgeons and residents. However, residents had a significantly higher score (P = 0.005). Increased working days were significantly associated with higher stress levels (P = 0.006). 
Long on-call periods were significantly and positively correlated with increased stress levels among residents since their work schedule was not flexible (P = 0.000). The majority of surgeons and residents believe that they have unconducive and stressful work environments and that working as a maxillofacial surgeon is stressful. Conclusion: The study results suggest that there is increased occupational stress among oral and maxillofacial surgeons and residents in Saudi Arabia. This study highlights the need for stress management programs to minimize stress factors at the workplace and to ensure a healthy working environment for the practitioners. abstract_id: PUBMED:37496375 Surgeons' perspectives on operation report documentation. Background: Operation report documentation is essential for safe patient care and team communication, yet it is often imperfect. This qualitative study aims to understand surgeons' perspectives on operation report documentation, with surgeons reviewing cleft palate repair operation reports. It aims to determine how surgeons write an operation report (in narrative and synoptic report formats) and explore the consequences of incomplete documentation on patient care. Methods: A qualitative semi-structured interview was conducted with cleft surgeons who were asked to consider operation reports and hypothetical clinical cases. Eight operation reports performed at one centre for cleft palate repair were randomly selected for review. Results: An operation report's purpose (patient care, complication documentation, future surgery, and research) will influence the detail documented. All cleft palate repair operation reports had important information missing. Synoptic report writing provides clearer documentation; however, narrative report writing may be a more robust communication and education tool. Surgeons described a bell-curve response in the level of training required to document an operation report: residents knew too little, fellows documented clearly, and consultants documented briefer reports to highlight salient points. Conclusions: An understanding of surgeons' perspectives on operation report documentation is richer after this study. Surgeons know that clear documentation is essential for patient care and a skill that must be taught to trainees; barriers may be the documentation method. The flexibility of a hybrid operation report format is necessary for surgical care. abstract_id: PUBMED:28049680 Career and Professional Satisfaction of Oral and Maxillofacial Surgery Residents, Academic Surgeons, and Private Practitioners: Does Gender Matter? The aims of this study were to determine whether male vs. female oral and maxillofacial surgery (OMS) residents, academic surgeons (i.e., faculty members), and private practitioners in the U.S. differed in their general career satisfaction and job/professional satisfaction. Survey data were collected in 2011-12 from 267 OMS residents (response rate 55%), 271 OMS academic surgeons (response rate 31%), and 417 OMS private practitioners (response rates 13% web-based survey and 29% postal mail survey). The results showed that while the male vs. female OMS private practitioners and academic surgeons did not differ in their career satisfaction, the female residents had a lower career satisfaction than the male residents (on four-point scale with 4=most satisfied: 3.03 vs. 3.65; p<0.01). The male vs. female OMS private practitioners and academic surgeons also did not differ in their job satisfaction.
However, the female residents agreed less, on average, that they were able to practice OMS in the way they wanted, felt less proud to be an oral and maxillofacial surgeon, were less satisfied with their career, and were more likely to consider a career change in the next five years than the male residents. While these male and female oral and maxillofacial surgeons in private practice and academia did not differ in their career and job satisfaction, the male and female residents differed significantly, with female residents reporting a significantly poorer career and job satisfaction than male residents. Future research needs to explore ways to improve career and professional satisfaction of female OMS residents. abstract_id: PUBMED:34407435 Training Cardiac Surgeons: Safety and Requirements. To analyze whether cardiac surgical residents can perform their first surgeries without compromising patients' safety or outcomes, by comparing their performance and results to those of senior surgeons. All documented CABGs conducted between 2002 and 2020 were included. Surgeries were divided according to the experience level of the main surgeon (defined by the number of CABGs conducted by him/her) using the following thresholds: 1000; 150; 80 and 35. This resulted in 5 groups: senior surgeons (the reference group); attending surgeons; fellow surgeons; advanced residents and new residents. The primary endpoint was 30-day mortality. Secondary endpoints included a list of intra- and postoperative parameters (including in-hospital complications). A multivariable analysis was conducted. 16,486 CABGs were conducted by 66 different surgeons over a period of 18 years. Multivariable analysis did not find significant differences between the groups in either the primary or the secondary endpoints. Skin-to-skin time correlated significantly with experience level, as new residents needed almost 30% more time than senior surgeons (234 vs 180 minutes). With suitable supervision by experienced surgeons, patient selection and sufficient resources (longer duration of surgery), surgical residents can perform CABGs with good results and without compromising the patient's outcome. abstract_id: PUBMED:34831709 Evaluation of Stress Levels of Trainee Cardiac Surgery Residents during Training Interventions Using Physiological Stress Parameters. Background: This study analysed the psychological and psycho-emotional stress in cardiac surgery. Using heart rate variability (HRV) analysis, it is possible to record intraoperative objective stress responses in surgeons. The aim of the study was to use HRV parameters to assess the postulated higher stress levels of cardiac surgeons in training compared to experienced senior cardiothoracic surgeons in exactly the same work situation, in order to make qualification-differentiated statements about physiological stress during surgical interventions. Methods: During surgical teaching procedures, long-term ECG data (n = 15 each) for two operating residents and their assisting senior physicians were recorded. Time and frequency domain HRV parameters were analysed. Results: The time-related parasympathetic-dominated HRV parameters RMSSD (19.5 ms vs. 28.1 ms), NN50 (297.67 vs. 693.40), and cardiac interval mean RR (692.8 ms vs. 737.3 ms) indicate a higher stress level in the operating residents compared to the experienced surgeons. The higher stress index (11.61 vs. 8.86) confirms this.
Conclusion: Compared to experienced surgeons, operating residents showed lower parasympathetic activity and higher stress levels during cardiac surgery training procedures. abstract_id: PUBMED:28139201 Do female surgeons learn or teach differently? Background: Gender and/or gender-stereotypes might influence surgical education. We hypothesized that female surgeons might focus their learning and teaching differently from male surgeons. Methods: Residents and surgeons (multi-institutional) individually recorded preoperatively discussed learning objectives (LO) for matching cases. Narratives were classified as knowledge-based, skill-based, or attitude-based. Multinomial logistic regression analyses were performed with LO as the dependent variable; independent variables were resident/surgeon gender, PGY level, timing of entry-to-procedure date, and quarter of year. Results: 727 LOs from 125 residents (41% female) and 49 surgeons (20% female) were classified. Female residents were 1.4 times more likely to select knowledge over skill. With female surgeons, residents were 1.6 and 2.1 times more likely to select knowledge over skill and attitude over skill than if the surgeon was male. PGY 4/5 residents chose attitude-based LOs more often than junior residents. Conclusion: Resident gender, surgeon gender, and year of training influence learning objectives. Whether this reflects gender stereotyping by residents or differences in attending teaching styles awaits further exploration. abstract_id: PUBMED:26153113 Dissecting Attending Surgeons' Operating Room Guidance: Factors That Affect Guidance Decision Making. Purpose: The amount of guidance provided by the attending surgeon in the operating room (OR) is a key element in developing residents' autonomy. The purpose of this study is to explore factors that affect attending surgeons' decision making regarding OR guidance provided to the resident. Methods: We used video-stimulated recall interviews (VSRI) throughout this 2-phase study. In Phase 1, 3 attending surgeons were invited to review separately 30- to 45-minute video segments of their prerecorded surgical operations to explore factors that influenced their OR guidance decision making. In Phase 2, 3 attending surgeons were observed and documented in the OR (4 operations, 341 min). Each operating surgeon reviewed their videotaped surgical performance within 5 days of the operation to reflect on factors that affected their decision making during the targeted guidance events. All VSRI were recorded. Thematic analysis and manual coding were used to synthesize and analyze data from VSRI transcripts, OR observation documents, and field notes. Results: A total of 255 minutes of VSRI involving 6 surgeons and 7 surgical operations from 5 different procedures were conducted. A total of 13 guidance decision-making influence factors from 4 categories were identified (Cohen's κ = 0.674): setting (case schedule and patient morbidity), content (procedure attributes and case progress), resident (current competency level, trustworthiness, self-confidence, and personal traits), and attending surgeon (level of experience, level of comfort, preferred surgical technique, OR training philosophy, and responsibility as surgeon). A total of 5 factors (case schedule, patient morbidity, procedure attributes, resident current competency level, and trustworthiness) influenced attending surgeons' pre-OR guidance plans. "OR training philosophy" and "responsibility as surgeon" were anchor factors that affected attending surgeons' OR guidance decision-making patterns.
Conclusions: Surgeons' OR guidance decision making is a dynamic process that is influenced by 13 situational factors. These factors can be used by residency programs to tailor strategies designed to increase resident autonomy in the OR. abstract_id: PUBMED:36592620 Residents Can Do It! A Training Program in Laparoscopic Liver Surgery for General Surgery Residents. Introduction: Laparoscopic liver resections (LLRs) constitute an area of surgery that has been kept away from residents in their hands-on training. The aim of our study is to assess the feasibility and the value of a didactic "step-by-step" program for LLR performed by residents using the swine training model. Methods: From May 2018 to November 2019, three hands-on workshops were held. The protocol involved the performance of cholecystectomy, liver mobilization, and minor and major hepatectomies. The participants' performance results in terms of operative time, blood loss, conversion, trainers' intervention, and intraoperative mortality were recorded. The first workshop comprised 30 residents who had previously participated in laparoscopic surgery workshops. In the second workshop, after six residents dropped out due to residency completion, the findings for the remaining 24 residents were compared to those for 24 junior-attending surgeons who did not follow the protocol and had not performed LLR previously, and to another 24 residents in a third workshop, who had not taken the training program before but followed the protocol. Results: All residents fully completed the surgical procedures. Trained residents achieved better operative times and less blood loss compared to junior-attending surgeons (p < 0.017); however, the remaining parameters were comparable. When compared to non-trained residents, those who underwent training achieved significantly better results only in operative times (p < 0.001). Conclusion: A continuous LLR "step-by-step" training program on swine for residents is feasible, and the "step-by-step" protocol is a valuable tool for proper surgical education. abstract_id: PUBMED:31633540 Happiness in Plastic Surgery: A Cross-Sectional Analysis of 595 Practicing Plastic Surgeons, Fellows, Residents, and Medical Students. Background: Despite a heightened appreciation for wellness in medicine, there exists little information specific to plastic surgery. The purpose of this research was to assess happiness within the field of plastic surgery. Methods: A cross-sectional study was conducted in July of 2017 by distributing an American Society of Plastic Surgeons sponsored survey to a random cohort of current practicing American Society of Plastic Surgeons members, residents and fellows. In addition, the same survey was sent to medical students applying to integrated plastic surgery residency. Total happiness scores (Subjective Happiness Scale) were averaged and compared between and within surveyed groups. Results: A total of 595 individuals completed surveys, including 287 practicing surgeons, 116 residents, 12 fellows, and 180 medical students. Differences in happiness scores between the groups were statistically significant (P < 0.01). For practicing physicians, happiness scores were significantly greater for those more than 20 years out from training (P < 0.01). Furthermore, a significantly positive correlation was found between practice expectations coming out of residency and happiness scores (Pearson correlation coefficient, 0.2; P < 0.01).
Conclusions: Despite the prevalence of burnout and mental health disorders associated with a career in medicine, plastic surgeons and trainees report high levels of happiness. Practicing plastic surgeons report increased happiness further out from training and when meeting practice expectations coming out of training. Otherwise, there were no significant differences in happiness between groups. Regarding sex, it is encouraging to report no significant sex discrepancies with happiness in a field where women still face significant adversity. Answer: No, residents do not receive the same OR guidance as surgeons report. A study exploring the differences in surgical residents' and attending surgeons' perceptions of OR guidance in prerecorded surgical cases found that there was a significant difference between the two groups. Attending surgeons agreed on the presence of guidance in 69.8% of events and consistently identified the type of OR guidance in 91.4% of those events. However, surgical residents only agreed with attending surgeons on the presence of guidance in 61.25% of the events. Moreover, there was significant disagreement between surgical residents and attending surgeons in the type of OR guidance, with residents only identifying 54.8% of concordant guidance behaviors in the same guidance category as both surgeons. This indicates that surgical residents and attending surgeons have different perceptions of both the presence and the type of OR guidance (PUBMED:24931416).
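Both guidance studies in this record report rater agreement as Cohen's kappa (0.874 and 0.319 in PUBMED:24931416, 0.674 in PUBMED:26153113). As a reference for how such values arise, here is a short Python sketch of kappa = (p_o - p_e) / (1 - p_e) on hypothetical event labels; the abstracts publish only the aggregate statistics, so the category names and counts below are illustrative:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical guidance-type labels for 20 events rated by an attending
# surgeon and a resident (categories loosely echo the taxonomy's
# teaching/assisting distinction).
surgeon  = ["teach"] * 8 + ["assist"] * 6 + ["direct"] * 6
resident = ["teach"] * 7 + ["assist"] * 3 + ["direct"] * 4 + ["teach"] * 2 + ["assist"] * 4
print(f"kappa = {cohens_kappa(surgeon, resident):.3f}")
```

Values near 0.3, as between residents and attendings in the study, mean agreement only modestly above chance; values near 0.87, as between the two attendings, mean near-consistent labeling.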
Instruction: Enucleation: is wrapping the implant necessary for optimal motility? Abstracts: abstract_id: PUBMED:12918553 Enucleation: is wrapping the implant necessary for optimal motility? Purpose: To compare the mobility of wrapped and unwrapped hydroxyapatite orbital implants after enucleation. Methods: Fifteen consecutive adult patients underwent enucleation with hydroxyapatite implant placement. These patients randomly received either a wrapped or unwrapped implant. A masked observer measured the motility of implants at 6 weeks after implantation. The conjunctiva was marked and measured with a ruler. These measurements were used to compare the motility of the wrapped versus unwrapped implant. Results: The mean elevation for the wrapped group (n = 7) was 4 mm and for the unwrapped group (n = 8) was 3.9 mm. Depression was 4.1 mm for the wrapped group and 3.6 mm for the unwrapped group. Medial excursions were 5.3 mm for the wrapped group and 5.0 mm for the unwrapped group. Lateral excursions were 3.7 mm for the wrapped group and 3.5 mm for the unwrapped group. A repeated-measures analysis of variance was performed, and no statistically significant difference was found between the two groups. Conclusions: We conclude that placement of unwrapped hydroxyapatite implants after enucleation provides essentially the same motility as wrapped implants. This technique also saves time and money and eliminates the possibility of infectious disease transmission from donor wrapping materials. abstract_id: PUBMED:30393295 A Wireless Implant for Gastrointestinal Motility Disorders. Implantable functional electrical stimulation (IFES) has demonstrated its effectiveness as an alternative treatment option for diseases incurable pharmaceutically (e.g., retinal prosthesis, cochlear implant, spinal cord implant for pain relief). However, the development of IFES for gastrointestinal (GI) tract modulation is still limited due to the poorly understood GI neural network (gut–brain axis) and the fundamental difference among activating/monitoring smooth muscles, skeletal muscles and neurons. This inevitably imposes different design specifications for GI implants. This paper thus addresses the design requirements for an implant to treat GI dysmotility and presents a miniaturized wireless implant capable of modulating and recording GI motility. This implant incorporates a custom-made system-on-a-chip (SoC) and a heterogeneous system-in-a-package (SiP) for device miniaturization and integration. An in vivo experiment using both rodent and porcine models is further conducted to validate the effectiveness of the implant. abstract_id: PUBMED:32186227 Optimal path generation in scala tympani and path planning for robotic cochlear implant of perimodiolar electrode. In this study, a new optimal path generation method was proposed and a path planning strategy for robotic cochlear implantation of the perimodiolar electrode was designed. The centerline of the scala tympani channel was taken as the optimal implant path of the perimodiolar electrode, which aimed to reduce the damage of the electrode to the cochlea during implantation. First, the three-dimensional cochlear model was reconstructed based on the micro-computed tomography images of the cochlea, and it was re-segmented to obtain the cross sections of the scala tympani at different angles. Then, an image processing method was used to determine the central point of the scala tympani cross sections.
The cubic B-spline interpolation method was used to fit these discrete central points to generate the optimal path. Finally, the coordinate information of the optimal path was combined with the stylet extraction state of the perimodiolar electrode to conduct the path planning for the robotic cochlear implant, and the result was sent to the robot for inverse kinematic solution to obtain the robot motion trajectory. The robotic cochlear implant experiment was performed with a model of the scala tympani. The results showed that the maximum implant force based on path planning was 0.084 N, and the maximum implant force without path planning was 0.134 N. The optimal path generation and the path planning method effectively help to reduce the damage of the electrode to the cochlea. abstract_id: PUBMED:32304165 Restriction of eye motility in patients with RETINA IMPLANT Alpha AMS. Purpose: To evaluate the motility of the eye in patients with the RETINA IMPLANT Alpha AMS. Methods: Eye motility was determined in eight gaze directions in ten blind retinitis pigmentosa patients, who had received the RETINA IMPLANT Alpha AMS, before implantation of the subretinal implant and at six time-points up to one year after. Results: The analysis of eye motility showed a restriction in the upgaze and gaze to the temporal side directly after surgery in eight of the nine patients included. The degree of motility restriction decreased continuously with recovery during the observation time. One year after surgery, eye motility was still restricted in the majority of patients, especially in the upgaze to the temporal side at 20° (five of seven patients). Conclusion: Retinal implants with intraorbital parts (e.g. connecting cables) caused restriction in the temporal and superior viewing directions in the majority of patients. Although this restriction might be cosmetically visible, this limitation in eye motility has no effects on the monocular vision and the implant's efficacy for daily use. abstract_id: PUBMED:27432325 Flagellar motility is necessary for Aeromonas hydrophila adhesion. Adhesion to host surface or cells is the initial step in bacterial pathogenesis, and the adhesion mechanisms of the fish-pathogenic bacterium Aeromonas hydrophila were investigated in this study. First, a mutagenesis library of A. hydrophila that contained 332 random insertion mutants was constructed via mini-Tn10 Km mutagenesis. Four mutants displayed the most attenuated adhesion. Sequence analysis revealed that the mini-Tn10 insertion sites in the four mutant strains were flgC (GenBank accession number KX261880), cytb4 (GenBank accession number JN133621), rbsR (GenBank accession number KX261881) and flgE (GenBank accession number JQ974982). To further study the roles of flgC and flgE in the adhesion of A. hydrophila, some biological characteristics of the wild-type strain B11, the mutants M121 and M240, and the complemented strains C121 and C240 were investigated. The results showed that the mutation in flgC or flgE led to a significant reduction or abolishment of the flagellar motility of A. hydrophila. flgC was not necessary for flagellar biosynthesis but was necessary for the full motility of A. hydrophila; flgE was involved in both flagellar biosynthesis and motility. Flagellar motility is necessary for A. hydrophila to adhere to the host mucus, which suggests that flagellar motility plays crucial roles in the early infection process of this bacterium.
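The cochlear path-planning abstract above (PUBMED:32186227) hinges on one concrete numerical step: fitting a cubic B-spline through the discrete cross-section centre points to obtain a smooth centerline for the implant path. A sketch of that step with SciPy, using a synthetic noisy spiral as a stand-in for the real micro-CT-derived points (which are not available here):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Stand-in centre points: a noisy 3-D spiral, shape (3, 40).
t = np.linspace(0, 4 * np.pi, 40)
centers = np.vstack([
    (1 + 0.1 * t) * np.cos(t),
    (1 + 0.1 * t) * np.sin(t),
    0.05 * t,
]) + np.random.default_rng(2).normal(scale=0.01, size=(3, 40))

# Cubic B-spline (k=3) through the discrete centre points; s sets smoothing.
tck, u = splprep(centers, k=3, s=0.001)

# Resample the fitted curve densely to obtain a smooth centerline path.
u_fine = np.linspace(0, 1, 500)
path = np.array(splev(u_fine, tck))  # shape (3, 500): x, y, z along the path
print(path.shape)
```

The dense resampled path is what would then be handed to the robot-side planner (in the paper, combined with the stylet extraction state and solved through inverse kinematics).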
abstract_id: PUBMED:27158686 Automatic graph-based localization of cochlear implant electrodes in CT. Cochlear Implants (CIs) restore hearing using an electrode array that is surgically implanted into the cochlea. Research has indicated there is a link between electrode location within the cochlea and hearing outcomes; however, comprehensive analysis of this phenomenon has not been possible because techniques proposed for locating electrodes only work for specific implant models or are too labor intensive to be applied on large datasets. We present a general and automatic graph-based method for localizing electrode arrays in CTs that is effective for various implant models. It relies on a novel algorithm for finding an optimal path of fixed length in a graph and achieves maximum localization errors that are sub-voxel. These results indicate that our methods could be used on a large scale to study the link between electrode placement and outcome across electrode array types, which could lead to advances that improve hearing outcomes for CI users. abstract_id: PUBMED:37237613 Pre-Planning the Surgical Target for Optimal Implant Positioning in Robotic-Assisted Total Knee Arthroplasty. Robotic-assisted total knee arthroplasty can attain highly accurate implantation. However, the target for optimal positioning of the components remains debatable. One of the proposed targets is to recreate the functional status of the pre-diseased knee. The aim of this study was to demonstrate the feasibility of reproducing the pre-diseased kinematics and strains of the ligaments and, subsequently, use that information to optimize the position of the femoral and tibial components. For this purpose, we segmented the pre-operative computed tomography of one patient with knee osteoarthritis using an image-based statistical shape model and built a patient-specific musculoskeletal model of the pre-diseased knee. This model was initially implanted with a cruciate-retaining total knee system according to mechanical alignment principles, and an optimization algorithm was then configured seeking the optimal position of the components that minimized the root-mean-square deviation between the pre-diseased and post-operative kinematics and/or ligament strains. With concurrent optimization for kinematics and ligament strains, we managed to reduce the deviations from 2.4 ± 1.4 mm (translations) and 2.7 ± 0.7° (rotations) with mechanical alignment to 1.1 ± 0.5 mm and 1.1 ± 0.6°, and the strains from 6.5% to lower than 3.2% over all the ligaments. These findings confirm that adjusting the implant position from the initial plan allows for a closer match with the pre-diseased biomechanical situation, which can be utilized to optimize the pre-planning of robotic-assisted surgery. abstract_id: PUBMED:32064837 Convergent implant transmucosal collar and healing abutment: aesthetics influence on soft tissues. A clinical study. Two conditions are necessary for a correct and functional prosthetic implant rehabilitation: maintaining pre-implant soft tissue health and stability of bone tissue, in terms of implant osseointegration and maintenance of optimal crestal attachment levels. In addition to these parameters, which are necessary for the longevity of the restoration, one of the main aspects of therapy is the achievement of a final aesthetic that reproduces as faithfully as possible the natural anatomy of the lost tooth and the associated soft tissues. To achieve this last objective, an implant system was designed and used by our group.
This implant is characterized by a convergent trans-mucosal emergence fixture associated with a progressive closing system of trans-mucosal healing pillars (healing abutments). This guarantees, together with the micro- and macrostructure of the implant, an immediate and highly aesthetic condition of the peri-implant soft tissues and, at the same time, an optimal seal on the convergent neck of the implant itself. abstract_id: PUBMED:31815974 Contour Management of Implant Restorations for Optimal Emergence Profiles: Guidelines for Immediate and Delayed Provisional Restorations. Adequate management of the implant-supported restoration has become an important task when trying to obtain optimal esthetic outcomes. The transgingival area must be developed to maintain or influence the final appearance of the peri-implant soft tissues. Two distinct zones within the implant abutment/crown can be identified: the critical contour and the subcritical contour. Their design and subsequent alteration may impact the peri-implant soft tissue architecture, including the gingival margin level and zenith, labial alveolar profile, and gingival color. Defining these two areas helps clarify how to process soft tissue contours and may additionally improve the necessary communication with the laboratory. Since there are many protocols for placing implants, it is worthwhile to determine similarities in the contouring and macrodesign of their corresponding provisional restorations. Therefore, the purpose of this paper is to discern the general characteristics of the critical and subcritical contours for provisional restorations made for immediate and delayed implants in order to obtain guidelines for daily clinical practice. abstract_id: PUBMED:25614976 Design optimization of a radial functionally graded dental implant. In this work, we use FEA to test the hypothesis that a low-modulus coating of a cylindrical zirconia dental implant would reduce the stresses in the peri-implant bone and we use design optimization and the rule of mixture to estimate the elastic modulus and the porosity of the coating that provides optimal stress shielding. We show that a low-modulus coating of a dental implant significantly reduces the maximum stresses in the peri-implant bone without affecting the average stresses, thus creating a potentially favorable biomechanical environment. Our results suggest that a resilient coating is capable of reducing the maximum compressive and tensile stresses in the peri-implant bone by up to 50% and the average stresses in the peri-implant bone by up to 15%. We further show that a transitional gradient between the high-modulus core and the low-modulus coating is not necessary and that for the considered zirconia/HA composite the optimal thickness of the coating is 100 µm, with its optimal elastic modulus at the lowest value considered, 45 GPa. Answer: Based on the study described in abstract PUBMED:12918553, wrapping the implant is not necessary for optimal motility following enucleation. The study compared the mobility of wrapped and unwrapped hydroxyapatite orbital implants after enucleation in fifteen adult patients. The motility of the implants was measured at 6 weeks post-implantation, and the results showed no statistically significant difference in elevation, depression, medial excursions, or lateral excursions between the wrapped and unwrapped implant groups. The mean elevation for the wrapped group was 4 mm and for the unwrapped group was 3.9 mm.
Depression was 4.1 mm for the wrapped group and 3.6 mm for the unwrapped group. Medial excursions were 5.3 mm for the wrapped group and 5.0 mm for the unwrapped group. Lateral excursions were 3.7 mm for the wrapped group and 3.5 mm for the unwrapped group. Therefore, the study concluded that unwrapped hydroxyapatite implants provide essentially the same motility as wrapped implants. Additionally, using unwrapped implants saves time and money and eliminates the risk of infectious disease transmission from donor wrapping materials.
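A note on the computational core of the PUBMED:27158686 abstract above: the step it names, finding an optimal path of fixed length in a graph, can be illustrated with a standard dynamic program over path length. The sketch below is a generic illustration under assumed inputs (a node list and an edge-cost dictionary); it is not the authors' published algorithm, whose graph construction from CT intensities and cost model are more involved.

```python
# Minimal sketch: minimum-cost path visiting exactly K nodes in a
# weighted directed graph. The graph (nodes, edges, costs) is a
# stand-in for the CT-derived graph described in the abstract.
import math

def min_cost_fixed_length_path(nodes, edges, K):
    """Return (cost, path) for the cheapest path with exactly K nodes.

    nodes: iterable of hashable node ids
    edges: dict mapping (u, v) -> non-negative edge cost
    """
    best = {1: {v: 0.0 for v in nodes}}  # best[k][v]: cheapest k-node path ending at v
    back = {}                            # (k, v) -> predecessor of v on that path
    for k in range(2, K + 1):
        best[k] = {}
        for (u, v), w in edges.items():
            if u in best[k - 1]:
                cand = best[k - 1][u] + w
                if cand < best[k].get(v, math.inf):
                    best[k][v] = cand
                    back[(k, v)] = u
    if not best.get(K):
        return math.inf, []              # no path of the requested length exists
    end = min(best[K], key=best[K].get)  # cheapest endpoint among K-node paths
    path, v = [end], end
    for k in range(K, 1, -1):            # walk the back-pointers to recover the path
        v = back[(k, v)]
        path.append(v)
    return best[K][end], path[::-1]

# Toy usage: a 4-node chain with a costly shortcut.
edges = {("a", "b"): 1.0, ("b", "c"): 1.0, ("c", "d"): 1.0, ("a", "d"): 5.0}
print(min_cost_fixed_length_path("abcd", edges, 4))  # (3.0, ['a', 'b', 'c', 'd'])
```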
Instruction: Do race, ethnicity, and psychiatric diagnoses matter in the prevalence of multiple chronic medical conditions? Abstracts: abstract_id: PUBMED:23552429 Do race, ethnicity, and psychiatric diagnoses matter in the prevalence of multiple chronic medical conditions? Background: The proportion of people in the United States with multiple chronic medical conditions (MCMC) is increasing. Yet, little is known about the relationship that race, ethnicity, and psychiatric disorders have with the prevalence of MCMCs in the general population. Methods: This study used data from wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (N=33,107). Multinomial logistic regression models adjusting for sociodemographic variables, body mass index, and quality of life were used to examine differences in the 12-month prevalence of MCMC by race/ethnicity, psychiatric diagnosis, and the interactions between race/ethnicity and psychiatric diagnosis. Results: Compared to non-Hispanic Whites, Hispanics reported lower odds of MCMC and African Americans reported higher odds of MCMC after adjusting for covariates. People with psychiatric disorders reported higher odds of MCMC compared with people without psychiatric disorders. There were significant interactions between race and psychiatric diagnosis associated with rates of MCMC. In the presence of certain psychiatric disorders, the odds of MCMC were higher among African Americans with psychiatric disorders compared to non-Hispanic Whites with similar psychiatric disorders. Conclusions: Our study results indicate that race, ethnicity, and psychiatric disorders are associated with the prevalence of MCMC. As the rates of MCMC rise, it is critical to identify which populations are at increased risk and how to best direct services to address their health care needs. abstract_id: PUBMED:36601333 Racial/Ethnic Differences in Psychiatric and Medical Diagnoses Among Autistic Adults. Background: Racial/ethnic disparities in access to diagnostic services are pervasive for autistic children. However, few studies have examined racial/ethnic health disparities among autistic adults, who commonly experience higher rates of health conditions than non-autistic adults. We aimed to examine the intersection of autism and race/ethnicity in association with psychiatric and medical diagnoses. Methods: The study population included adult members of Kaiser Permanente Northern California enrolled from 2008 to 2012. We ascertained 1507 adults who had an autism diagnosis documented in their electronic medical records. We sampled a matched control group of adults without an autism diagnosis (N = 15,070) at a 10:1 ratio. Our sample was 46% White, 17% Hispanic, 16% Asian, 7% Black, and 14% other race/ethnicity. We compared health diagnoses (a) between autistic and non-autistic adults within strata of race/ethnicity and (b) across race/ethnicity within strata of autistic and non-autistic adults. Lastly, we examined the interaction between autism and race/ethnicity on both multiplicative and additive scales. Results: Autistic adults were more likely to be diagnosed with most medical and psychiatric conditions compared with their non-autistic counterparts of the same race/ethnicity. Among autistic adults, Black, Hispanic, and Asian adults were less likely to be diagnosed with psychiatric conditions and Black and Hispanic autistic adults were more likely to be diagnosed with obesity than their White counterparts.
In interaction models, we found that adults who were Black and autistic were disproportionately less likely to be diagnosed with psychiatric conditions and autoimmune disease and more likely to be diagnosed with hypertension than expected. Conclusion: Health vulnerabilities may be compounded at the intersection of autism and race/ethnicity. Future research should continue to apply an intersectional lens toward understanding and addressing these disparities. Our findings likely underestimate the health disparities that exist in uninsured autistic adults and those living in other parts of the United States. abstract_id: PUBMED:26863467 Race and Ethnic Group Differences in Comorbid Major Depressive Disorder, Generalized Anxiety Disorder, and Chronic Medical Conditions. This study tested whether race and ethnic group differences exist for lifetime major depressive disorder and/or general anxiety disorder with one or more chronic medical conditions. Data from the National Survey of American Life, which included 3570 African American, 1438 Caribbean Black, and 891 non-Hispanic White adults, were analyzed. Outcomes included at least one and multiple chronic medical conditions, from a list of 14 medical conditions (e.g., arthritis, cancer, diabetes, kidney disease, stroke, heart disease, etc.). Logistic regressions were fitted to data to determine how the association between major depressive disorder, general anxiety disorder, and one or more chronic medical conditions varies across race and ethnicity. Lifetime major depressive disorder (but not lifetime general anxiety disorder) was associated with at least one chronic medical condition among African Americans and Caribbean Blacks, but not non-Hispanic Whites. Lifetime major depressive disorder was similarly associated with multiple chronic medical conditions among African Americans, Caribbean Blacks, and non-Hispanic Whites. For Caribbean Blacks, stronger associations were found between major depressive disorder and general anxiety disorder with one or more chronic medical conditions compared to African Americans and non-Hispanic Whites. Findings suggest that race and ethnicity may shape the links between comorbid psychiatric disorders and chronic medical conditions. Mental health screening of individuals with chronic medical conditions in primary health-care settings may benefit from tailoring based on race and ethnicity. More research is needed to understand why associations between physical and mental health vary among race and ethnic groups.
Among adults with multiple chronic conditions in late middle age nearing entry to Medicare and of all racial and ethnic groups, those without private insurance have a higher probability of having depressive symptoms. Private insurance and Medicare can mediate the risk of cognitive impairment among non-Hispanic Whites with multiple chronic conditions and among Blacks regardless of the number of chronic conditions. Our study has implications for policies aiming to reduce disparities among individuals coping with multiple chronic conditions. abstract_id: PUBMED:31396543 Race and Ethnic Differences in the Associations between Cardiovascular Diseases, Anxiety, and Depression in the United States. Introduction: Although cardiovascular diseases and psychiatric disorders are linked, it is not yet known if such links are independent of comorbid medical diseases and if these associations depend on race and ethnicity. This study aimed to determine if the associations of cardiovascular diseases with general anxiety disorder (GAD) and major depressive episode (MDE) are independent of comorbid medical diseases and if these links differ among African Americans, Caribbean Blacks, and Non-Hispanic Whites. Methods: This cross-sectional study enrolled African American, Caribbean Black, and Non-Hispanic White adults who had participated in the National Survey of American Life (NSAL), 2001-2003. Data on socioeconomics (age, sex, and education level) were collected. Self-reported physician-diagnosed cardiovascular diseases (heart disease, atherosclerosis, hypertension, and stroke) and chronic medical conditions (peptic ulcer, cancer, diabetes, liver disease, kidney disease, asthma, other chronic respiratory diseases, sickle cell anemia, and glaucoma) were measured. Twelve-month GAD and MDE were measured using the Composite International Diagnostic Interview (CIDI). Logistic regressions were fitted to data to determine if the associations between cardiovascular diseases and 12-month GAD and 12-month MDE are independent of socio-economic status and comorbid chronic medical diseases across race and ethnic groups. Results: Above and beyond other medical conditions, heart disease and atherosclerosis were associated with 12-month GAD among Caribbean Blacks, but not African Americans or non-Hispanic Whites. Hypertension was associated with 12-month MDE among African Americans, and heart disease was associated with 12-month MDE among Caribbean Blacks. None of the cardiovascular diseases were associated with 12-month MDE among non-Hispanic Whites, while all the other medical conditions were controlled. Conclusion: Our study showed race and ethnicity may be associated with specific patterns of comorbidity between cardiovascular diseases and 12-month MDE and GAD. In other words, the link between psychiatric disorders and cardiovascular diseases may depend on race and ethnicity. More research is needed to explore the behavioral and mental health profile of individuals with heart disease based on race and ethnicity. Race and ethnicity should inform mental health evaluation of patients with cardiovascular diseases.
Method: Fifteen chronic conditions and their dyads and triads were investigated using Hawaii Medicare 2012 data. For each condition, a multivariable logistic regression model was used to investigate differences in race/ethnicity, adjusting for subject characteristics. Results: Of the 84,212 beneficiaries, 27.8% were Whites, 54.6% Asians, and 5.2% Hispanics. Racial/ethnic disparities were prevalent for most conditions. Compared with Whites, Asians, Hispanics, and Others showed significantly higher prevalence rates in hypertension, hyperlipidemia, diabetes, and most dyads or triads of the chronic conditions. However, Whites had higher prevalence rates in arthritis and dementia. Discussion: Race/ethnicity may need to be considered when making clinical decisions and developing health care programs to reduce health disparities and improve quality of life for older individuals with chronic conditions. abstract_id: PUBMED:31763576 Exploring Racial Disparities in Mental Health Diagnoses and Neighborhood Disorganization Among an Urban Cohort of Children and Adolescents with Chronic Medical Conditions. Objective: This article describes the demographic distribution of, and association between, neighborhood disorganization and mental health diagnosis by race in a large cohort of urban children with chronic medical conditions. Methods: Data for this study were from Coordinated Healthcare for Complex Kids (CHECK), a health care demonstration project funded by the Center for Medicare and Medicaid Innovation. We conducted regression analyses to examine the relationship between neighborhood disorganization and mental health diagnosis among 6,458 children enrolled in CHECK. Results: The most common mental health diagnoses were mood disorders (8.6%), Attention-Deficit/Hyperactivity Disorder (7.4%), conduct disorders (6.1%), and anxiety disorders (4.8%). Black children had the highest neighborhood disorganization scores compared with other racial/ethnic categories. However, Black children had the lowest proportion of mental health diagnoses. Lower neighborhood disorganization was associated with having a mental health diagnosis; however, when adding race/ethnicity to the model, neighborhood disorganization no longer was significant. Conclusions: Level of neighborhood disorganization was highly correlated with racial/ethnic composition of the neighborhoods, and Black children disproportionately resided in highly disorganized neighborhoods compared with other groups. Neighborhood disorganization may not have sufficient variability within the racial/ethnic categories, which may explain the absence of an interaction between race/ethnicity and mental health diagnosis. abstract_id: PUBMED:33797298 Prevalence of Psychiatric Diagnoses and Psychiatric Consultation Service Utilization on Inpatient Medical Units. Objective: This practice improvement project sought to determine the prevalence of psychiatric diagnoses among patients admitted to a community hospital's inpatient medical units and which diagnoses were serviced by the hospital's psychiatric consultation service. Method: Electronic medical record data on adult patients of five medical units admitted with a psychiatric condition between October 1, 2019, and December 31, 2019, were used. Psychiatric ICD-10 (International Classification of Diseases, 10th Revision) codes and diagnosis names extracted were categorized into seven major diagnostic groups. A total of 687 adult patients with 82 psychiatric ICD-10 codes were analyzed using descriptive statistics. 
Results: Substance-related and addictive disorders were the most prevalent psychiatric diagnoses. Ninety-six percent (n = 658) of patients residing on medical floors with psychiatric disorders were hospitalized for a principal medical problem. Seventy-three cases received psychiatric consultations during their stay. Sixty percent (n = 44) of those cases had psychiatric disorders from two or more diagnostic categories. Conclusions: Multidisciplinary, team-based health care delivery models that include a psychiatric nurse can provide an effective approach to treat patients in community hospitals with multiple psychiatric and medical comorbidities. Hospitals could take a significant role in providing substance use disorder treatment and equipping medical nurses with training to competently care for patients with psychiatric disorders on medical units. Further research into the prevalence and impact of patients with co-occurring and multiple psychiatric diagnoses in community hospitals is needed to implement effective health care delivery models and provide appropriate treatment options in the community. abstract_id: PUBMED:33420609 COVID-19 Hospitalization by Race and Ethnicity: Association with Chronic Conditions Among Medicare Beneficiaries, January 1-September 30, 2020. Objectives: We assessed the association between hospitalization for illness from COVID-19 infection and chronic conditions among Medicare beneficiaries (MBs) with fee-for-service (FFS) claims by race and ethnicity for January 1-September 30, 2020. Methods: We used 2020 monthly Medicare data from January 1-September 30, 2020, reported to the Centers for Medicare and Medicaid Services to compute hospitalization rates per 100 COVID-19 MBs with FFS claims who were hospitalized (ICD-10-CM codes: B97.29 before April 1, 2020; ICD-10-CM codes: U07.1 from April 1, 2020, onward) with or without selected chronic conditions. We used logistic regression to estimate adjusted odds ratios with 95% confidence intervals for association of person-level rate of being hospitalized with COVID-19 and each of 27 chronic conditions by race/ethnicity, controlling for age, sex, and urban-rural residence among MBs. Results: COVID-19-related hospitalizations were associated with all selected chronic conditions, except osteoporosis and Alzheimer disease/dementia among COVID-19 MBs. The top five conditions with the highest odds for hospitalization among COVID-19 MBs were end-stage renal disease (adjusted odds ratios (aOR): 2.15; 95% CI: 2.10-2.21), chronic kidney disease (aOR: 1.54; 95% CI: 1.52-1.56), acute myocardial infarction (aOR: 1.45; 95% CI: 1.39-1.53), heart failure (aOR: 1.43; 95% CI: 1.41-1.44), and diabetes (aOR: 1.37; 95% CI: 1.36-1.39). Conclusions: Racial/ethnic disparities in hospitalization rate persist among MBs with COVID-19, and associations of COVID-19 hospitalization with chronic conditions differ among racial/ethnic groups in the USA. These findings indicate the need for interventions in racial/ethnic populations at the highest risk of being hospitalized with COVID-19. abstract_id: PUBMED:35621126 Rates of diagnoses of sleep disorders in children with chronic medical conditions. Study Objectives: This investigation examines sleep disorder (SD) diagnoses in a large population of children and adolescents with chronic medical conditions (CMCs). Little is known about SD diagnoses in this population. The large population used in this study allowed examination of SD rates by CMC type and demographics. 
Methods: Data were from the Coordinated Health Care for Complex Kids (CHECK) project designed for Medicaid-funded children and adolescents with at least 1 CMC from a large metropolitan area. The study population (n = 16,609) was limited to children and adolescents, 0 to 18 years of age. SD and CMC diagnoses were obtained from Medicaid claims data. Results: Fourteen percent of the population (mean age of 9.1 years [standard deviation = 5.2]; 35.8% African American; 56.4% male; 77% with more than 1 CMC) received a sleep disorder diagnosis. The most frequent diagnosis was sleep-disordered breathing (11.2%), followed by nocturnal enuresis (1.2%) and insomnia (1%). SDs were diagnosed more frequently in those with multiple CMCs than in those with 1 CMC (19.7% vs 5.8%; P < .001). Insomnia rates in Hispanic/Latinx (1.2%) and African American (0.8%) children and adolescents were significantly lower (both P < .001) than in Caucasians (3.5%). Odds of receiving a sleep diagnosis varied among CMCs. Conclusions: Our analysis of Medicaid claims data of a large urban cohort offers detailed information about the rates of sleep diagnoses and suggests underdiagnosis of SDs in this vulnerable, high-risk, primarily ethnic minority population. Underrecognition of sleep disorders has short- and long-term health and economic consequences. Study results may help clinicians implement appropriate SD screening and management for children and adolescents with CMCs. Citation: Adavadkar PA, Pappalardo AA, Glassgow AE, et al. Rates of diagnoses of sleep disorders in children with chronic medical conditions. J Clin Sleep Med. 2022;18(8):2001-2007. Answer: Yes, race, ethnicity, and psychiatric diagnoses do matter in the prevalence of multiple chronic medical conditions (MCMC). Studies have found that there are significant differences in the prevalence of MCMC based on race and ethnicity, as well as the presence of psychiatric disorders. For instance, compared to non-Hispanic Whites, Hispanics reported lower odds of MCMC, while African Americans reported higher odds of MCMC after adjusting for covariates. People with psychiatric disorders reported higher odds of MCMC compared to those without psychiatric disorders. There were also significant interactions between race and psychiatric diagnosis associated with rates of MCMC, indicating that the odds of MCMC were higher among African Americans with psychiatric disorders compared to non-Hispanic Whites with similar psychiatric disorders (PUBMED:23552429). Furthermore, research has shown that autistic adults are more likely to be diagnosed with most medical and psychiatric conditions compared to their non-autistic counterparts of the same race/ethnicity. Among autistic adults, Black, Hispanic, and Asian adults were less likely to be diagnosed with psychiatric conditions and Black and Hispanic autistic adults were more likely to be diagnosed with obesity than their White counterparts (PUBMED:36601333). Race and ethnicity have also been found to shape the links between comorbid psychiatric disorders and chronic medical conditions. For example, lifetime major depressive disorder was associated with at least one chronic medical condition among African Americans and Caribbean Blacks, but not non-Hispanic Whites. Caribbean Blacks showed stronger associations between major depressive disorder and general anxiety disorder with one or more chronic medical conditions compared to African Americans and non-Hispanic Whites (PUBMED:26863467).
These findings suggest that race and ethnicity are important factors to consider when examining the prevalence and management of multiple chronic medical conditions, particularly in the context of psychiatric comorbidities. It is critical to identify populations at increased risk and direct healthcare services to address their specific needs, taking into account the complex interplay between race, ethnicity, and mental health (PUBMED:23552429; PUBMED:36601333; PUBMED:26863467).
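Several of the abstracts above (PUBMED:23552429, PUBMED:36601333) describe testing race-by-diagnosis interactions in logistic models on both multiplicative and additive scales. The sketch below illustrates the general approach with hypothetical column names; none of the studies' patient-level data are public, and the OR-based RERI shown is only an approximation of additive interaction that is reasonable for rare outcomes.

```python
# Minimal sketch: logistic model with a race x psychiatric-diagnosis
# interaction, summarized on the multiplicative scale (product-term OR)
# and the additive scale (RERI). Column names are hypothetical; `black`
# and `psych_dx` are assumed to be 0/1 indicator variables.
import numpy as np
import statsmodels.formula.api as smf

def interaction_summary(df):
    model = smf.logit("mcmc ~ black * psych_dx + age + sex + bmi", data=df).fit(disp=False)
    b = model.params

    # Multiplicative scale: odds ratio for the product term.
    or_interaction = np.exp(b["black:psych_dx"])

    # Additive scale: RERI = OR11 - OR10 - OR01 + 1, with ORs taken
    # relative to the doubly unexposed (non-Black, no diagnosis) group.
    or11 = np.exp(b["black"] + b["psych_dx"] + b["black:psych_dx"])
    or10 = np.exp(b["black"])
    or01 = np.exp(b["psych_dx"])
    reri = or11 - or10 - or01 + 1
    return or_interaction, reri
```

A RERI above zero would indicate that the joint burden of the two exposures exceeds the sum of their separate effects, which is the additive-scale reading of the interactions reported in PUBMED:23552429.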
Instruction: Mirabegron causes relaxation of human and rat corpus cavernosum: could it be a potential therapy for erectile dysfunction? Abstracts: abstract_id: PUBMED:27124860 Mirabegron causes relaxation of human and rat corpus cavernosum: could it be a potential therapy for erectile dysfunction? Objective: To examine the effects of mirabegron, a selective β3-adrenoceptor agonist that has recently been approved for the treatment of overactive bladder (OAB), on erectile function. Stimulation of β3-adrenoceptors localised in cavernosal smooth muscle cells may play a physiological role in mediating penile erection, and offer a beneficial pharmacological action for patients who have OAB and erectile dysfunction (ED). Materials And Methods: Corpus cavernosal (CC) specimens were obtained from patients with ED and Peyronie's disease undergoing penile prosthesis implantation. Erectile responses were also evaluated in vivo after intracavernosal injection (ICI) of mirabegron in anaesthetised rats. Mirabegron-elicited relaxation responses (10^-8 to 10^-3 M) on phenylephrine-induced contraction were seen in human CC (HCC) and rat CC strips in isolated organ-bath studies. The effects of inhibitors, namely L-NAME [N(G)-nitro-L-arginine methyl ester, a competitive inhibitor of nitric oxide synthase (NOS), 100 μM], ODQ [1H-(1,2,4)oxadiazolo(4,3-a)quinoxalin-1-one, a soluble guanylyl cyclase (sGC) inhibitor, 30 μM], methylene blue (a NOS and sGC inhibitor, 20 μM), SR59230A (β3-adrenoceptor blocker, 1 μM), and fasudil [Rho-associated protein kinase (ROCK) inhibitor, 0.1 μM], on mirabegron-induced relaxation responses were evaluated. Responses to mirabegron were compared with responses to isoprenaline and nebivolol. Immunohistochemistry was used to localise β3-adrenoceptors and ROCK in CC smooth muscle cells. In vivo rat data were expressed as intracavernosal pressure (ICP)/mean arterial pressure, and total ICP. Results: Mirabegron resulted in a relaxation of phenylephrine-evoked CC contractions in a concentration-dependent manner and SR59230A antagonised the mirabegron-induced relaxations in HCC and rat CC. The other inhibitors, L-NAME, ODQ, and methylene blue, did not affect the mirabegron-induced relaxation responses. Mirabegron relaxation responses at concentrations of 0.1-10 μM were enhanced by fasudil (ROCK inhibitor) in rat but not in HCC strips. KCl-induced contractions in HCC and rat CC were partially inhibited by mirabegron. In vivo, ICI of mirabegron (doses of 0.1-1 mg/kg) had a minor effect on ICP when compared with vehicle administration. Immunohistochemistry data showed β3-adrenoceptors localised in the smooth muscle cells of the HCC and rat CC. Conclusions: Mirabegron markedly relaxed isolated CC strips by activating β3-adrenoceptors independently of the NO-cGMP pathway. There is also evidence of a close functional link between β3-adrenoceptors and the RhoA/ROCK pathway. These results may support further clinical studies using combinations of mirabegron with ROCK and phosphodiesterase type 5 inhibitors (PDE5i) for the treatment of ED, especially in patients who do not respond to PDE5i therapy. abstract_id: PUBMED:31228454 Mirabegron elicits rat corpus cavernosum relaxation and increases in vivo erectile response. Mirabegron is the first β3-adrenoceptor agonist approved on the market and may offer beneficial pharmacological action in patients with overactive bladder and erectile dysfunction.
Here, we further investigate the mechanisms by which mirabegron induces rat corpus cavernosum (CC) relaxation. Adult male Wistar rats were used. The CC were isolated for in vitro functional assays and β-adrenoceptor subtype mRNA expression evaluation. Animals were treated orally with mirabegron (30 mg/kg, 3 h), tadalafil (10 mg/kg, 3 h) or both for intracavernous pressure (ICP) measurement. Intracellular levels of cAMP and cGMP were also determined. The β1-, β2- and β3-adrenoceptor subtypes were expressed in rat CC. Mirabegron produced concentration-dependent CC relaxations that were unaffected by the β1-, β2- or β3-adrenoceptor antagonists atenolol (1 μM), ICI-118,551 (1 μM) and L748,337 (10 μM), respectively. Mirabegron-induced relaxations were not affected by the phosphodiesterase type 4 inhibitor, rolipram, or the adenylyl cyclase selective inhibitor, SQ 22,536. Potassium channel or calcium influx blockade is not involved in mirabegron-induced relaxations. In contrast, mirabegron produced rightward shifts in the contractile response induced by the α1-adrenoceptor agonist, phenylephrine. Finally, cavernous nerve stimulation caused frequency-dependent ICP increases, which were significantly increased in rats treated with mirabegron to a degree similar to that in tadalafil-treated rats, without promoting significant cAMP or cGMP accumulation. Together, our results demonstrate that mirabegron induced CC relaxation through α1-adrenoceptor blockade. Care should be taken to translate the effect of mirabegron into the clinic, especially when using rat as an animal model of erectile dysfunction. abstract_id: PUBMED:30978726 Mirabegron, A Selective β3-Adrenoceptor Agonist Causes an Improvement in Erectile Dysfunction in Diabetic Rats. Aim: To investigate the possible beneficial effect of mirabegron [a selective β3-adrenoceptor (AR) agonist] treatment on erectile dysfunction (ED) in streptozotocin-induced diabetic rats. Methods: Sprague-Dawley rats (n=20) were divided into two groups: control group and streptozotocin-induced diabetic group. In vivo erectile responses were evaluated after intracavernosal injection of mirabegron (0.4 mg/kg) in rats. The relaxation responses to electrical field stimulation (EFS, 10 Hz), sodium nitroprusside (SNP, 10 nM) and sildenafil (1 μM) of corpus cavernosum (CC) strips were examined after the incubation with mirabegron (10 μM). β3-ARs expression and localization were determined by Western blot and immunohistochemical analyses in CC tissue. Results: In vivo erectile responses of diabetic rats [intracavernosal pressure (ICP) / mean arterial pressure, 0.17±0.01] were decreased, which were restored after administration of mirabegron (0.75±0.01, P<0.001). The basal ICP (7.1±0.6 mmHg) in diabetic rats was markedly increased after mirabegron (36.1 ± 5.4 mmHg, P<0.01). Mirabegron caused marked relaxation in diabetic rat CC after phenylephrine precontraction. The relaxation responses to EFS and sildenafil were reduced in diabetic CC, which were increased in the presence of mirabegron. Mirabegron enhanced SNP-induced relaxation response in both groups. The expression and immunoreactivity of β3-ARs localized to CC smooth muscle were observed in control and diabetic rats.
Conclusions: This is the first study to show that intracavernosal administration of mirabegron improved erectile function and neurogenic relaxation of CC in diabetic rats. These results may be supported by further studies using combinations of mirabegron and phosphodiesterase type 5 (PDE5) inhibitors for the treatment of diabetic ED, especially in patients who do not respond to PDE5 inhibitor therapy. abstract_id: PUBMED:25164497 Latest pharmacotherapy options for benign prostatic hyperplasia. Introduction: Benign prostatic hyperplasia (BPH) is a very common condition in men over 50 years, often resulting in lower urinary tract symptoms (LUTS). Medical therapy aims at improving quality of life and preventing complications. The range of drugs available to treat LUTS is rapidly expanding. Areas Covered: Silodosin is a relatively new α1-adrenoreceptor antagonist that is selective for the α1A-adrenergic receptor. While causing smooth muscle relaxation in the lower urinary tract, it minimizes blood pressure-related adverse effects. Tadalafil, a phosphodiesterase type 5 inhibitor, is a drug recently approved for the treatment of BPH/LUTS that challenges the standard therapy with α1-blockers, especially in men with concomitant erectile dysfunction (ED). Mirabegron is the first β3-adrenoceptor agonist approved for the treatment of symptoms of overactive bladder. BPH-related detrusor overactivity (DO) may be successfully targeted by mirabegron. Gonadotropin-releasing hormone antagonists, intraprostatic injections with NX-1207, and vitamin D3 receptor analogues exerted beneficial effects on LUTS but need further evaluation in clinical studies. Expert Opinion: Choosing the right treatment should be guided by patients' symptoms, comorbidities and potential side effects of available drugs. Silodosin is a valid option for the elderly and for people taking antihypertensive drugs. BPH patients affected by ED can target both conditions with continuous tadalafil therapy. The encouraging data on mirabegron use in BPH-DO have to be further assessed in larger prospective randomized clinical trials.
In this context, combination therapy with α(1)-blockers and antimuscarinics has been recommended. Treatment with a 5α-reductase inhibitor for 1 year improves urinary symptoms and flow rate by reducing prostatic volume in men with benign prostatic enlargement. A pooled analysis showed that long-term (2 or 4 years) treatment with a 5α-reductase inhibitor reduced the rate of progression to acute urinary retention and surgery. Combination therapy with a 5α-reductase inhibitor and an α(1)-blocker was shown to provide a rapid improvement in lower urinary tract symptoms and to reduce the relative risk of acute urinary retention and benign prostatic hyperplasia-related surgery. Phosphodiesterase inhibitors might target a nitric oxide-cyclic guanosine monophosphate pathway in the prostate, urethra and bladder. Phosphodiesterase-5 inhibitors (sildenafil or tadalafil) were shown to provide clinically relevant improvements in both male lower urinary tract symptoms and erectile dysfunction.
Intraprostatic injections with botulinum toxin, NX-1207, and PRX302 need further evaluation but might be treatment alternatives in the future. Similarly, vitamin D3 receptor analogues (e.g., elocalcitol), gonadotropin-releasing hormone antagonists (e.g., cetorelix), and modulators of the cannabinoid system (e.g., fatty acid amide hydrolase inhibitors) need further evaluation in clinical studies. Other compounds, such as transient receptor potential vanilloid antagonists, Rho kinase inhibitors, purinergic receptor blockers, and endothelin targeting drugs, are still at experimental stages. Summary: Novel drugs for the treatment of male LUTS have been introduced recently. Clinical practice along with further trials will have to prove their value, along with other compounds that are still in their early phase of development. abstract_id: PUBMED:34215861 Mirabegron improves erectile function in men with overactive bladder and erectile dysfunction: a 12-week pilot study. Phosphodiesterase type 5 inhibitors (PDE5i) are the only approved oral treatment for erectile dysfunction (ED) in the US, and alternative management remains necessary when this treatment fails or is contraindicated. Targeting pathways other than the NO-cGMP pathway and/or combining this approach with PDE5i may introduce new treatments for men who are unresponsive to PDE5i. This study aims to evaluate whether Mirabegron improves erectile function in men with concurrent overactive bladder and mild to moderate ED. Twenty subjects, 40-70 years old, registering International Index of Erectile Function (IIEF) score 11-25 and International Prostate Symptom Score 8-20, were treated with Mirabegron therapy for 12 weeks. Study participants were re-administered IIEF and OAB-q questionnaires on weeks 2, 4, 8, and 12 and assessed for adverse events. The primary and secondary endpoints were an increase in the IIEF-5 score of 4 units and a decrease in the Overactive Bladder questionnaire (OAB-q) symptom severity score of 10 units between study time points. Thirteen men completed the 12-week study. Mirabegron treatment improved the IIEF-5 scores in five patients (38.4%) by 4 points or more, whereas IIEF-5 scores were not affected by Mirabegron treatment in eight patients (61.5%). There were no clinically relevant decreases in the IIEF-5 score. Significant improvements were observed in intercourse satisfaction at week eight compared to baseline (p = 0.01). Orgasmic function and sexual desire were not affected by Mirabegron treatment. As expected, Mirabegron treatment reduced OAB symptoms based on OAB-q short form (p = 0.006) and OAB-q total health-related quality of life (HRQL) scores compared to baseline (p = 0.03). Residual bladder volumes were not affected by treatment. No serious side effects were reported during the study period. This study suggests that Mirabegron may improve both EF and OAB-related symptoms in some individuals without causing serious adverse events.
Pharmacologic interventions, especially anticholinergic medications, are the first-line option for treating OAB in patients with PD. However, it is important to balance the therapeutic benefits of these drugs with their potential adverse effects. Intra-detrusor botulinum toxin injections and electrical stimulation have also been used to treat OAB in these patients, with variable efficacy. Mirabegron is a β3-agonist that can also be used for OAB with superior tolerability to anticholinergics. Desmopressin is effective for the management of nocturnal polyuria, which has been reported to be common in PD. Deep brain stimulation (DBS) surgery is effective in improving urinary functions in PD patients. Sexual dysfunction is also common in PD. Phosphodiesterase type 5 inhibitors are first-line therapies for PD-associated erectile dysfunction (ED). Treatment with apomorphine sublingually is another therapeutic option for PD patients with ED. Pathologic hypersexuality has occasionally been reported in patients with PD, linked to dopaminergic agonists. The first step of treatment of hypersexuality consists of reducing the dose of dopaminergic medication. This review summarizes the epidemiology, pathogenesis, risk factors, genetics, clinical manifestations, diagnostic tests, and management of PD. Lastly, the urologic outcomes and therapies are reviewed. Answer: The research on mirabegron, a selective β3-adrenoceptor agonist, suggests that it could be a potential therapy for erectile dysfunction (ED). Studies have shown that mirabegron causes relaxation of human and rat corpus cavernosum (CC) in a concentration-dependent manner, which is a critical process in achieving an erection (PUBMED:27124860). The relaxation effect of mirabegron on CC was observed to be independent of the nitric oxide-cGMP pathway, which is the primary mechanism of action for the current first-line ED treatments, phosphodiesterase type 5 inhibitors (PDE5i) (PUBMED:27124860). This indicates that mirabegron could offer a different therapeutic pathway for ED, potentially benefiting patients who do not respond to PDE5i therapy. A subsequent study found, however, that mirabegron-induced CC relaxation occurred through α1-adrenoceptor blockade rather than β3-adrenoceptor activation, urging caution when translating rat findings to the clinic (PUBMED:31228454). In vivo studies in rats have shown that mirabegron treatment can significantly increase erectile response, comparable to the effect of tadalafil, a PDE5i (PUBMED:31228454). Additionally, mirabegron has been shown to improve erectile function and neurogenic relaxation of CC in diabetic rats, suggesting its potential use in diabetic ED patients who are unresponsive to PDE5 inhibitors (PUBMED:30978726). A pilot study in men with overactive bladder and mild to moderate ED found that mirabegron treatment improved erectile function in some individuals without causing serious adverse events (PUBMED:34215861). This suggests that mirabegron may have a role in improving both erectile function and overactive bladder-related symptoms. Overall, the evidence points to mirabegron as a promising candidate for ED therapy, particularly in patients who do not respond to traditional PDE5i treatments. However, further clinical studies are needed to fully establish its efficacy and safety profile in the treatment of ED (PUBMED:27124860; PUBMED:31228454; PUBMED:30978726; PUBMED:34215861).
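The organ-bath experiments above (e.g., the 10^-8 to 10^-3 M mirabegron ranges in PUBMED:27124860) summarize concentration-dependent relaxation with fitted concentration-response curves. A minimal curve-fitting sketch follows; the data points are invented for illustration, since the abstracts report no raw curves, and potency would typically be reported as the fitted pEC50.

```python
# Minimal sketch: fitting a four-parameter Hill (log-logistic) curve to
# hypothetical organ-bath relaxation data to estimate pEC50.
import numpy as np
from scipy.optimize import curve_fit

def hill(log_c, bottom, top, log_ec50, slope):
    """Percent relaxation as a function of log10(concentration, M)."""
    return bottom + (top - bottom) / (1 + 10 ** ((log_ec50 - log_c) * slope))

log_conc = np.arange(-8.0, -2.5, 0.5)           # 10^-8 to 10^-3 M
relaxation = np.array([2, 5, 9, 18, 33, 52,     # hypothetical % relaxation
                       68, 80, 88, 92, 94])

params, _ = curve_fit(hill, log_conc, relaxation, p0=[0, 100, -5, 1])
print(f"estimated pEC50: {-params[2]:.2f}")     # pEC50 = -log10(EC50)
```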
Instruction: Is it necessary to perform surveillance endomyocardial biopsies in heart transplant recipients? Abstracts: abstract_id: PUBMED:8719449 Is it necessary to perform surveillance endomyocardial biopsies in heart transplant recipients? Background: Routine surveillance endomyocardial biopsies to diagnose unsuspected rejection are performed at 3- to 12-month intervals after heart transplantation. From 1979 to 1989, surveillance biopsies were routinely performed as a part of the yearly evaluation. Methods: A retrospective analysis of the follow-up data showed that "routine surveillance biopsies" had an extremely low yield, and, on the basis of the results of this study, we discontinued performing surveillance biopsies beyond 6 months after transplantation. To validate these results, we compared the outcome of two groups of patients who had similar demographics and identical immunosuppression, except that in one group the surveillance biopsies were not performed. Results: No difference was found in either actuarial survival rate or freedom from late rejection between the two groups. Conclusions: These findings confirm that routine surveillance heart biopsies beyond 6 months after transplantation are not necessary and that they should be performed only if there is clinical suspicion of rejection or as part of a research protocol. abstract_id: PUBMED:35745819 Therapeutic Drug Monitoring of Tacrolimus-Personalized Therapy in Heart Transplantation: New Strategies and Preliminary Results in Endomyocardial Biopsies. Tacrolimus (TAC) is an immunosuppressant drug approved both in the US and in the EU, widely used for the prophylaxis of organ rejection after transplantation. It is a critical-dose drug: low levels in whole blood can lead to low exposure and a high risk of acute rejection, whereas overexposure puts patients at risk for toxicity and infection. Both situations can occur at whole-blood concentrations considered to be within the narrow TAC therapeutic range. We hypothesized a poor correlation between TAC trough concentrations in whole blood and the incidence of acute rejection; therefore, we propose to study TAC concentrations in endomyocardial biopsies (EMBs). We analyzed 70 EMBs from 18 transplant recipients at five scheduled follow-up visits during the first year post-transplant, when closer TAC monitoring is mandatory. We observed five episodes of acute rejection (grade 2R) in three patients (2 episodes at 0.5 months, 2 at 3 months, and 1 at 12 months), when TAC concentrations in EMBs were low (63; 62; 59; 31; 44 pg/mg, respectively), whereas concentrations in whole blood were within the therapeutic range. Our results are preliminary, and further studies are needed to confirm the importance of this new strategy to prevent acute rejection episodes. abstract_id: PUBMED:34189402 Case report of a coronary artery-right ventricular fistula following repeat endomyocardial biopsies in a heart transplant patient. Background: Endomyocardial biopsy (EMB) remains the gold standard for cellular rejection surveillance in heart transplant recipients. Coronary artery fistula formation is a rare, late, and potentially catastrophic complication of repeated endomyocardial biopsies, without contemporary evidence on incidence or management. Case Summary: A 47-year-old male was found to have a fistula between his right ventricle and his left anterior descending artery on an angiogram that was performed as a part of regular screening for coronary allograft vasculopathy.
Given the low shunt fraction, asymptomatic nature, and lack of guidelines on definitive management, the patient is undergoing conservative management with regular surveillance. Discussion: Coronary artery fistulas were once thought to be rare complications of repeated EMB, but the true prevalence is likely to be higher than previously believed. Ideal treatment and monitoring are unknown given the relative rarity of the condition. abstract_id: PUBMED:9256204 Are surveillance endomyocardial biopsies necessary during OKT3 induction therapy? Background: To determine the utility of surveillance endomyocardial biopsies (EMBs) during a 14-day OKT3 induction course after cardiac transplantation, histologic results of the first two EMBs were retrospectively reviewed. Methods: Seventy-three consecutive cardiac transplant recipients who received an OKT3-based quadruple sequential immunosuppressive protocol were analyzed. Patients were predominantly white (85%) and male (72%), with ischemic cardiomyopathy (54%) and a pretransplant panel-reactive antibody level of <10% (93%). Results: The first EMB in 73 patients demonstrated no rejection in 70 patients (96%) and grade 1A rejection in 3 patients (4%). The second EMB showed no rejection in 64 patients (88%), grade 1A or 1B rejection in 8 patients (11%), and grade 3A rejection without hemodynamic compromise in only 1 patient (1%). Absolute CD3+ cells remained below 25 lymphocytes/mm3, and mean trough OKT3 serum levels exceeded 500 ng/ml throughout the 14 days of therapy, demonstrating the immunosuppressive efficacy of OKT3. Posttransplant echocardiograms showed normal left ventricular systolic function. Conclusions: Since 145 of 146 EMBs (99%) demonstrated no or minimal allograft rejection, a large cost savings could be realized if EMBs were performed only when clinically indicated during the 14-day OKT3 induction course in cardiac transplant recipients. Appropriate immunologic monitoring and echocardiographic testing may obviate the need for performing costly EMBs during OKT3 induction without an adverse clinical outcome.
Conclusion: Cardiac amyloidosis can be successfully subtyped without the prior enrichment of amyloid deposits with laser microdissection. abstract_id: PUBMED:31967669 Global longitudinal strain assessment of the left ventricle by speckle tracking echocardiography detects acute cellular rejection in orthotopic heart transplant recipients: A systematic review and meta-analysis. Background: In orthotopic heart transplant recipients, surveillance with endomyocardial biopsy is crucial to detect acute cellular rejection (ACR) early. ACR is a common and serious complication of transplantation with substantial morbidity and mortality. Speckle tracking echocardiography with global longitudinal strain (GLS) assessment of the left ventricle has emerged as a possible noninvasive screening modality. We have conducted a systematic literature review and meta-analysis to evaluate the role of GLS in diagnosing ACR. Methods: The following databases were queried: PubMed, Cochrane Central Register of Controlled Trials (CENTRAL), Scopus, and Embase. We compiled all articles evaluating changes in GLS in comparison to endomyocardial biopsy in ACR dated prior to September 2019. Weighted mean differences (WMD) and 95% confidence intervals (CIs) were pooled by using a random effects model. In order to determine the risk of bias, we used the revised version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Results: Twelve studies met inclusion criteria, of which ten were chosen. These studies encompassed 511 patients and 1267 endomyocardial biopsies. There was a significant difference in GLS between patients who did and did not have ACR proven by biopsy (WMD = 2.18; 95% CI: 1.57-2.78, P < .001; I2 = 76%). The overall sensitivity for GLS in detecting ACR was 78% (CI: 63%-90%, P = .123; I2 = 52.2%) while the overall specificity was 68% (CI: 50%-83%, P < .001; I2 = 88.3%). Conclusion: Global longitudinal strain assessment of the left ventricle by speckle tracking echocardiography is useful in detecting ACR and could potentially reduce the burden of frequent endomyocardial biopsies in heart transplant recipients. abstract_id: PUBMED:33342245 Experience of endomyocardial biopsy from a tertiary care center in India. Background: Endomyocardial biopsy is the gold standard and has a definite role in the surveillance of cardiac allograft rejection. Its role in other cardiac diseases is limited. However, it is required for conclusive diagnosis of a few entities in which it can influence patient management. There are no reported data regarding the utility of endomyocardial biopsy in the Indian population. Thus, this study was undertaken in a tertiary care center in India to assess the utility of endomyocardial biopsy in various cardiac diseases in the context of clinical diagnoses. Methods: All endomyocardial biopsies conducted over a 27-year period were evaluated. Clinical details including indication for biopsy were collected. Histopathological findings were recorded and classified as definitive diagnosis, probable diagnosis with features consistent with the clinical diagnosis, and nonspecific morphological findings. Results: A total of 927 endomyocardial biopsies from 719 patients were reviewed. Endomyocardial biopsy was diagnostic in 12.5% of native cardiac biopsies and 52.1% showed nondiagnostic findings. The most frequent diagnoses were amyloidosis (58.7%) and myocarditis (8.6%). Endomyocardial biopsy had a diagnostic role in evaluation of restrictive cardiac diseases.
Endomyocardial fibrosis and tubercular myocarditis, relatively more prevalent in the Indian population, were also identified. Cases of rheumatic heart disease, desmin cardiomyopathy, and microfibrillar cardiomyopathy were surprise findings, proving the usefulness of endomyocardial biopsy in detecting some rare cardiac conditions. Conclusion: Endomyocardial biopsy is an important tool for the diagnosis of specific cardiac diseases including some rare entities, and for conditions which are more prevalent in our country, requiring biopsy confirmation. abstract_id: PUBMED:8719456 Utility of surveillance biopsies in infant heart transplant recipients. Background: Endomyocardial biopsy remains the primary means of rejection surveillance after orthotopic heart transplantation in adults. Perpetual surveillance endomyocardial biopsy has been questioned, however, because of low yield beyond the early posttransplantation period. This issue has not been adequately studied in the pediatric population. The objectives of this study were to define the rate of rejection in infants undergoing orthotopic heart transplantation, correlate rejection with signs and symptoms, and evaluate the utility of surveillance endomyocardial biopsy. Methods: Records of all patients 24 months of age or younger undergoing orthotopic heart transplantation were reviewed; 38 patients underwent 42 transplantations; 256 endomyocardial biopsies were performed for surveillance, cardiac symptoms, noncardiac symptoms, or lowered immunosuppression. Results: There were 22 rejection episodes of International Society for Heart and Lung Transplantation grade 1B or higher, half of which occurred in neonates 30 days of age or younger. Linearized rejection rates and actuarial freedom from rejection were not different between neonates and older infants. Linearized rejection rates reached a plateau of 0.07 episodes/100 patient-days 3 months after orthotopic heart transplantation. No positive surveillance endomyocardial biopsies were obtained beyond 6 months after orthotopic heart transplantation. The probability of a positive biopsy (International Society for Heart and Lung Transplantation grade 1B or higher) was 20% or more for any other indication (odds ratios for rejection were 12.9 for cardiac symptoms, 3.3 for noncardiac symptoms, and 10.8 for lowered immunosuppression as determined by logistic regression more than 6 months after orthotopic heart transplantation). Conclusions: Rejection rates are not different between neonatal and older infants, and endomyocardial biopsies done solely for surveillance beyond 6 months after orthotopic heart transplantation rarely yield positive results. abstract_id: PUBMED:10933318 Endomyocardial biopsy in pediatric heart transplant recipients: a useful exercise? (Analysis of 1,169 biopsies). The objective of this study was to define the diagnostic yield for endomyocardial biopsy (EMB) procedures performed for various indications in a large pediatric heart transplant population. The endomyocardial biopsy procedure has been employed as the 'gold standard' for rejection surveillance. Previous studies have questioned the value of surveillance EMB beyond the early post-transplant period. We retrospectively reviewed data on 82 pediatric heart transplant recipients with serial EMB. A total of 1,169 EMB were performed during a follow-up period of 2-149 months (median 41 months). EMB were classified by age at transplantation, time from transplant, immunosuppressive regimen used [tacrolimus vs.
cyclosporin A (CsA)] and indication, i.e. surveillance, follow-up after rejection or lowering of immunosuppression, non-specific clinical symptoms and graft dysfunction. During the first year after heart transplantation, surveillance EMB demonstrated significant rejection [International Society for Heart and Lung Transplantation (ISHLT) grade > or = 3A] in 18% of biopsies with the yield being 14-43% for all other indications. Surveillance EMBs 1-5 yr post-transplantation were found to have a lower diagnostic yield in infants (4%, vs. 13% in children) and in patients with favorable first-year rejection history (9% vs. 17% in 'frequent rejectors'). Tacrolimus-based immunosuppression was associated with significantly less rejection, but only in the first year post-transplantation (14% in tacrolimus vs. 24% in CsA surveillance EMB, p = 0.035). Surveillance EMB remains an important diagnostic tool for rejection surveillance during the first 5 years after pediatric heart transplantation. Endomyocardial biopsy is particularly warranted after reduction of immunosuppression and for monitoring for ongoing rejection after treatment of acute rejection episodes. abstract_id: PUBMED:38369817 A retrospective analysis of anticoagulant safety among heart transplant recipients undergoing endomyocardial biopsy. Background: Transvenous endomyocardial biopsy is an invasive procedure which is used to diagnose rejection following an orthotopic heart transplant. Endomyocardial biopsy is widely regarded as low risk with all-cause complication rates below 5% in most safety studies. Following transplant, some patients require therapeutic anticoagulation. It is unknown whether anticoagulation increases endomyocardial biopsy bleeding risk. Methods: Records from 2061 endomyocardial biopsies performed for post-transplant rejection surveillance at our institution between November 2016 and August 2022 were reviewed. Bleeding complications were defined as vascular access-related hematoma or bleeding, procedure-related red blood cell transfusion, and new pericardial effusion. Relative risk and small sample-adjusted 95% confidence interval were calculated to investigate the association between bleeding complications and anticoagulation. Results and Conclusions: The overall risk of bleeding was 1.2% (25/2061 cases). There was a statistically significant increase in bleeding among patients on intravenous (RR 4.46, CI 1.09-18.32) but not oral anticoagulants (RR .62, CI .15-2.63) compared to patients without anticoagulant exposure. There was a trend toward increased bleeding among patients taking warfarin with INR ≥ 1.8 (RR 3.74, CI .90-15.43). Importantly, no bleeding events occurred in patients taking direct oral anticoagulants such as apixaban. Based on these results, intravenous rather than oral anticoagulation was associated with a significantly higher risk of bleeding complications following endomyocardial biopsy. Answer: Surveillance endomyocardial biopsies (EMBs) have traditionally been performed at regular intervals after heart transplantation to detect unsuspected rejection. However, the necessity of routine surveillance EMBs has been questioned. A retrospective analysis from 1979 to 1989 indicated that routine surveillance biopsies beyond 6 months post-transplantation had an extremely low yield in detecting rejection.
Based on these findings, the practice of performing surveillance biopsies beyond 6 months was discontinued, and a comparison of outcomes between two groups of patients—with and without surveillance biopsies—showed no difference in actuarial survival rate or freedom from late rejection. This suggests that routine surveillance heart biopsies beyond 6 months post-transplantation may not be necessary and should be performed only if there is clinical suspicion of rejection or for research purposes (PUBMED:8719449). In pediatric heart transplant recipients, surveillance EMBs beyond 6 months post-transplantation rarely yielded positive results for rejection. The probability of a positive biopsy was significantly higher when there were cardiac symptoms, noncardiac symptoms, or lowered immunosuppression, indicating that surveillance biopsies may not be as useful in infants beyond the early post-transplantation period (PUBMED:8719456). Furthermore, the utility of surveillance EMBs during a 14-day OKT3 induction course after cardiac transplantation was questioned, as the majority of biopsies demonstrated no or minimal allograft rejection. This suggests that appropriate immunologic monitoring and echocardiographic testing may reduce the need for costly EMBs during OKT3 induction without adversely affecting clinical outcomes (PUBMED:9256204). However, it is important to note that EMB remains the gold standard for cellular rejection surveillance, and there are risks associated with repeated biopsies, such as the formation of coronary artery fistulas, which is a rare but potentially catastrophic complication (PUBMED:34189402). In conclusion, while EMB is a critical tool for rejection surveillance, the necessity of routine surveillance biopsies beyond the early post-transplantation period is debatable, and their use may be more judiciously applied based on clinical indications rather than as a fixed protocol for all patients.
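As a side note on the statistics quoted in the anticoagulant-safety abstract above (PUBMED:38369817), figures such as "RR 4.46, CI 1.09-18.32" come from a standard relative-risk calculation on a 2x2 table. The following Python sketch shows the usual log-scale Wald interval; the counts are invented for illustration, and the study's own small-sample adjustment is not reproduced here.

```python
import math

def relative_risk(exposed_events, exposed_total, control_events, control_total):
    """Relative risk with a plain 95% Wald CI on the log scale."""
    rr = (exposed_events / exposed_total) / (control_events / control_total)
    # Standard error of log(RR) for independent binomial counts
    se = math.sqrt(1 / exposed_events - 1 / exposed_total
                   + 1 / control_events - 1 / control_total)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 4 bleeds in 80 biopsies on IV anticoagulation
# versus 21 bleeds in 1981 biopsies without anticoagulant exposure
rr, lo, hi = relative_risk(4, 80, 21, 1981)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```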
Instruction: Fibrillation and sharp-waves: do we need them to diagnose ALS? Abstracts: abstract_id: PUBMED:12365065 Fibrillation and sharp-waves: do we need them to diagnose ALS? Background: Fibrillation/sharp-waves (fibs-sw) are considered an essential neurophysiological marker in Amyotrophic Lateral Sclerosis (ALS). We investigated how frequently an initial electrophysiological (EMG) evaluation, performed in ALS patients, revealed no fibs-sw. Methods: The initial EMG studies from 68 ALS patients were analysed. The presence of fibs-sw in at least two muscles of each limb and two cranial-innervated muscles was assessed. Results: We observed no fibs-sw in cranial-innervated muscles, and 7/15 bulbar-onset patients had no fibs-sw in limb muscles. Two of 28 upper-limb-onset patients, who had a short history before EMG, showed no fibs-sw. All lower-limb-onset patients had fibs-sw. Generally, patients with no fibs-sw had diffuse abundant fasciculations. Conclusions: The absence of fibs-sw should not necessarily preclude the neurophysiological diagnosis of ALS in bulbar-onset patients. Early affected upper-limb-onset patients might show no fibs-sw, but profuse fasciculations associated with mild to moderate denervation-reinnervation in weak muscles are found. abstract_id: PUBMED:26973282 The presence of spontaneous EMG activity in sternocleidomastoid is associated with ventilatory dysfunction in ALS. We investigated electromyography (EMG) of the sternocleidomastoid (SCM) in 128 patients with amyotrophic lateral sclerosis (ALS) including correlation with forced vital capacity (FVC) and ALS Functional Rating Scale scores. The presence of fibrillation potentials and positive sharp waves in the SCM was significantly more frequently observed in patients with an FVC <80% (31/49, 63%) than in patients with an FVC ≥80% (34/79, 43%). This study suggests that the SCM could be concomitantly involved with primary respiratory muscles in ALS. This is of value since needle EMG is a less invasive procedure in SCM than in the diaphragm or intercostal muscles. abstract_id: PUBMED:36474455 ALS Regional Variants (Brachial Amyotrophic Diplegia and Amyotrophic Leg Diplegia): Still A Diagnostic Challenge in Neurology. Purpose: We illustrate three patients with regional amyotrophic lateral sclerosis (ALS) variants and hope to improve accuracy in diagnosis for this rare group of diseases. Case Report: Amyotrophic lateral sclerosis (ALS) represents a broad spectrum of acquired and inherited neurodegenerative conditions involving the upper and lower motor neurons. Typical ALS remains a clinical diagnosis and is usually not difficult to recognize. Still, when it comes to atypical forms of ALS, physicians may face difficulties differentiating them from other neurological diseases, such as multifocal motor neuropathy, chronic inflammatory demyelinating polyneuropathy, and spinal muscular atrophy. Both brachial amyotrophic diplegia (BAD) and leg amyotrophic diplegia (LAD) are considered regional variants of ALS. Here we report two cases of BAD and one case of LAD. All three cases showed progression of the disease on longitudinal follow-up for approximately two years. However, after two years, their disease progressions were slow and confined to their 'regions' of upper or lower limbs. Conclusion: BAD and LAD are unique regional variants of ALS with a significantly better prognosis than typical ALS.
The phenotypic characteristics of regional ALS variants must be recognized when physicians are to tailor advice on disease progression, disease outcome, drug therapy, and end-of-life planning for patients with ALS or ALS variants. abstract_id: PUBMED:8559172 Are fibrillation potentials and positive sharp waves the same? No. Electrodiagnostic medicine consultants report electrical activity in muscle recorded at rest and during voluntary movement by means of waveform and firing rate characteristics. This principle allows us to distinguish fibrillation potentials from positive sharp waves. Although in most cases these two potentials have the same clinical significance, there are at least five different situations in which they do not have an identical meaning: (1) positive sharp waves can be recorded earlier after a peripheral nerve injury than can fibrillation potentials; (2) occasionally, nonclinically significant diffuse positive sharp wave activity may be seen in the absence of fibrillation activity (i.e., "EMG disease"); (3) positive sharp waves may be seen in distal muscles of "normal" subjects without the presence of fibrillation activity or clinical significance; (4) positive sharp waves without fibrillation potentials may be seen following local muscle trauma; and (5) positive sharp waves may be seen alone in some demyelinating polyneuropathies. By accurately describing the observed potentials, the electrodiagnostic medicine consultant may be able to obtain more clinically useful information from an electrodiagnostic study. abstract_id: PUBMED:26399278 State and metropolitan area-based amyotrophic lateral sclerosis (ALS) surveillance. Our objective was to develop state and metropolitan area-based surveillance projects to describe the characteristics of those with ALS and to assist with evaluating the completeness of the National ALS Registry. Because the literature suggested that ethnic/racial minorities have lower incidence of ALS, three states and eight metropolitan areas were selected to over-represent ethnic/racial minorities to have a sufficient number of minority patients. Project activities relied on reports from medical providers and medical records abstraction. The project areas represented approximately 27% of the U.S. population. The combined racial and ethnic distribution of these areas is 64.4% white, 16.0% African-American, 6.7% Asian, and 28.3% Hispanic. Most neurologists did not diagnose or provide care for ALS patients. The number of unique patients reported was close to expected (5883 vs. 6673). Age and gender distribution of patients was similar to the literature. The crude average annual incidence rate was 1.52 per 100,000 person-years, CI 1.44-1.61, and the 2009 prevalence rate was 3.84 per 100,000 population, CI 3.70-3.97. In conclusion, this study represents the largest number of clinically diagnosed ALS patients reported by neurologists in the U.S. Comparison of these data with those in the National ALS Registry will help evaluate the completeness of administrative databases. abstract_id: PUBMED:24951380 A paraneoplastic Sharp syndrome reversible after resection of a benign schwannoma: a paraneoplastic syndrome? Paraneoplastic syndromes commonly occur in malignancies and often precede the first symptoms of the tumor. By definition, paraneoplastic syndromes are only associated with malignancies although some exceptions have been reported, occurring with benign tumors.
We report a patient presenting with a clinical and serological Sharp syndrome, followed a few months later by a cervical schwannoma. Curative surgical resection of the mass resulted in clinical and serological resolution of the Sharp syndrome. To our knowledge, this is the first report of a benign schwannoma complicated by a possible paraneoplastic Sharp syndrome. abstract_id: PUBMED:28600595 Assessing behavioural changes in ALS: cross-validation of ALS-specific measures. The Beaumont Behavioural Inventory (BBI) is a behavioural proxy report for the assessment of behavioural changes in ALS. This tool has been validated against the FrSBe, a non-ALS-specific behavioural assessment, and further comparison of the BBI against a disease-specific tool was considered. This study cross-validates the BBI against the ALS-FTD-Q. Sixty ALS patients, 8% also meeting criteria for FTD, were recruited. All patients were evaluated using the BBI and the ALS-FTD-Q, completed by a carer. Correlational analysis was performed to assess construct validity. Precision, sensitivity, specificity, and overall accuracy of the BBI when compared to the ALS-FTD-Q were obtained. The mean score of the whole sample on the BBI was 11.45 ± 13.06. ALS-FTD patients scored significantly higher than non-demented ALS patients (31.6 ± 14.64, 9.62 ± 11.38; p < 0.0001). A significant large positive correlation between the BBI and the ALS-FTD-Q was observed (r = 0.807, p < 0.0001), and no significant correlations between the BBI and other clinical/demographic characteristics indicate good convergent and discriminant validity, respectively. Overall concordance of 72% was observed. Precision, sensitivity, and specificity for the classification of severely impaired patients were adequate. However, lower concordance in the classification of mild behavioural changes was observed, with higher sensitivity using the BBI, most likely secondary to BBI items which endorsed behavioural aspects not measured by the ALS-FTD-Q. Good construct validity has been further confirmed when the BBI is compared to an ALS-specific tool. Furthermore, the BBI is a more comprehensive behavioural assessment for ALS, as it measures the whole behavioural spectrum in this condition. abstract_id: PUBMED:30634563 MyomiRNAs Dysregulation in ALS Rehabilitation. Amyotrophic lateral sclerosis (ALS) is a rare, progressive, neurodegenerative disorder caused by degeneration of upper and lower motor neurons. Because of lower motor neuron involvement, the disease process leads to progressive muscle atrophy, weakness, and fasciculations, while upper motor neuron involvement leads to spasticity. Muscle atrophy in ALS is caused by a neural dysregulation in the molecular network controlling fast and slow muscle fibers. Denervation and reinnervation processes in skeletal muscle occur in the course of ALS and are modulated by rehabilitation. MicroRNAs (miRNAs) are small, non-coding RNAs that are involved in different biological functions under various pathophysiological conditions. MiRNAs can be secreted by various cell types and they are markedly stable in body fluids. MiR-1, miR-133a, miR-133b, and miR-206 are called "myomiRs" and are considered markers of myogenesis during muscle regeneration and contribute to neuromuscular junction stabilization or sprouting. We observed a positive effect of a standard aerobic exercise rehabilitative protocol conducted for six weeks in 18 ALS patients during hospitalization in our center.
This is a preliminary study, in which we correlated clinical scales with molecular data on myomiRs. After six weeks of moderate aerobic exercise, we found lower serum levels of myomiRs. Our data suggest that circulating miRNAs changed during skeletal muscle recovery in response to physical rehabilitation in ALS. However, no firm conclusions can be made on the ALS-specific effect of exercise on miRNA levels. abstract_id: PUBMED:37147520 Equating norms between the ALS Cognitive Behavioral Screen (ALS-CBS™) and the Edinburgh Cognitive and Behavioural ALS Screen (ECAS) in non-demented ALS patients. Background: The present study aimed at deriving equating norms to estimate scores on the Edinburgh Cognitive and Behavioural ALS Screen (ECAS) based on those on the ALS Cognitive Behavioral Screen (ALS-CBS™) in an Italian cohort of non-demented ALS patients. Methods: ALS-CBS™ and ECAS scores of 293 ALS patients without frontotemporal dementia were retrospectively retrieved. Concurrent validity of the ALS-CBS™ towards the ECAS was tested by covarying for demographics, disease duration and severity, presence of C9orf72 hexanucleotide repeat expansion and behavioural features. A linear-smoothing equipercentile equating (LSEE) model was employed to derive ALS-CBS™-to-ECAS cross-walks. Gaps in LSEE-based estimation were managed via a linear regression-based equating approach. Equivalence between empirical and derived ECAS scores was tested via a two-one-sided test (TOST) procedure for the dependent sample. Results: The ALS-CBS™ predicted the ECAS (β = 0.75), accounting for the vast majority of its variance (60% out of an R2 = 0.71). Consistently, a strong, one-to-one linear association between ALS-CBS™ and ECAS scores was detected (r = 0.84; R2 = 0.73). The LSEE was able to estimate conversions for the full range of the ALS-CBS™, except for raw scores equal to 1 and 6, for which a linear equating-based equation was derived. Empirical ECAS scores were equivalent to those derived with both methods. Discussion: Italian practitioners and researchers have been herewith provided with valid, straightforward cross-walks to estimate the ECAS based on ALS-CBS™ scores in non-demented ALS patients. Conversions herewith provided will help avoid cross-sectional/longitudinal inconsistencies in test adoption within research, and possibly clinical, settings. abstract_id: PUBMED:37109471 Insight into Elderly ALS Patients in the Emilia Romagna Region: Epidemiological and Clinical Features of Late-Onset ALS in a Prospective, Population-Based Study. Few studies have focused on elderly (>80 years) amyotrophic lateral sclerosis (ALS) patients, who represent a fragile subgroup generally not included in clinical trials and often neglected because they are more difficult to diagnose and manage. We analyzed the clinical and genetic features of very late-onset ALS patients through a prospective, population-based study in the Emilia Romagna Region of Italy. From 2009 to 2019, 222 of 1613 incident patients (13.76%) were over 80 years old at diagnosis, with a female predominance (F:M = 1.18). Elderly ALS patients represented 12.02% of patients before 2015 and 15.91% from 2015 onwards (p = 0.024). This group presented with bulbar onset in 38.29% of cases and had worse clinical conditions at diagnosis compared to younger patients, with a lower average BMI (23.12 vs. 24.57 Kg/m2), a higher progression rate (1.43 vs. 0.95 points/month), and a shorter length of survival (a median of 20.77 vs. 36 months).
For this subgroup, genetic analyses have seldom been carried out (25% vs. 39.11%) and are generally negative. Finally, elderly patients underwent less frequent nutritional- and respiratory-supporting procedures, and multidisciplinary teams were less involved at follow-up, except for specialist palliative care. The genotypic and phenotypic features of elderly ALS patients could help identify the different environmental and genetic risk factors that determine the age at which disease onset occurs. Since multidisciplinary management can improve a patient's prognosis, it should be more extensively applied to this fragile group of patients. Answer: Fibrillation potentials and sharp waves are considered essential neurophysiological markers in the diagnosis of Amyotrophic Lateral Sclerosis (ALS). However, their absence does not necessarily preclude the diagnosis of ALS. In a study analyzing initial electromyography (EMG) studies from 68 ALS patients, it was observed that no fibrillation potentials or sharp waves were found in cranial-innervated muscles, and some patients with bulbar-onset or upper-limb-onset ALS showed no fibrillation potentials or sharp waves in limb muscles, especially when they had a short history before the EMG was performed. However, these patients often had diffuse abundant fasciculations and mild to moderate denervation-reinnervation in weak muscles, suggesting that the absence of fibrillation potentials and sharp waves should not necessarily exclude the neurophysiological diagnosis of ALS in these cases (PUBMED:12365065). Furthermore, fibrillation potentials and sharp waves are not identical, and there are situations where they do not have the same clinical significance. For example, positive sharp waves can be recorded earlier after a peripheral nerve injury than fibrillation potentials, and they may appear alone in some demyelinating polyneuropathies (PUBMED:8559172). This distinction is important for electrodiagnostic medicine consultants to obtain more clinically useful information from an electrodiagnostic study. In conclusion, while fibrillation potentials and sharp waves are important markers for ALS diagnosis, their absence, particularly in certain onset types of ALS or early in the disease course, does not rule out the diagnosis. Other clinical features and findings, such as abundant fasciculations and denervation-reinnervation patterns, should be considered in the neurophysiological diagnosis of ALS (PUBMED:12365065).
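A brief numerical aside on the surveillance abstract above (PUBMED:26399278): a crude incidence rate such as "1.52 per 100,000 person-years, CI 1.44-1.61" is simply the case count divided by person-time. A minimal sketch, assuming a simple Poisson model with a normal approximation (the project's exact method is not stated in the abstract) and hypothetical inputs chosen only to land near the published rate:

```python
import math

def crude_rate(cases, person_years, per=100_000):
    """Crude incidence rate with a normal-approximation 95% CI for a Poisson count."""
    scale = per / person_years
    rate = cases * scale
    half_width = 1.96 * math.sqrt(cases) * scale  # SD of a Poisson count is sqrt(count)
    return rate, rate - half_width, rate + half_width

# Hypothetical inputs: case count and person-time are placeholders, not study data
rate, lo, hi = crude_rate(cases=5883, person_years=387_000_000)
print(f"{rate:.2f} per 100,000 person-years (95% CI {lo:.2f}-{hi:.2f})")
```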
Instruction: Does disease activity add to functional disability in estimation of utility for rheumatoid arthritis patients on biologic treatment? Abstracts: abstract_id: PUBMED:26316579 Does disease activity add to functional disability in estimation of utility for rheumatoid arthritis patients on biologic treatment? Objective: Treatment is generally aimed directly at disease activity, and measures such as the DAS28 might therefore present important additional information. Our aim was to develop and validate a model that uses a combination of disease activity (DAS28) and HAQs to estimate EuroQoL 5-dimension scale (EQ5D) utilities. Methods: Longitudinal data from a cohort study in RA patients from the Utrecht Rheumatoid Arthritis Cohort study Group (Stichting Reumaonderzoek Utrecht) who started treatment with a biologic drug were used for mapping and validation. All 702 observations, including DAS28, HAQ and EQ5D assessed at the same time points, were used. The observations were randomly divided into a subset for development of the model (n = 428 observations) and a subset for validation (n = 274). A stepwise multivariable regression analysis was used to test the association of DAS28 (components) and HAQ (domains) with EQ5D. Model performance was assessed using the explained variance (R(2)) and root mean square errors. Observed and predicted utility scores were compared to check for under- or overestimation of the scores. Finally, the performance of the model was compared with published mapping models. Results: Lower DAS28 score and HAQ items dressing and grooming, arising, eating, walking and activities were associated with higher EQ5D scores. The final model had an explained variance of 0.35 and a lower root mean square error as compared with other models tested. The agreement between predicted and observed scores was fair. Conclusion: HAQ components estimate EQ5D better than total HAQ. Adding DAS28 to HAQ components does not result in better utility estimations. abstract_id: PUBMED:33536080 Outcomes in rheumatoid arthritis patients treated with abatacept: a UK multi-centre observational study. Background: Rheumatoid arthritis (RA) is an inflammatory autoimmune disease that causes chronic synovitis, resulting in progressive joint destruction and functional disability, and affects approximately 400,000 people in the UK. This real-world study aimed to describe the characteristics, treatment patterns and clinical outcomes of patients who received abatacept in UK clinical practice. Methods: This was a multi-centre, retrospective, observational study of patients with RA treated with abatacept at four UK centres between 01 January 2013 and 31 December 2017. Data were collected from medical records of each patient from the index date (date of first bDMARD initiation) until the most recent visit, death or end of study (31 December 2017). Results: In total, 213 patients were included in the study. Patients received up to eight lines of therapy (LOTs). Treatment with abatacept, or any other bDMARD, was associated with reductions in DAS28-ESR and DAS28-CRP scores at 6 and 12 months. The distribution of EULAR responses (good/moderate/no response) tended to be more favourable for patients when receiving abatacept than when receiving other bDMARDs (22.8%/41.3%/35.9% versus 16.6%/41.4%/42.1% at 6 months, and 27.9%/36.1%/36.1% versus 21.2%/34.5%/44.2% at 12 months).
Patients receiving abatacept at LOT1 (n = 68) spent significantly longer on treatment compared with patients receiving other bDMARDs (53.4 vs. 17.4 months; p < 0.01); a similar trend was observed for LOT2. Among patients who discontinued after 6 months, a greater proportion experienced infection requiring antibiotics when receiving other bDMARDs compared to those receiving abatacept. Conclusions: RA patients who received bDMARDs, including abatacept, experienced reduced disease activity. When receiving abatacept as first or second line of therapy, patients persisted with treatment significantly longer than those receiving other bDMARDs. abstract_id: PUBMED:34915575 Effectiveness and safety of subcutaneous abatacept in biologic-naïve RA patients at Week 52: A Japanese multicentre investigational study (ORIGAMI study). Objectives: To evaluate the effectiveness and safety of abatacept over 52 weeks in biologic-naïve rheumatoid arthritis (RA) patients with moderate disease activity in the prospective, 5-year, observational study (ORIGAMI study) in Japan. Methods: Abatacept (125 mg) was administered subcutaneously once a week. Clinical outcomes included Simplified Disease Activity Index (SDAI) remission at Week 52 (primary endpoint), Japanese Health Assessment Questionnaire (J-HAQ), EuroQol 5-Dimension Questionnaire (EQ-5D), treatment retention, and safety. The results were compared with those of conventional synthetic disease-modifying antirheumatic drug (csDMARD) controls from the ongoing Institute of Rheumatology, Rheumatoid Arthritis (IORRA) registry. Results: Overall, 325 patients were enrolled, with a mean age of 66.9 ± 12.7 years. The proportion of patients achieving SDAI remission (≤3.3) at Week 52 was 18.9% (95% CI: 14.3-23.6) and low disease activity (≤11) was 53.3% (95% CI: 47.4-59.1). A significant improvement was observed in J-HAQ and EQ-5D over 52 weeks in both the abatacept and csDMARD groups. The probability of abatacept treatment retention at Week 52 was 69.9% (95% CI: 64.7-75.5). Adverse events and serious adverse events were reported in 50.0% and 12.1% of patients, respectively. Conclusions: Abatacept significantly improved disease activity, physical disability, and quality of life for up to 52 weeks in RA patients in a real-world setting. abstract_id: PUBMED:25365086 Vectra DA for the objective measurement of disease activity in patients with rheumatoid arthritis. Quantitative and regular assessment of disease activity in rheumatoid arthritis (RA) is required to achieve treatment targets such as remission and to optimize clinical outcomes. To assess inflammation accurately, predict joint damage and monitor treatment response, a measure of disease activity in RA should reflect the pathological processes resulting in irreversible joint damage and functional disability. The Vectra DA blood test is an objective measure of disease activity for patients with RA. Vectra DA provides an accurate, reproducible score on a scale of 1 to 100 based on the concentrations of 12 biomarkers that reflect the pathophysiologic diversity of RA. The analytical validity, clinical validity, and clinical utility of Vectra DA have been evaluated for patients with RA in registries and prospective and retrospective clinical studies.
As a biomarker-based instrument for assessing disease activity in RA, the Vectra DA test can help monitor therapeutic response to methotrexate and biologic agents and assess clinically challenging situations, such as when clinical measures are confounded by non-inflammatory pain from fibromyalgia. Vectra DA scores correlate with imaging of joint inflammation and are predictive for radiographic progression, with high Vectra DA scores being associated with more frequent and severe progression and low scores being predictive for non-progression. In summary, the Vectra DA score is an objective measure of RA disease activity that quantifies inflammatory status. By predicting risk for joint damage more effectively than conventional clinical and laboratory measures, it has the potential to complement these measures and optimise clinical decision making. abstract_id: PUBMED:33094270 Factors associated with disability in patients with rheumatoid arthritis with persistent moderate disease activity: a retrospective cohort study. Background: Many patients with rheumatoid arthritis (RA) do not attain remission/low disease activity, remaining in a moderate disease activity state (MDAS) with ongoing disability and impaired quality of life (QoL). If patients in persistent MDAS with poor future outcomes could be prospectively identified, they could arguably be treated more intensively. We evaluated baseline factors predicting function (Health Assessment Questionnaire-Disability Index [HAQ-DI] scores) and QoL (3-level EuroQol-5 dimensions questionnaire [EQ-5D-3L] index scores) at 12 months in patients with RA in persistent MDAS in a real-world setting. Methods: Patients with persistent MDAS (Disease Activity Score for 28-joint count based on erythrocyte sedimentation rate [DAS28-ESR] 3.2-5.1 on at least two consecutive outpatient appointments over 12 months) were identified retrospectively from Guy's Hospital RA Centre and analysed in two groups: (1) biologic naïve at baseline or (2) receiving/ever received biologics. The baseline timepoint was the second-visit MDAS DAS28-ESR score; the endpoint was the closest visit to 12 months. Linear regression analyses evaluated relationships between baseline variables and (1) 12-month HAQ-DI scores, (2) 12-month rank-transformed EQ-5D-3L index scores, (3) 12-month changes in HAQ-DI scores, and (4) 12-month changes in EQ-5D-3L index scores. Results: The analysis included 207 biologic-naïve and 188 biologic-experienced patients. All patients had moderate disability (mean HAQ-DI 1.21 and 1.46) and impaired QoL (mean EQ-5D-3L index scores 0.52 and 0.50). Many reported moderate/severe pain (93% and 96%) and showed little change in HAQ-DI and EQ-5D-3L index scores over 12 months. In both biologic-naïve and biologic-experienced groups, multivariate analysis revealed a significant association between baseline HAQ-DI scores and endpoint HAQ-DI scores (β = 0.67, P < 0.001 and β = 0.76, P < 0.001, respectively), 12-month changes in HAQ-DI scores (both β = -0.21, P < 0.001), and 12-month EQ-5D-3L index scores (β = -0.57, P < 0.001 and β = -0.29, P = 0.004, respectively). Baseline EQ-5D-3L index scores were significantly associated with 12-month changes in EQ-5D-3L index scores in both groups (β = -0.73, P < 0.001 and β = -0.40, P = 0.003, respectively). Conclusions: Patients with RA in persistent MDAS experience substantial ongoing physical disability, poor QoL, and pain.
HAQ-DI is an important predictor of future disability and reduced QoL, supporting current national recommendations to measure HAQ-DI in routine care. abstract_id: PUBMED:23818710 Do radiographic joint damage and disease activity influence functional disability through different mechanisms? Direct and indirect effects of disease activity in established rheumatoid arthritis. Objective: To explore the relationship between rheumatoid arthritis (RA) disease activity and functional disability over time, considering indirect (predictive) and direct (concurrent) associations as well as the influence of radiographic joint damage and treatment strategy. Methods: Functional disability [Health Assessment Questionnaire (HAQ)], disease activity [28-joint Disease Activity Score (DAS28)], and radiographic joint damage [Sharp/van der Heijde score (SHS)] were measured in 4 consecutive randomized controlled trials with increasingly intensive (tight control) treatment strategies. Average followup time for the 3 cohorts was 97, 53, and 50 months, respectively. In addition to the current DAS28, the previous DAS28 was used to study the predictive effect of a change in DAS28 on progression of functional disability (HAQ). Finally, it was investigated whether SHS mediated the predictive effect of DAS28. Results: In patients treated with intensive treatment strategies, the progression of HAQ over time was statistically significantly less (p < 0.0001). The predictive influence of DAS28 on HAQ progression increased over the duration of the disease. SHS was not found to influence HAQ progression and did not mediate the predictive effect of DAS28. In the less intensively treated patients, the direct effect of disease activity decreased with disease duration, and contrarily, SHS did influence HAQ progression, but was not found to (fully) mediate the predictive effect of DAS28. Conclusion: In patients with RA treated with modern treatment strategies, there is less functional decline over time. Further, disease activity does predict functional decline but joint damage does not. This might indicate that factors associated with cumulative disease activity but not visible on radiographs can influence functional decline in patients with RA. This further underlines the importance of disease activity as a treatment target in early RA and in established RA. abstract_id: PUBMED:19884622 Treatment of very early rheumatoid arthritis with symptomatic therapy, disease-modifying antirheumatic drugs, or biologic agents: a cost-effectiveness analysis. Background: Long-term control or remission of rheumatoid arthritis (RA) may be possible with very early treatment. However, no optimal first therapeutic strategy has been determined. Objective: To assess the potential cost-effectiveness of major therapeutic strategies for very early RA. Design: Decision analytic model with probabilistic sensitivity analyses. Data Sources: Published data, the National Data Bank for Rheumatic Diseases, and actual 2007 hospital costs. Target Population: U.S. adults with very early RA (symptom duration < or = 3 months). Time Horizon: Lifetime. Perspective: Health care provider and societal. Intervention: 3 management strategies were compared: a symptomatic or "pyramid" strategy with initial nonsteroidal anti-inflammatory drugs, patient education, pain management, and low-dose glucocorticoids, and disease-modifying antirheumatic drugs (DMARDs) at 1 year for nonresponders; early DMARD therapy with methotrexate; and early therapy with biologics and methotrexate.
Outcome Measures: Cost per quality-adjusted life-year (QALY) gained. Results of Base-Case Analysis: By reducing the progression of joint erosions and subsequent functional disability, both early intervention strategies increase quality-adjusted life more than the pyramid strategy and save long-term costs. When the cost of very early intervention is factored in, the cost-effectiveness ratio of the early DMARD strategy is $4849 per QALY (95% CI, $0 to $16,354 per QALY) compared with the pyramid strategy, whereas the benefits gained through the early biologic strategy come at a substantial incremental cost. The early DMARD strategy maximizes the effectiveness of early DMARDs and reserves the use of biologics for patients with more treatment-resistant disease of longer duration, for which the incremental benefit of biologics is greater. Results of Sensitivity Analysis: The early biologic strategy becomes more cost-effective if drug prices are reduced, risk for death is permanently lowered through biologic therapy, patients experience drug-free remission, responders can be selected before therapy initiation, or effective alternative antirheumatic agents are available for patients for whom several biologics have failed. Limitations: Data on the long-term effect of very early therapeutic interventions on the natural progression in disability and joint erosions are limited. The study considered only tumor necrosis factor inhibitors and not the newer biologics. Conclusion: According to the most objective measures of RA progression, very early intervention with conventional DMARDs is cost-effective. The cost-effectiveness of very early intervention with biologics remains uncertain. abstract_id: PUBMED:28864641 Trends in Treatment, Outcomes, and Incidence of Orthopedic Surgery in Patients with Rheumatoid Arthritis: An Observational Cohort Study Using the Japanese National Database of Rheumatic Diseases. Objective: In this study, we investigated the changes in clinical outcome, treatment, and incidence of orthopedic surgery in patients with rheumatoid arthritis (RA) from 2004 to 2014. Methods: Data were studied from the Japanese nationwide cohort database, NinJa (National Database of Rheumatic Diseases by iR-net in Japan), from 2004 to 2014. The time trends in the incidence of orthopedic procedures were analyzed using linear regression analysis. The cross-sectional annual data were compared between 2004 and 2014 to analyze the changes in clinical outcome and treatment. Results: The incidence of orthopedic surgeries in patients with RA consistently decreased from 72.2 procedures per 1000 patients in 2004 to 51.5 procedures per 1000 patients in 2014 (regression coefficient = -0.0028, 95% CI -0.0038 to -0.0019, p < 0.001). The greatest reduction was found in total knee arthroplasty and total hip arthroplasty. Disease activity and functional disability improved significantly over this decade. The proportions of patients receiving methotrexate and biologic disease-modifying antirheumatic drugs significantly increased from 39.6% and 1.7% in 2004 to 63.8% and 27.4% in 2014, respectively. Conclusion: The overall incidence of orthopedic surgeries in patients with RA significantly decreased, accompanied by improved clinical outcomes because of the expanded use of effective drugs; however, the declining trend differed between procedures or locations. The results from the present study suggest that there might be a change in supply and demand for orthopedic surgeries.
abstract_id: PUBMED:29447376 The relationship between depression and biologic treatment response in rheumatoid arthritis: An analysis of the British Society for Rheumatology Biologics Register. Objective: To investigate the relationship between depressive symptoms and treatment response and disease activity in RA over a 1-year follow-up. Methods: Data from the British Society for Rheumatology Biologics Register were used, representing 18,421 RA patients receiving biologic treatment. Depressive symptoms were identified through one of three assessments: reporting a history of depression, the Medical Outcomes Survey 36-item Short Form or the EuroQol five-dimension scale. Logistic regression analyses examined the relationship between baseline depressive symptoms and odds of good treatment response by 1 year. Multilevel models addressed the association between baseline depressive symptoms and disease activity outcomes over 1-year follow-up, adjusting for age, gender, disease duration, comorbidities and baseline disease activity and physical disability. Results: Depression symptoms at biologic treatment initiation were associated with 20-40% reduced odds of achieving a good treatment response at 1 year. Depressive symptoms at baseline were also associated with reduced improvement in disease activity over the course of follow-up. Patients with a history of depression or reporting symptoms of depression according to the EuroQol five-dimension scale showed reduced improvement in tender and swollen joints, patient global assessment and ESR over 1-year follow-up. Patients with depression symptoms according to the 36-item Short Form showed reduced improvement in tender and swollen joints, but not ESR or patient global assessment. Conclusion: Experiencing symptoms of depression at the start of biologics treatment may reduce the odds of achieving a good treatment response, and reduce improvement in disease activity over time. Depression should be managed as part of routine clinical care to optimize treatment outcomes. abstract_id: PUBMED:21909946 Assessment of the utility of visual feedback in the treatment of early rheumatoid arthritis patients: a pilot study. Earlier studies revealed that visual feedback has contributed to the management of neuromuscular as well as psychiatric disorders; however, it has not yet been applied in rheumatology. Visual feedback is a relatively new tool that enables the patient to visualize as well as monitor a real-time change of their disease activity parameters as well as the patient's reported outcome measures. Integrating electronic data recording in the standard rheumatology clinical practice made visual feedback possible. To evaluate the feasibility of using visual feedback in patients with early inflammatory arthritis (EA), and how ubiquitous computing technology can improve the patients' compliance and adherence to therapy, we conducted a double-blind randomized controlled study that included 111 patients diagnosed with EA according to the new ACR/EULAR criteria. All patients received disease-modifying antirheumatic drug (DMARD) therapy and were monitored regularly over the period of 1 year. By the 6th month of treatment, the patients were randomly allocated to an active group (55 patients) to whom the visual feedback (visualization of charts showing the progression of disease activity parameters) was added to their management protocol, and a control group (56 patients) who continued their standard management protocols.
The patients were monitored for another 6-month period. All the patients' disease activity parameters, patient reported outcome measures (PROMs), medications, scores of falls, and cardiovascular risks were recorded electronically. Primary outcome was the change in the patients' adherence to their medications, disease activity score (DAS-28), and PROMs: pain score, patient global assessment, functional disability, and quality of life. Secondary outcome was the answers to a questionnaire completed by every patient in both the active group and control group (using a Visual Analogue Scale) by the end of 1 year of management, to rate from the patient's perspective the impact of the management protocol, whether using the standard or visual feedback approach, on them and their disease. The visual feedback provided a significantly greater reduction in disease activity parameters as well as improvement of the patients' adherence to antirheumatic therapy (P < 0.01). Stopping DMARD therapy because of intolerance was also significantly less frequent in the active group. Concerns about the future were significantly less in the active group, whereas inability to cope with daily life and disease stress were significantly more common among the control group. The improvement of disease activity parameters was associated with improvement in functional disability and quality of life scores. Mean changes in disease parameters showed no significant differences at 3-6 months of therapy but differences were statistically significant at 12-month follow-up (P < 0.01). Medication compliance was significantly correlated with changes in all measured disease parameters. By recording and monitoring disease activity parameters electronically and incorporating the visual feedback approach into clinical practice, a new experience can be created. Visual feedback enabled the patients to see how they are doing regarding their disease activity and helps to optimize their adherence to their treatment. Visual feedback had a positive and significant impact on the disease activity control. Answer: The estimation of utility for rheumatoid arthritis (RA) patients on biologic treatment is influenced by both disease activity and functional disability. Disease activity, as measured by the Disease Activity Score for 28 joints (DAS28), and functional disability, as assessed by the Health Assessment Questionnaire (HAQ), are both important factors in determining the quality of life for RA patients. However, when it comes to estimating utility using the EuroQoL 5-dimension scale (EQ5D), the HAQ components estimate EQ5D better than the total HAQ score. Adding DAS28 to HAQ components does not result in better utility estimations, suggesting that while disease activity is an important aspect of RA, it may not add significant value to the estimation of utility when HAQ components are already considered (PUBMED:26316579). Moreover, studies have shown that biologic treatments, such as abatacept, can lead to reduced disease activity and improved functional disability in RA patients. For instance, abatacept treatment was associated with reductions in DAS28 scores and improvements in physical disability and quality of life, as measured by the Japanese Health Assessment Questionnaire (J-HAQ) and EQ-5D (PUBMED:33536080, PUBMED:34915575). This indicates that effective biologic treatment can positively impact both disease activity and functional disability, which are key contributors to the overall utility and quality of life for RA patients.
Furthermore, the Vectra DA blood test, which provides an objective measure of disease activity in RA, has been shown to correlate with imaging of joint inflammation and predict radiographic progression. High Vectra DA scores are associated with more frequent and severe progression, while low scores are predictive of non-progression, highlighting the importance of disease activity in the functional decline of RA patients (PUBMED:25365086). In summary, while disease activity is a critical factor in the management and prognosis of RA, its addition to functional disability measures may not significantly enhance the estimation of utility for RA patients on biologic treatment. Effective biologic therapies that reduce disease activity can lead to improvements in functional disability and quality of life, which are essential components of utility assessment in RA (PUBMED:26316579, PUBMED:33536080, PUBMED:34915575, PUBMED:25365086).
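To make the mapping idea in PUBMED:26316579 concrete, the sketch below fits an ordinary-least-squares model from HAQ domain scores plus DAS28 to EQ5D utilities and reports the R² and RMSE metrics that study used. The DAS28-ESR formula is the published one; the synthetic data and the plain OLS specification are illustrative assumptions, not the study's exact stepwise model.

```python
import numpy as np

def das28_esr(tjc28, sjc28, esr, global_health):
    """Published DAS28-ESR formula (tender/swollen 28-joint counts, ESR mm/h, patient global 0-100)."""
    return (0.56 * np.sqrt(tjc28) + 0.28 * np.sqrt(sjc28)
            + 0.70 * np.log(esr) + 0.014 * global_health)

rng = np.random.default_rng(0)
n = 428  # size of the development subset in the study
# Synthetic predictors: five HAQ domain scores (0-3) and a DAS28 value
haq = rng.uniform(0, 3, size=(n, 5))
das28 = das28_esr(rng.integers(0, 28, n), rng.integers(0, 28, n),
                  rng.uniform(2, 80, n), rng.uniform(0, 100, n))
X = np.column_stack([np.ones(n), haq, das28])
# Synthetic "observed" utilities, shaped only to make the example runnable
eq5d = 0.9 - 0.12 * haq.mean(axis=1) - 0.03 * das28 + rng.normal(0, 0.1, n)

beta, *_ = np.linalg.lstsq(X, eq5d, rcond=None)  # OLS fit
pred = X @ beta
rmse = float(np.sqrt(np.mean((eq5d - pred) ** 2)))
r2 = 1 - np.sum((eq5d - pred) ** 2) / np.sum((eq5d - eq5d.mean()) ** 2)
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.3f}")
```

On real data, the study's conclusion would show up here as the DAS28 coefficient adding little to the fit once the HAQ domain columns are included.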
Instruction: Inhibition of the iNOS pathway in inflammatory macrophages by low-dose X-irradiation in vitro. Is there a time dependence? Abstracts: abstract_id: PUBMED:12627258 Inhibition of the iNOS pathway in inflammatory macrophages by low-dose X-irradiation in vitro. Is there a time dependence? Background: Low radiation doses (< or = 1.25 Gy), if applied 6 h before or after stimulation, are known to inhibit the inducible nitric oxide synthase (iNOS) pathway in inflammatory macrophages in vitro. We therefore investigated the time dependence and the underlying molecular mechanism of this effect, since it may be involved in the clinically observed anti-inflammatory and analgesic efficacy of low-dose radiotherapy. Material and Methods: Metabolic activity, nitric oxide (NO) production, iNOS- and hemoxygenase 1-(HO-1-)protein and -mRNA expression by macrophages in vitro after stimulation with LPS/IFN-gamma (0.1 microg ml(-1)/100 U ml(-1)) were investigated. Irradiation was performed at 6, 4, 2 h before and 0, 2, 4, 6 h after stimulation with doses ranging from 0.3 to 10 Gy. For each group, three independent experiments were performed over a period of 30 h with sampling intervals of 3 h. Results: In stimulated macrophages, metabolic activity was not affected by radiation doses up to 10 Gy. A dose-dependent modulation of the cumulative NO production was observed with significant inhibition by low radiation doses (< or = 1.25 Gy) and return to control level and even higher concentrations by higher doses (> or = 5 Gy). The degree of inhibition did not show any significant time dependence within the experimental time window used. The iNOS-mRNA expression 3-18 h following stimulation and subsequent irradiation was not affected by doses < or = 1.25 Gy. The iNOS-protein expression 6-24 h following stimulation and subsequent irradiation was reduced by doses < or = 1.25 Gy. By contrast, neither HO-1-protein nor HO-1-mRNA expression at the same time points was influenced by these low doses. Conclusion: The inhibitory interference of low radiation doses with the iNOS pathway in inflammatory macrophages appears to be based on radiation effects on the translational and posttranslational control mechanisms of iNOS activity. However, contrary to our working hypothesis this is not related to radiation-induced induction of HO-1 expression and thereby increased degradation of heme which is essential for iNOS activity. Thus, other posttranslational modifications such as the proteasome degradation pathway might be involved. abstract_id: PUBMED:32123858 Pharmacological inhibition of STAT3 pathway ameliorates acute liver injury in vivo via inactivation of inflammatory macrophages and hepatic stellate cells. Liver diseases represent a major health problem worldwide; in particular, acute liver injury is associated with high mortality and morbidity. Inflammatory macrophages and hepatic stellate cells (HSCs) are known to be involved in the pathogenesis of acute liver injury. In this study, we have investigated the implication of STAT3 inhibition in acute liver injury/early fibrogenesis. In fibrotic human livers, we found STAT3 mRNA expression was significantly upregulated and correlated with collagen I expression. In vitro, the STAT3 signaling pathway was found to be activated in TGFβ-activated HSCs and inflammatory macrophages. The STAT3 inhibitor WP1066 significantly inhibited TGFβ-induced collagen I, vimentin and α-SMA expression, and contractility in human HSCs.
In LPS- and IFNγ-induced pro-inflammatory macrophages, WP1066 strongly attenuated nitric-oxide release and expression of major inflammatory markers such as TNF-α, iNOS, CCL2, IL-1β, IL-6, and CCR2. In vivo, in a CCl4-induced acute liver injury mouse model, WP1066 significantly reduced collagen expression, HSCs activation, and intrahepatic inflammation. Finally, in an LPS-induced human hepatic 3D spheroid model, WP1066 inhibited LPS-induced fibrotic and inflammatory parameters. In conclusion, our results suggest that therapeutic inhibition of the STAT3 pathway using WP1066, targeting HSCs and inflammatory macrophages, is a potential pharmacological approach for the treatment of acute liver injury. abstract_id: PUBMED:22219714 New role for L-arginine in regulation of inducible nitric-oxide-synthase-derived superoxide anion production in RAW 264.7 macrophages. Dietary supplementation with L-arginine was shown to improve immune responses in various inflammatory models. However, the molecular mechanisms underlying L-arginine effects on immune cells remain unrecognized. Herein, we tested the hypothesis that a limitation of L-arginine could lead to the uncoupled state of murine macrophage inducible nitric oxide synthase and, therefore, increase inducible nitric-oxide-synthase-derived superoxide anion formation. Importantly, we demonstrated that L-arginine dose- and time-dependently potentiated superoxide anion production in bacterial endotoxin-stimulated macrophages, although it did not influence NADPH oxidase expression and activity. Detailed analysis of macrophage activation showed the time dependence between LPS-induced iNOS expression and increased O(2)(∙-) formation. Moreover, downregulation of macrophage iNOS expression, as well as the inhibition of iNOS activity by NOS inhibitors, unveiled an important role of this enzyme in controlling O(2)(∙-) and peroxynitrite formation during macrophage stimulation. In conclusion, our data demonstrated that simultaneous induction of NADPH oxidase, together with the iNOS enzyme, can result in the uncoupled state of iNOS resulting in the production of functionally important levels of O(2)(∙-) soon after macrophage activation with LPS. Moreover, we demonstrated for the first time that increased concentrations of L-arginine further potentiate iNOS-dependent O(2)(∙-) formation in inflammatory macrophages. abstract_id: PUBMED:284412 Regulation of macrophage migration by products of the complement system. Agents formerly shown to induce rapid macrophage spreading were examined for their ability to modify the migration of macrophages in the capillary tube assay. Products of the activation of the contact phase of blood coagulation as well as the purified component Bb, the large cleavage fragment of factor B of the alternative complement pathway produced a dose-dependent inhibition of migration. In addition, inflammatory macrophages elicited with either a lipopolysaccharide endotoxin or thioglycollate medium exhibited rapid spreading and inhibited migration, whereas resident cells did not. A close correlation existed, therefore, between enhanced spreading and inhibited migration under both in vitro induced and in vivo situations. Cleavage products of component C5 of the classical complement pathway enhanced macrophage migration and did not alter spreading. In mixtures of C5 cleavage products and Bb, the predominant peptide determined the outcome of the reaction.
Factor B, a normal secretory product of macrophages, may represent a common substrate for several of the proteases that induce spreading, inhibit migration, and lead to the generation of the enzymatically active fragment Bb. abstract_id: PUBMED:2992812 Correlation between chemoattractant-induced leukocyte adherence inhibition, macrophage chemotaxis, and macrophage inflammatory responses in vivo. Variations in the magnitude of inflammatory macrophage response in vivo and macrophage chemotaxis in vitro, observed among inbred mouse strains, suggest that these traits are genetically regulated. The development of an A X B series of recombinant inbred (RI) strains of mice derived from the C57BL/6J (B, high responder) and A/J (A, low responder) resulted in the availability of a large number of new inbred strains which express a spectrum of variations in the magnitude of these traits. These strains were used in the present study as a tool to examine the possible correlation between the phenomenon of leukocyte adherence inhibition (LAI) and those of macrophage inflammatory response in vivo and macrophage chemotaxis in vitro, under the assumption that LAI requires the same cellular events as chemotaxis and that LAI resembles, grossly, the accumulation of nonadherent inflammatory cells in vivo. The typing of A X B RI strains for the traits of LAI, macrophage accumulation in vitro, and macrophage inflammatory response in vivo resulted in a correlation between the magnitude of response of those three phenomena in the total of 19 inbred strains tested, thus suggesting that the chemoattractant-induced LAI is biologically related to the events that mediate macrophage chemotaxis in vitro and the macrophage inflammatory response to sterile irritants in vivo. abstract_id: PUBMED:31001070 Natural Diterpenoid Oridonin Ameliorates Experimental Autoimmune Neuritis by Promoting Anti-inflammatory Macrophages Through Blocking Notch Pathway. The diterpenoid compound, Oridonin, extracted from the Chinese herb, Rabdosia rubescens, possesses multiple biological activities and properties. Oridonin exhibited efficient anti-inflammatory activity by inducing a switch in macrophage polarization to the anti-inflammatory phenotype through inhibition of the Notch pathway in our in vitro study; therefore, its potential therapeutic effects were further investigated in the animal model of human Guillain-Barré syndrome (GBS) and other polyneuropathies - experimental autoimmune neuritis (EAN). Either preventive or therapeutic treatments with Oridonin greatly attenuated disease peak severity, suppressed paraparesis, shortened disease duration, and even delayed EAN onset. Progression of neuropathic pain, demyelination, inflammatory cellular accumulations, and inflammatory cytokines in peripheral nerves were significantly attenuated. Meanwhile, accumulation of immune cells in the spinal roots and microglial activation in the lumbar spinal cord were also reduced. Interestingly, Oridonin treatment significantly increased the proportion of anti-inflammatory macrophages and made them locally dominant among all infiltrated macrophages in the peripheral nerves. The down-regulation of local Notch pathway proteins, together with our in vitro results, indicated their possible involvement.
Taken together, our results demonstrated that Oridonin effectively suppressed EAN by attenuating the local inflammatory reaction and increasing the proportion of immune-regulating macrophages in the peripheral nerves, possibly through blockage of the Notch pathway, which suggests Oridonin as a potential therapeutic candidate for human GBS and neuropathies. abstract_id: PUBMED:36444908 Low-Intensity Focused Ultrasound Ameliorates Ischemic Heart Failure Related to the Cholinergic Anti-Inflammatory Pathway. Objectives: This study aims to determine the effect of low-intensity focused ultrasound (LIFU) in ischemic heart failure (IHF) and explore the potential neuroimmune mechanism. Methods: Sprague-Dawley rats were subjected to ultrasound (US) with specific parameters, and electrocardiograms were recorded to analyze the effect of LIFU and/or vagal denervation on heart rate. Thereafter, myocardial infarction (MI) was induced by left anterior artery ligation, and LIFU was performed three times a day for 25 days after MI. Echocardiography, Masson staining, and ELISA were used to evaluate the effect of LIFU on the structure and function of the heart. Finally, ELISA, flow cytometry, qRT-PCR, and Western blot analysis were performed to determine the effect of LIFU on the inflammation and the expression of the cholinergic anti-inflammatory pathway (CAP)-related mediators. Results: LIFU reduced heart rate in rats (control vs LIFU, P < .01), and vagotomy (VT) eliminated this effect of LIFU on heart rate (VT vs LIFU + VT, P > .01). LIFU ameliorated IHF in terms of cardiac structure and function (MI vs MI + LIFU, P < .01), but VT abrogated the beneficial effect of LIFU (MI + VT vs MI + LIFU + VT, P > .01). Treatment with LIFU decreased levels of inflammatory cytokines, increased the proportion of anti-inflammatory macrophages, and increased expression of CAP-related mediators (MI vs MI + LIFU, P < .01). Conclusions: LIFU ameliorates IHF, and the CAP plays a promising role in this effect. LIFU has the potential to be a novel nonpharmacological and noninvasive therapy for the treatment of coronary artery disease and other cardiovascular diseases. abstract_id: PUBMED:28964797 ω-Alkynyl arachidonic acid promotes anti-inflammatory macrophage M2 polarization against acute myocardial infarction via regulating the cross-talk between PKM2, HIF-1α and iNOS. Macrophage polarization determines the timing for the switch from the inflammation phase to the inflammation resolution phase after acute myocardial infarction. The aim of the present study was to investigate whether ω-alkynyl arachidonic acid could mitigate the inflammatory lipid mediators in the regulation of macrophage phenotypes and functions with a special regard to myocardial infarction. We initially discovered that ω-alkynyl arachidonic acid selectively suppressed the up-regulation of inducible nitric oxide synthase (iNOS) over cyclooxygenase-2 (COX-2) in LPS-stimulated macrophages. ω-Alkynyl arachidonic acid also reduced the expression of macrophage M1 biomarkers (e.g., TNF-α, CXCL10, iNOS and IL-6) but increased the expression of macrophage M2 biomarkers (e.g., IL-10 and arginase-1) in LPS-stimulated macrophages. Moreover, ω-alkynyl arachidonic acid markedly enhanced the phagocytotic activity of macrophages against fluorescently-labeled beads or apoptotic H9c2 cardiac cells. We further investigated the in vivo cardioprotective activities of ω-alkynyl arachidonic acid in a mouse model of myocardial infarction.
ω-Alkynyl arachidonic acid indeed reduced infarct size, cardiac damage and the leakage of the myocardial enzyme CK-MB. Mechanistic studies revealed that ω-alkynyl arachidonic acid suppressed the overexpression and nuclear translocation of the glycolytic enzyme PKM2 in LPS-stimulated macrophages. Furthermore, a co-immunoprecipitation assay suggested that ω-alkynyl arachidonic acid disrupted the interaction between PKM2 and HIF-1α. Consequently, ω-alkynyl arachidonic acid diminished HIF-1α binding to the HRE sequence in the iNOS promoter in response to LPS stimulation. Collectively, ω-alkynyl arachidonic acid may promote the anti-inflammatory M2 polarization of macrophages in acute myocardial infarction via regulating the cross-talk between PKM2, HIF-1α and iNOS. abstract_id: PUBMED:3089649 Regulation of macrophage accessory cell activity by mycobacteria. II. In vitro inhibition of Ia expression by Mycobacterium microti. In our preceding study, we showed that infection of mice with Mycobacterium microti leads to a dramatic increase in Ia expression on local inflammatory macrophage populations. However, the majority of these cells did not contain intracellular organisms. To evaluate the effect of parasitism of macrophages by M. microti, Ia-induction experiments were performed in vitro. We show here that Ia expression is increased on peritoneal macrophages treated with either crude lymphokine preparations or recombinant gamma-interferon (gamma-IFN) and that this expression is suppressed by M. microti in a dose-dependent fashion. The degree of suppression varied between macrophage populations and could be achieved to a lesser extent with killed organisms. It was partially reversed with indomethacin but only poorly so at high infection levels. Inhibition of Ia expression may be of importance in the generation and maintenance of chronic infection. abstract_id: PUBMED:38367277 Inhibition of mevalonate pathway by macrophage-specific delivery of atorvastatin prevents their pro-inflammatory polarisation. Adjustment of the cellular metabolism of pro-inflammatory macrophages is essential for their bactericidal function; however, it underlies the development of many human diseases if induced chronically. Therefore, intervention of macrophage metabolic polarisation has been recognised as a potent strategy for their treatment. Although many small-molecule inhibitors affecting macrophage metabolism have been identified, their in vivo administration requires a tool for macrophage-specific delivery to limit their potential side effects. Here, we establish Drosophila melanogaster as a simple experimental model for in vivo testing of macrophage-specific delivery tools. We found that yeast-derived glucan particles (GPs) are suitable for macrophage-specific delivery of small-molecule inhibitors. Systemic administration of GPs loaded with atorvastatin, the inhibitor of hydroxy-methyl-glutaryl-CoA reductase (Hmgcr), leads to intervention in the mevalonate pathway specifically in macrophages, without affecting HMGCR activity in other tissues. Using this tool, we demonstrate that the mevalonate pathway is essential for macrophage pro-inflammatory polarisation and for the individual's survival of infection. Answer: The inhibition of the inducible nitric oxide synthase (iNOS) pathway in inflammatory macrophages by low-dose X-irradiation in vitro was investigated to determine if there is a time dependence to this effect.
The study found that low radiation doses (≤ 1.25 Gy), when applied up to 6 hours before or after stimulation, inhibited the iNOS pathway in inflammatory macrophages. However, the degree of inhibition did not show any significant time dependence within the experimental time window used, which included irradiation at 6, 4, 2 hours before and 0, 2, 4, 6 hours after stimulation. The iNOS-mRNA expression 3-18 hours following stimulation and subsequent irradiation was not affected by doses ≤ 1.25 Gy, while the iNOS-protein expression 6-24 hours following stimulation and subsequent irradiation was reduced by these low doses. The study concluded that the inhibitory interference of low radiation doses with the iNOS pathway appears to be based on radiation effects on the translational and posttranslational control mechanisms of iNOS activity, rather than on radiation-induced induction of heme oxygenase 1 (HO-1) expression (PUBMED:12627258).
Instruction: Does sentinel node biopsy improve the management of endometrial cancer? Abstracts: abstract_id: PUBMED:36362690 Applications and Safety of Sentinel Lymph Node Biopsy in Endometrial Cancer. Lymph node status is important in predicting the prognosis and guiding adjuvant treatment in endometrial cancer. However, previous studies showed that systematic lymphadenectomy conferred no therapeutic value in clinically early-stage endometrial cancer but might lead to substantial morbidity and impact on the quality of life of the patients. The sentinel lymph node is the first lymph node that tumor cells drain to, and sentinel lymph node biopsy has emerged as an acceptable alternative to full lymphadenectomy in both low-risk and high-risk endometrial cancer. Evidence has demonstrated a high detection rate, sensitivity and negative predictive value of sentinel lymph node biopsy. It can also reduce surgical morbidity and improve the detection of lymph node metastases compared with systematic lymphadenectomy. This review summarizes the current techniques of sentinel lymph node mapping, the applications and oncological outcomes of sentinel lymph node biopsy in low-risk and high-risk endometrial cancer, and the management of isolated tumor cells in sentinel lymph nodes. We also illustrate a revised sentinel lymph node biopsy algorithm and advocate repeating the tracer injection and exploring the presacral and paraaortic areas if sentinel lymph nodes are not found in the hemipelvis. abstract_id: PUBMED:15380747 Value of the sentinel node biopsy in uterine cancers. In cancer research, regional lymph node status is a major prognostic factor and a decision criterion for adjuvant therapy. The sentinel node procedure, which has emerged to reduce the morbidity of extensive lymphadenectomy, remains a major step in the surgical management of various cancers. The sentinel node procedure has become a standard technique for the determination of the nodal stage of the disease in patients with melanoma, vulvar cancer and, recently, breast cancer. In cervical and endometrial cancers, the sentinel node biopsy is still at the stage of feasibility. In this article, we review the technical aspects, results and clinical implications of the sentinel node procedure in cervical and endometrial cancers. abstract_id: PUBMED:28213057 Utilization of sentinel lymph node biopsy for uterine cancer. Background: To limit the potential short- and long-term morbidity of lymphadenectomy, sentinel lymph node biopsy has been proposed for endometrial cancer. The principle of sentinel lymph node biopsy relies on removal of a small number of lymph nodes that are the first drainage basins from a tumor and thus the most likely to harbor tumor cells. While the procedure may reduce morbidity, efficacy data are limited and little is known about how commonly the procedure is performed. Objective: We examined the patterns and predictors of use of sentinel lymph node biopsy and outcomes of the procedure in women with endometrial cancer who underwent hysterectomy. Study Design: We used the Perspective database to identify women with uterine cancer who underwent hysterectomy from 2011 through 2015. Billing and charge codes were used to classify women as having undergone lymphadenectomy, sentinel lymph node biopsy, or no nodal assessment. Multivariable models were used to examine clinical, demographic, and hospital characteristics associated with use of sentinel lymph node biopsy. Length of stay and cost were compared among the different methods of nodal assessment.
Results: Among 28,362 patients, 9327 (32.9%) did not undergo nodal assessment, 17,669 (62.3%) underwent lymphadenectomy, and 1366 (4.8%) underwent sentinel lymph node biopsy. Sentinel lymph node biopsy was performed in 1.3% (95% confidence interval, 1.0-1.6%) of abdominal hysterectomies, 3.4% (95% confidence interval, 2.7-4.1%) of laparoscopic hysterectomies, and 7.5% (95% confidence interval, 7.0-8.0%) of robotic-assisted hysterectomies. In a multivariable model, more recent year of surgery was associated with performance of sentinel lymph node biopsy. Compared to abdominal hysterectomy, those undergoing laparoscopic (adjusted risk ratio, 2.45; 95% confidence interval, 1.89-3.18) and robotic-assisted (adjusted risk ratio, 2.69; 95% confidence interval, 2.19-3.30) hysterectomy were more likely to undergo sentinel lymph node biopsy. Among women who underwent minimally invasive hysterectomy, length of stay and cost were lower for sentinel lymph node biopsy compared to lymphadenectomy. Conclusion: The use of sentinel lymph node biopsy for endometrial cancer increased from 2011 through 2015. The increased use was most notable in women who underwent a robotic-assisted hysterectomy. abstract_id: PUBMED:34035551 Status of Sentinel Lymph Node Biopsy in Endometrial Cancer. The role of lymphadenectomy in surgical staging remains one of the biggest controversies in the management of endometrial cancer. The concept of sentinel lymph node biopsy in endometrial cancer has been evaluated for a number of years, with promising sensitivity rates and negative predictive values. The possibility of adequate staging while avoiding systematic lymphadenectomy leads to a significant reduction in the rate of peri- and postoperative morbidity. Nevertheless, the status of sentinel lymph node biopsy in endometrial cancer has not yet been fully elucidated and is variously assessed internationally. According to current European guidelines and recommendations, sentinel lymph node biopsy in endometrial cancer should be performed only in the context of clinical studies. In this review article, the developments of the past decade are explored concisely. In addition, current data regarding the technical aspects, accuracy and prognostic relevance of sentinel lymph node biopsy are explained and evaluated critically. abstract_id: PUBMED:38041023 A multicenter noninferior randomized controlled study of sentinel lymph node biopsy alone versus sentinel lymph node biopsy plus lymphadenectomy for patients with stage I endometrial cancer, INSEC trial concept. Background: Up to the present time, there has remained a lack of strong evidence as to whether sentinel lymph node biopsy can replace lymphadenectomy for early endometrial cancer. The traditional surgery for endometrial cancer includes pelvic lymphadenectomy and paraaortic lymph node resection, but complications often seriously affect patients' quality of life. Two randomized controlled trials with large samples have proved that lymphadenectomy does not improve the overall recurrence rate and survival rate of patients. On the contrary, it increases the incidence of complications and even mortality. The current trial is designed to clarify whether sentinel lymph node biopsy can replace lymphadenectomy for early endometrial cancer patients with negative lymph nodes. Methods: This study is a randomized, open-label, multicenter and non-inferiority controlled clinical trial in China. 
Potential participants will be patients with pathologically confirmed endometrial cancer at the Zhejiang Cancer Hospital, Jiaxing Maternity and Child Health Care Hospital, and the First Hospital of Jiaxing in China. The total sample size for this study is 722. Patients will be randomly assigned in a 1:1 ratio to two groups. Patients in one group will undergo sentinel lymph node biopsy + total hysterectomy + bilateral salpingo-oophorectomy ± paraaortic lymph node resection. Patients in the other group will undergo sentinel lymph node biopsy + total hysterectomy + bilateral salpingo-oophorectomy + pelvic lymphadenectomy ± paraaortic lymph node resection. The 3-year disease-free survival rate, overall survival rate, quality of life (assessed using the EORTC QLQ-C30 + QLQ-CX24), and perioperative-related indices of the two groups will be compared. Results: We expect to find that for patients with early endometrial cancer, the 3-year disease-free survival rate following sentinel lymph node biopsy with indocyanine green combined with near-infrared fluorescence imaging is similar to that following lymphadenectomy. The operation time, as well as the incidence of pelvic lymphocyst, lymphedema of the lower limb, and edema of the vulva, in patients who only undergo sentinel lymph node biopsy are expected to be significantly lower than in patients who undergo lymphadenectomy. The quality of life of patients who undergo sentinel lymph node biopsy alone will be significantly better than that of patients who undergo lymph node dissection. Conclusion: This will prove that the prognosis of sentinel lymph node biopsy alone with indocyanine green combined with near-infrared fluorescence imaging is not inferior to that of sentinel lymph node biopsy plus lymphadenectomy for early-stage endometrial cancer with negative nodal assessment intraoperatively. In addition, sentinel lymph node biopsy alone with indocyanine green combined with near-infrared fluorescence imaging results in fewer surgical complications and gives patients better quality of life. Trial Registration: chictr.org.cn, ChiCTR1900023161. Registered 14 May 2019, http://www.chictr.org.cn/edit.aspx?pid=38659&htm=4. abstract_id: PUBMED:36750844 Characteristics of systematic lymph node dissection and influencing factors of sentinel lymph node biopsy using carbon nanoparticles in endometrial carcinoma: a single-center study. Background: Carbon nanoparticles (CNPs) are a new tracer for lymph node mapping, which can quickly reach and visualize lymph nodes through a lymphatic network. This research investigated the characteristics of systematic lymph node dissection and sentinel lymph node biopsy mapped with CNPs in endometrial carcinoma. Methods: We first applied CNPs to systematic lymph node dissection in 18 endometrial carcinoma patients as the study group, while another 18 endometrial carcinoma patients who were not injected with anything served as the control group. Then, we applied CNPs to sentinel lymph node biopsy in 54 endometrial carcinoma patients. All 54 patients received systematic lymph node dissection after sentinel lymph node biopsy. The detection rate, sensitivity, specificity, and accuracy of systematic lymph node dissection and sentinel lymph node biopsy by CNPs were respectively analyzed. A nomogram model for predicting the success of sentinel lymph node mapping was established. Results: The average number of lymph nodes removed in the CNP-labeled study group was higher than that in the control group (p<0.001).
CNPs increased the number of retrieved lymph nodes with a diameter ≤ 0.5 cm. The detection rate, sensitivity, specificity, and accuracy of sentinel lymph node biopsy by CNPs for endometrial carcinoma were 70.4%, 100%, 100%, and 100%, respectively. The nomogram model included the factors of long menopause time, cervical cyst, and hard cervical texture, and the area under the ROC curve was 0.816. Conclusions: CNPs improve the detection rate of small lymph nodes. CNPs can trace sentinel lymph nodes in evaluating lymph node metastasis in endometrial carcinoma. abstract_id: PUBMED:35576340 Utilization and Outcomes of Sentinel Lymph Node Biopsy for Early Endometrial Cancer. Objective: To examine trends, characteristics, and oncologic outcomes of sentinel lymph node biopsy for early endometrial cancer. Methods: This observational study queried the National Cancer Institute's Surveillance, Epidemiology, and End Results Program by examining 83,139 women with endometrial cancer who underwent primary hysterectomy with nodal evaluation for T1 disease from 2003 to 2018. Primary outcome measures were the temporal trends in utilization of sentinel lymph node biopsy and patient characteristics associated with sentinel lymph node biopsy use, assessed by multivariable binary logistic regression models. The secondary outcome measure was endometrial cancer-specific mortality associated with sentinel lymph node biopsy, assessed by propensity score inverse probability of treatment weighting. Results: The utilization of sentinel lymph node biopsy increased from 0.2 to 29.7% from 2005 to 2018 (P<.001). The uptake was higher for women with endometrioid (0.3-31.6% between 2005 and 2018) compared with nonendometrioid (0.6-21.0% between 2006 and 2018) histologic subtypes (both P<.001). In a multivariable analysis, more recent year of surgery, endometrioid histology, well-differentiated tumors, T1a disease, and smaller tumor size were independently associated with sentinel lymph node biopsy use (P<.05). Performance of sentinel lymph node biopsy was not associated with increased endometrial cancer-specific mortality compared with lymphadenectomy for endometrioid tumors (subdistribution hazard ratio [HR] 0.96, 95% CI 0.82-1.13) or nonendometrioid tumors (subdistribution HR 0.85, 95% CI 0.69-1.04). For low-risk endometrial cancer, the increase in sentinel lymph node biopsy resulted in a 15.3 percentage-point (1.4-fold) increase in surgical nodal evaluation by 2018 (expected vs observed rates, 37.8 vs 53.1%). Conclusion: The landscape of surgical nodal evaluation is shifting from lymphadenectomy to sentinel lymph node biopsy for early endometrial cancer in the United States, with no indication of a negative effect on cancer-specific survival. abstract_id: PUBMED:25502426 Sentinel lymph node biopsy in the management of gynecologic cancer. Purpose Of Review: To summarize current knowledge and recent advances in the sentinel lymph node (SLN) concept in the three most frequent gynecological cancers. Recent Findings: In cervical cancer, SLN biopsy and ultrastaging has high sensitivity in lymph node staging in patients with bilaterally detected SLN. The presence of micrometastasis is associated with shortened survival. In endometrial cancer, SLN biopsy incorporating an institutional mapping algorithm and ultrastaging has been shown to significantly reduce false-negative rates and increase sensitivity and negative predictive value.
Summary: SLN biopsy and ultrastaging are useful in the current management of patients with early-stage cervical cancer for multiple reasons, such as the reliable detection of key lymph nodes, identification of micrometastasis and intraoperative triage of patients. Although a complete or selective pelvic and paraaortic lymphadenectomy for adequate staging remains the standard treatment approach in patients with early-stage endometrial cancer, SLN biopsy has been shown to be safe and effective in detecting lymph node metastases. The application of the SLN procedure is safe in patients with early-stage unifocal squamous cell cancer of the vulva (<4 cm) and no suspicious enlarged lymph nodes at imaging. abstract_id: PUBMED:28921419 Sentinel Lymph Node Biopsy in Endometrial Cancer: a New Standard of Care? Opinion Statement: Lymph node status is one of the most important factors in determining prognosis and the need for adjuvant treatment in endometrial cancer (EMCA). Unfortunately, full lymphadenectomy bears significant surgical and postoperative risks. The majority of patients with clinical stage I disease will not have metastatic disease; thus, a full lymphadenectomy only increases morbidity in this population of patients. The use of the sentinel lymph node (SLN) biopsy has emerged as an alternative to complete lymphadenectomy in EMCA. By removing the highest yield lymph nodes, the SLN biopsy has the same diagnostic ability as lymphadenectomy while minimizing morbidity. The sentinel lymph node identification with robotic fluorescence imaging for detecting metastatic endometrial and cervical cancer (FIRES) trial, published this year, is the largest prospective, multi-institution trial investigating the accuracy of the SLN biopsy for endometrial and cervical cancer. Results of this trial found an excellent sensitivity (97.2%) and false-negative rate (3%) with the technique. The conclusions from the FIRES trial and those of a recent meta-analysis are that SLN biopsy has an acceptable diagnostic accuracy in detecting lymphatic metastases, and can replace lymphadenectomy for this diagnostic purpose. There remains controversy surrounding the SLN biopsy in high-risk disease and the use of adjuvant therapy in the setting of low-volume disease detected with ultrastaging. Current data suggest that the technique is accurate in high-risk disease and that the increased detection of metastasis helps guide adjuvant therapy such that oncologic outcomes are likely not affected by forgoing a full lymphadenectomy. Further prospective study is needed to investigate the impact of low-volume metastatic disease on oncologic outcomes and the need for adjuvant therapy in these patients.
Sentinel lymph node mapping has the potential to provide the necessary pathological lymph nodal information at a reasonable cost in terms of morbidity. In this review, the most relevant aspects of sentinel lymph node mapping in endometrial cancer are summarized. Furthermore, the performance in terms of false-negative rates and detection rates, the clinical value of pathological ultrastaging, and its clinical applicability in different scenarios, including patients preoperatively considered to be at low or at high risk, are discussed. Oncological outcomes of patients who underwent sentinel lymph node mapping, as compared to full lymphadenectomy, are presented, as well as technical aspects to improve the performance of the surgical technique. Answer: Sentinel lymph node biopsy has emerged as an important technique in the management of endometrial cancer, offering several potential benefits over systematic lymphadenectomy. The sentinel lymph node is the first node to which cancer cells are likely to spread from the primary tumor, and its biopsy can provide valuable prognostic information and guide adjuvant treatment decisions (PUBMED:36362690). Evidence suggests that sentinel lymph node biopsy has a high detection rate, sensitivity, and negative predictive value, which can improve the detection of lymph node metastases compared to systematic lymphadenectomy (PUBMED:36362690). Additionally, it can reduce surgical morbidity, thus potentially improving the quality of life for patients (PUBMED:36362690). The technique has been shown to be applicable in both low-risk and high-risk endometrial cancer cases (PUBMED:36362690). The use of sentinel lymph node biopsy for endometrial cancer has increased over time, particularly with the advent of minimally invasive surgical techniques such as laparoscopic and robotic-assisted hysterectomies (PUBMED:28213057). This increase in use is associated with shorter hospital stays and lower costs compared to lymphadenectomy (PUBMED:28213057). Despite its advantages, the status of sentinel lymph node biopsy in endometrial cancer has not been fully established internationally, and current European guidelines recommend its use only within clinical studies (PUBMED:34035551). However, ongoing research, such as the INSEC trial, aims to provide stronger evidence regarding whether sentinel lymph node biopsy can replace lymphadenectomy for early endometrial cancer patients with negative lymph nodes (PUBMED:38041023). New tracers like carbon nanoparticles have been used to improve the detection rate of small lymph nodes and have shown promising results in sentinel lymph node biopsy for endometrial carcinoma (PUBMED:36750844). The utilization of sentinel lymph node biopsy has been increasing in the United States, with no indication of a negative effect on cancer-specific survival (PUBMED:35576340). In summary, sentinel lymph node biopsy appears to improve the management of endometrial cancer by providing accurate staging with less morbidity, which may lead to better quality of life for patients. However, further research and consensus on its use in various clinical scenarios are needed (PUBMED:28921419; PUBMED:30318882).
Instruction: Is routine chest radiography necessary for the initial evaluation of fever in neutropenic children with cancer? Abstracts: abstract_id: PUBMED:15266405 Is routine chest radiography necessary for the initial evaluation of fever in neutropenic children with cancer? Background: The yield of routine chest radiography (CXR) as part of the initial management of febrile neutropenic pediatric oncology patients is questionable. Procedure: We retrospectively analyzed the clinical records of neutropenic (absolute neutrophil count ≤ 0.5 × 10⁹/L) children with cancer, admitted with oral temperature ≥ 38 °C to our institution, between January 2001 and October 2002. Following admission, patients received tobramycin plus either piperacillin or ticarcillin-clavulanic acid. Admission routine CXRs were reviewed. Clinical and radiological features were compared with the discharge diagnosis. Age, underlying disease, and the presence of pulmonary symptoms or signs were studied as possible predictors of CXR findings related to pneumonia. Results: In total, 88 patients experienced 170 episodes of fever. A routine admission CXR was obtained for 157 of the episodes. Radiologists found 20 (12.7%) abnormal CXRs (6 with a segmental or lobar consolidation considered to represent pneumonia). In addition, two patients with abnormal admission CXRs developed lobar consolidation on a repeat film, later in their hospital course. There were no differences in age and type of underlying disease between children with or without pneumonia. Respiratory symptoms were initially present in 58 cases. Seven (12%) had pneumonia. Among the 99 asymptomatic cases, only one (1%) patient had pneumonia (P = 0.0041). This child had a positive blood culture for P. aeruginosa at the time of admission. None of the children had initial therapy modified on the basis of radiologic findings. Conclusion: In this study, pneumonia is an unusual cause of fever (5%), especially in the absence of respiratory signs or symptoms (1%). Admission CXR should be reserved for the neutropenic pediatric oncology patient presenting with fever and abnormal respiratory findings. abstract_id: PUBMED:22278307 Diagnostic value of routine chest radiography in febrile, neutropenic children for early detection of pneumonia and mould infections. Background: Despite recent studies failing to demonstrate the value of routine chest radiography (CXR) in the initial evaluation of the febrile neutropenic patient with cancer, this screening test is advocated by some experts. We evaluated the benefits of CXR for early diagnosis of pulmonary infection at St. Jude Children's Research Hospital (SJCRH), with emphasis on early recognition of mould infections. Patients And Methods: We reviewed the courses of 200 consecutive febrile neutropenic pediatric patients to determine if routine CXR at initial evaluation was useful in the identification of clinically occult pneumonia. We also reviewed all cases of proven or probable mould infections from the opening of SJCRH in 1962 until 1998, when routine CXR was no longer practiced in our institution, to identify cases that were first recognized by routine CXR. Results: Of 200 febrile neutropenic patients, pulmonary abnormalities consistent with pneumonia were detected by routine CXR in only five patients without pulmonary signs or symptoms. In only one case was a change in management considered.
Of the 70 patients with pulmonary mould infection identified from 1962 to 1998, routine CXR was performed in 45 patients at the onset of a febrile, neutropenic episode in which a mould infection was diagnosed. Routine CXR was pivotal in the recognition of the mould infection in only two cases over this 36-year period. Conclusion: CXR is warranted in the evaluation of the newly febrile neutropenic pediatric oncology patient only when respiratory signs or symptoms are present. abstract_id: PUBMED:3183701 Use of routine chest radiography in the evaluation of fever in neutropenic pediatric oncology patients. Evaluation of febrile episodes in children who have become neutropenic during treatment for malignant disease has traditionally included radiography of the chest. It has been our impression that the yield of such examination is low. To test this hypothesis, we reviewed all chest radiographs (CXRs) obtained in the above setting in our institution over the last 3 years. These radiographs were independently reviewed by two of us (R.C., J.F.). Sixty-one patients experienced 134 febrile neutropenic episodes for which a CXR was obtained. Only eight (6%) of these films revealed any abnormality. After careful review, it was apparent that four of these radiographs did not represent an infectious process. Thus only four of 134 films (2.9%) indicated pulmonary infection as the probable cause of fever in the patient. All four of these patients had prominent respiratory signs or symptoms. Of patients who were febrile but without pulmonary signs/symptoms, only one of 49 had an abnormal radiograph. We feel that such a low yield (at most 2%) calls into question the routine practice of obtaining a CXR in the febrile neutropenic child who is otherwise asymptomatic. abstract_id: PUBMED:14602135 Routine radiography does not have a role in the diagnostic evaluation of ambulatory adult febrile neutropenic cancer patients. Cancer patients treated with chemotherapy are susceptible to bacterial infections. When an adult patient presents with febrile neutropenia, standard diagnostic care includes physical examination, laboratory diagnostics, chest X-ray (CXR) and sinus radiography. However, the yield of routine radiography in the diagnostic evaluation of ambulatory adult febrile neutropenic patients with normal findings at their physical examination is questionable. Two CXRs and one sinus X-ray were obtained in 109 and 106 febrile neutropenic episodes after chemotherapy in ambulatory adult patients who had no clinical signs suggesting pulmonary infection or sinusitis. We found that in only two of 109 (1.8%; 95% Confidence Interval (CI): 0.3-5.8%) febrile neutropenic episodes without clinical signs of new pulmonary disease, the CXR showed a consolidation suggesting pneumonia. In addition, in five of 88 (5.7%; 95% CI: 2.2-12.0%) febrile episodes in asymptomatic patients, sinus X-ray suggested sinusitis. In none of these seven episodes was a change of antibiotic therapy necessary. In the absence of clinical signs indicating pneumonia or sinusitis, the yield of CXR and sinus radiography in ambulatory adult cancer patients presenting with febrile neutropenia is minimal; CXR and sinus radiography should no longer be performed on a routine basis. abstract_id: PUBMED:9305718 Routine chest radiography of children with cancer hospitalized for fever and neutropenia: is it really necessary?
Background: Although there have been two reports suggesting that it is not necessary to obtain chest radiographs of all children with cancer who are hospitalized for fever and neutropenia, this practice continues. Methods: Fifty-four children with cancer who were hospitalized for 108 episodes of fever and neutropenia were followed prospectively. Data on their respiratory signs and symptoms were collected on admission and throughout their hospital course. Chest radiographs were obtained at the discretion of the pediatric oncology attending physician and were interpreted by a pediatric radiologist. Results: Pneumonia was documented by chest radiograph in 4 of the 108 episodes (3.7%) of fever and neutropenia. In 10 of the 108 episodes, the children had abnormal respiratory findings; this group included the 4 children with pneumonia documented by chest X-ray examination. None of the children with normal respiratory findings hospitalized for the remaining 98 episodes had pneumonia. Chest radiographs were not obtained for 40 of the 108 episodes of fever and neutropenia. None of the children with these 40 episodes had respiratory abnormalities and all recovered without a problem. Chest radiographs were obtained for the remaining 68 episodes of fever and neutropenia. Of the four children in this group with pneumonia documented by chest X-ray, two were diagnosed on admission, and another two whose initial radiographs were normal developed pneumonia later in their hospital course. There were no differences in age, absolute neutrophil count, temperature at presentation, or type of malignancy between the children who had chest radiographs and the children who did not. Conclusions: Pneumonia is an uncommon cause of infection in children with cancer hospitalized for fever and neutropenia. Therefore, the authors believe it is not necessary to obtain a chest radiograph in children with no respiratory abnormalities who are hospitalized for fever and neutropenia. abstract_id: PUBMED:15481080 Is routine chest radiography necessary for the initial evaluation of fever in neutropenic children with cancer? N/A abstract_id: PUBMED:1913490 The yield of routine chest radiography in children with cancer hospitalized for fever and neutropenia. A routine admission chest radiograph (CXR) in pediatric patients with cancer who are admitted to the hospital for fever and neutropenia has been advised because the signs and symptoms of pneumonia may be absent. The authors studied 131 consecutive patient admissions for fever and neutropenia to evaluate the diagnostic yield of routine CXR. All patients had a complete history, physical examination, complete blood count, blood culture, urinalysis, urine culture, and CXR. Patients routinely started ceftazidime monotherapy. Results of the CXR were correlated with the presence or absence of signs and symptoms of respiratory disease. Of 128 CXR results, 26 (20%) were abnormal (13 with known malignant disease, 2 with atelectasis, 3 with peribronchial cuffing, and 8 with pneumonia [6%]). Three patients with pneumonia were asymptomatic. Therefore, only 3 of 128 patients (2.3%) had pneumonia on CXR not suspected by physical examination. None would have had initial therapy modified based on the CXR finding alone. The authors concluded that the incidence of pneumonia in a child with fever and neutropenia is low and that routine CXR at diagnostic evaluation is unnecessary in the asymptomatic ambulatory patient.
abstract_id: PUBMED:32476589 A clinical approach to non-neutropenic fever in children with cancer. Background: There are a limited number of studies that address non-neutropenic fever episodes in children with cancer, and no standard approach exists. Method: We opted to retrospectively analyze the efficacy of the current clinical approach for management of non-neutropenic fever episodes and the associated risk factors among children with cancer at the Princess Noorah Oncology Center from May 2016 through December 2017. Results: A total of 480 non-neutropenic fever episodes were identified in 131 children, of which 62 episodes were triaged as high-risk non-neutropenic fever and 418 as low-risk non-neutropenic fever. Of those 480 non-neutropenic fever episodes, 361 (75.2%) were associated with the presence of central venous catheters. Failure of ceftriaxone monotherapy was observed in 75.6% of episodes (11.7% in high-risk non-neutropenic fever, with a mean C-reactive protein level of 21.1 (±23.2) mmol/L, and 63.9% in low-risk non-neutropenic fever, with a mean C-reactive protein level of 17.6 (±53.9) mmol/L). The overall bacteremia rate was 14.4%. The organisms isolated were mainly high-risk organisms, in 59 non-neutropenic fever episodes (85.5%); OR 1.78 (95% CI: 0.45-7.04), p = 0.41. Of note, all bacteremias were associated with the presence of a central venous catheter (100%). Of all the examined risk factors for outpatient treatment failure in low-risk non-neutropenic fever, only prolonged fever of more than three days was significantly associated with bacteremia (OR 8.107 [95% CI: 1.744-37.691], p = 0.008). Noteworthy is that almost 43% of non-neutropenic fever episodes were associated with respiratory symptoms. This study provides a baseline for future prospective research assessing the pattern of non-neutropenic fever by focusing on associated risk factors. abstract_id: PUBMED:2012451 The role of the chest roentgenogram in febrile neutropenic patients. In a retrospective review of patients with neutropenia and fever, we sought to determine how often roentgenograms detected pulmonary disease, especially pneumonia, not suggested by signs and symptoms. Further, we sought to determine how often therapy was changed as a result of roentgenographic findings. Overall, 41 (22%) of 187 chest roentgenograms obtained during initial febrile episodes, recurrent fevers, or persistent fevers were abnormal. While most patients had signs and symptoms suggesting the presence of pulmonary disease, 17% had roentgenographic abnormalities detected in the absence of such findings. During initial febrile episodes, therapy was not changed in response to findings on the chest roentgenogram. However, during episodes of persistent or recurrent fever, findings on chest roentgenograms led to changes in therapy in eight (61%) of 13 episodes, of which six (40%) resulted in clinical improvement. Chest roentgenograms were therefore found to be an important diagnostic tool in evaluating recurrent or persistent fever in the neutropenic patient but of little use during initial febrile episodes. abstract_id: PUBMED:23830647 Infection in neutropenic patients with cancer. Neutropenic fever sepsis syndromes are common among patients with cancer who are receiving intensive cytotoxic systemic therapy. Recognition of the syndromes and timely initial antibacterial therapy are critical for survival and treatment success.
Outcomes are linked to myeloid reconstitution and recovery from neutropenia, control of active comorbidities, and appropriate treatment of the infections that underlie the sepsis syndrome. Hematologists and oncologists must be clear about the prognosis and treatment goals to work effectively with critical care physicians toward the best outcomes for patients with cancer who develop neutropenic sepsis syndromes. Answer: Routine chest radiography (CXR) for the initial evaluation of fever in neutropenic children with cancer is not deemed necessary in the absence of respiratory signs or symptoms. Several studies have shown that the yield of routine CXR in detecting pneumonia or other pulmonary abnormalities in this patient population is low, especially when there are no respiratory symptoms present. One study found that pneumonia was an unusual cause of fever, occurring in only 5% of cases, and was even less common (1%) in patients without respiratory signs or symptoms. The study concluded that admission CXR should be reserved for neutropenic pediatric oncology patients presenting with fever and abnormal respiratory findings (PUBMED:15266405). Another study at St. Jude Children's Research Hospital found that routine CXR at initial evaluation was useful in identifying clinically occult pneumonia in only a small number of patients without pulmonary signs or symptoms. Over a 36-year period, routine CXR was pivotal in the recognition of mould infection in only two cases, suggesting that CXR is warranted only when respiratory signs or symptoms are present (PUBMED:22278307). Similarly, a retrospective review of chest radiographs obtained for febrile neutropenic episodes revealed that only a small percentage indicated pulmonary infection as the probable cause of fever, and all patients with such infections had prominent respiratory signs or symptoms. This led to the conclusion that routine CXR in asymptomatic febrile neutropenic children is not justified (PUBMED:3183701). Furthermore, a prospective study found that pneumonia was documented by chest radiograph in only a small fraction of episodes of fever and neutropenia, and none of the children with normal respiratory findings had pneumonia. This supports the view that it is not necessary to obtain a chest radiograph in children with no respiratory abnormalities who are hospitalized for fever and neutropenia (PUBMED:9305718). In summary, the evidence suggests that routine chest radiography is not necessary for the initial evaluation of fever in neutropenic children with cancer unless they present with respiratory signs or symptoms.
Instruction: A case study of outsourced primary healthcare services in Sindh, Pakistan: is this a real reform? Abstracts: abstract_id: PUBMED:24965769 A case study of outsourced primary healthcare services in Sindh, Pakistan: is this a real reform? Background: For the past decade, low- and middle-income countries have shown a rising trend of contracting out their primary healthcare services to NGOs. In Pakistan, the public sector often lacks the capacity to effectively and equitably manage healthcare services. This led the government to outsource the administration of primary healthcare services to a semi-autonomous government entity, the Peoples' Primary Healthcare Initiative (PPHI). This small-scale study assessed the quality of healthcare services at the Basic Health Units (BHUs) contracted to the PPHI and compared it with that of BHUs managed by the local district government in the province of Sindh. Methods: A cross-sectional mixed-methods survey was conducted in November 2011. Two BHUs of each type were selected from the districts of Karachi and Thatta in Sindh province. BHUs were selected randomly and a purposive sampling technique was used to recruit the study participants at the two study sites. Focus group discussions were conducted with patients visiting the facility, while in-depth interviews were conducted with service providers. An observation-based resource availability checklist was also administered. Results: There was a significant difference between the PPHI and the district government-administered BHUs with regard to infrastructure, availability of essential medicines, basic medical appliances, mini-lab facilities and vehicles for referrals. The PPHI BHUs were found to have a sufficient number of trained clinical staff and no punctuality and retention issues whatsoever. The district government-administered BHUs presented a dismal picture in all aspects. Conclusion: Outsourcing of primary healthcare facilities has significantly improved certain aspects of the quality and responsiveness of primary healthcare services. This strategy is likely to achieve efficient and perhaps equitable healthcare delivery in low- and middle-income countries where governments have limited capacity to manage healthcare services. abstract_id: PUBMED:34841923 Primary healthcare reform for chronic conditions in countries with high or very high human development index: A systematic review. Objective: To address the growing burden of chronic disease globally, many countries have developed a national policy for primary healthcare reform. In some countries with a high or very high human development index, evaluations of the implementation of these reforms have been published. To date, there has been no systematic review of these evaluations. The objectives of this review are to identify: (a) the vision for primary healthcare; (b) the features of primary healthcare reforms; and (c) evaluation findings of primary healthcare reforms. Methods: A systematic literature review was conducted, guided by the PRISMA statement. We searched for academic articles and grey literature from 1 March 2008 to 1 September 2020. Screening and data extraction were conducted by two authors. Descriptive analysis and narrative synthesis were applied.
Results: A vision for integrated primary healthcare, shifting chronic disease management from specialist hospital services to primary care, was found to require new organization and funding models, such as collaborative primary healthcare networks and commissioning, along with shared governance across health sectors. The need for general practitioner leadership and engagement to support primary healthcare reform was identified. Although there was evidence of barriers to progressing primary healthcare reform, evaluation results showed some positive outcomes, most notably shifts in services towards increased primary care access and utilization. Discussion: A challenge in undertaking the review was the heterogeneity of articles, with little consistency in how primary healthcare reform was evaluated and reported on across countries. Evaluation of national health reforms involves complex system-wide projects and is an area that needs further exploration and discussion to determine the most appropriate methodologies for collecting and analysing large-scale data, with consideration for service and health outcomes. abstract_id: PUBMED:31272079 Impact of China's referral reform on the equity and spatial accessibility of healthcare resources: A case study of Beijing. In 2015, the Chinese government implemented referral reform in its hierarchical medical system by adjusting the reimbursement rules of medical insurance, in order to guide patients' hospital preference. This reform has impacted the equity and spatial accessibility of healthcare resources in different regions. Taking Beijing as a case study, we calculated and compared the equity and accessibility of healthcare resources before and after referral reform with a three-stage two-step floating catchment area method. We set different referral rates and explored their effects on medical service accessibility and equity. The results showed that the referral reform improved the total accessibility of public hospitals in Beijing, but at the same time aggravated the inequality of healthcare resource accessibility among towns and streets. Healthcare accessibility demonstrated a U shape with an increase in referral rates. After testing five scenarios, we conclude that a 90% referral rate from secondary hospitals to tertiary hospitals could be a trade-off when the government strikes a balance between equal chance of access to health services and high accessibility. abstract_id: PUBMED:16689510 Outsourced services and their consequences for nursing care. This experience report intends to make visible, and to share with other professionals, a practical experience. It emphasizes the influence of outsourced services on the process of nursing care, focusing on the difficulties found, within the same institution, in the non-outsourced units with regard to the nursing care and problem resolution of clients who undergo hemodynamics procedures in the outsourced services and are hospitalized after the intervention and/or return to the institution's regular units. The experience shows the need for a team integrated across the two services, using continuing education as a way to recover the knowledge fundamental to clients' comprehensive care, as well as a profession more committed to the being and doing of nursing. abstract_id: PUBMED:26616346 The impact of primary healthcare reform on equity of utilization of services in the province of Quebec: a 2003-2010 follow-up. Introduction: In 2003, the Quebec government made important changes in its primary healthcare (PHC) system.
This reform included the creation of new models of PHC, Family Medicine Groups (e.g. multidisciplinary health teams with extended opening hours and enrolment of patients) and Network Clinics (clinics providing access to investigation and specialist services). Considering that equity is one of the guiding principles of the Quebec health system, our objectives are to assess the impact of the PHC reform on equity by examining the association between socio-economic status (SES) and utilization of healthcare services between 2003 and 2010; and to determine how the organizational model of PHC facilities impacts utilization of services according to SES. Methods: We held population surveys in 2005 (n = 9206) and 2010 (n = 9180) in the two most populated regions of Quebec province, relating to utilization and experience of care during the preceding two years, as well as organizational surveys of all PHC facilities. We performed multiple logistic regression analyses comparing levels of SES for different utilization variables, controlling for morbidity and perceived health; we repeated the analyses, this time including type of PHC facility (older vs newer models). Results: Compared with the lowest SES, the highest SES was associated with fewer emergency room visits (OR 0.80) and a higher likelihood of at least one visit to a PHC facility (OR 2.17), but a lower likelihood of frequent visits to PHC (OR 0.69), and higher affiliation to a family doctor (OR 2.04). Differences remained stable between the 2005 and 2010 samples, except for the likelihood of a visit to a PHC source, which deteriorated for the lowest SES. Greater improvement in affiliation to a family doctor was seen for the lowest SES in older models of PHC organizations, but a deterioration was seen for that same group in newer models. Conclusions: Differences favoring the rich in affiliation to a family doctor and likelihood of a visit to a PHC facility likely represent inequities in access to PHC which remained stable or deteriorated after the reform. New models of PHC organizations do not appear to have improved equity. We believe that an equity-focused approach is needed in order to address persisting inequities. abstract_id: PUBMED:34634876 Effects of Vertical Integration Reform on Primary Healthcare Institutions in China: Evidence From a Longitudinal Study. Background: Integrated care is a global trend in international healthcare reform, particularly for piloting vertical integration involving hospitals and primary healthcare institutions (PHIs). However, evidence regarding the impact of vertical integration on primary healthcare has been mixed and limited. Our study aims to evaluate the empirical effects of vertical integration reform on PHIs in China, and examines variations across integration intensity (tight integration vs. loose collaboration). Methods: This study used a longitudinal design. The time-varying difference-in-differences (DID) method with a fixed-effect model for panel data was adopted. A total of 370 PHIs in the eastern, central, and western areas of China from 2009 to 2018 were covered. Outcome measures included indicators in three dimensions: inpatient and outpatient service volume, patient flow between PHIs and hospitals, and quality of chronic disease care (hypertension and diabetes).
Results: Significant increases in the absolute (the number) and relative (the ratio between PHIs and hospitals) volume of inpatient admissions were found after the reform under tight integration, peaking at 183% and 15.0%, respectively, in the third reform year. The quality of hypertension and diabetes care (by indicators of the control rates of blood pressure and blood glucose) showed significant improvements under both types of vertical integration after the reform. The improvement was much more distinct for the PHIs under tight integration, which showed peak increases of 34.0% and 22.8% in the control rates of hypertension and diabetes, compared to peaks of 21.2% and 22.1%, respectively, under loose collaboration. Conclusion: Our findings suggest that vertical integration (especially tight integration) in China significantly contributed to strengthening primary healthcare in terms of inpatient services and quality of hypertension and diabetes care, providing empirical evidence to other countries on integrating primary healthcare-based health systems. abstract_id: PUBMED:28300982 Primary healthcare in Portugal: 10 years of contractualization of health services in the region of Lisbon. Contractualization consists of the development and implementation of a documented agreement whereby one party (payer) provides compensation to the other party (provider) in exchange for a set of health services to a targeted population. We describe, through a case study, the history and the process of implementation of primary health care contractualization (since 1992) in Portugal, emphasizing the consolidation and future challenges of the primary healthcare reform started in 2005. This article uses a case study to reflect on the results obtained in the Cluster of Health Centers of the Northern West, Regional Administration of Lisbon and Tagus Valley, between 2009 and 2015, following implementation of contractualization. It was found that the incentive-related payments will have to be weighted considering the results obtained, which are strongly influenced by epidemiological and socioeconomic change. abstract_id: PUBMED:21186438 Real healthcare reform: focus on primary care access. Primary care coverage for the uninsured is the first necessary step to reform and can be more cost-effective and tolerable than a major system reform. By providing foundational care to the uninsured, more care resources are targeted to those who most need the services, while providing benefits such as increased productivity and reduced inappropriate emergency department utilization. The authors aimed to design a primary care coverage system in the United States for the uninsured using established reimbursement, budgeting, and compliance methods. Providing four primary care visits for acute care, four associated ancillary and four fulfilled pharmaceutical-treatment prescriptions, and one preventive primary care visit per year for nearly 48,000,000 uninsured would cost $36 per month for every working American and legal alien resident. Theoretical and empirical literature was reviewed, and the authors applied practical knowledge based on their experience in healthcare systems to develop the Access America Program. abstract_id: PUBMED:23576191 Experiences with primary healthcare in Fuzhou, urban China, in the context of health sector reform: a mixed methods study.
China has recently placed increased emphasis on the provision of primary healthcare services through health sector reform, in response to inequitably distributed health services. With increasing funding for community level facilities, now is an opportune time to assess the quality of primary care delivery and identify areas in need of further improvement. A mixed methodology approach was adopted for this study. Quantitative data were collected using the Primary Care Assessment Tool-Chinese version (C-PCAT), a questionnaire previously adapted for use in China to assess the quality of care at each health facility, based on clients' experiences. In addition, qualitative data were gathered through eight semi-structured interviews exploring perceptions of primary care with health directors and a policy maker to place this issue in the context of health sector reform. The study found that patients attending community health and sub-community health centres are more likely to report better experiences with primary care attributes than patients attending hospital facilities. Generally low scores for community orientation, family centredness and coordination in all types of health facility indicate an urgent need for improvement in these areas. Healthcare directors and policy makers perceived the need for greater coordination between levels of health providers, better financial reimbursement, more formal government contracts, recognition/higher status for staff at the community level, and more appropriate undergraduate and postgraduate training. abstract_id: PUBMED:36459784 Impact of outsourced machinery services on farmers' green production behavior: Evidence from Chinese rice farmers. As an important part of agricultural socialization services, outsourced machinery services are of great significance for promoting the green development of agriculture. Using field survey data from 1080 rice growers in Sichuan Province, this paper empirically analyzes the impact of outsourced machinery services on farmers' green production behavior and the pathways through which that impact operates. It further analyzes differences in influence across heterogeneous groups. The results show that: (1) Outsourced machinery services significantly affect farmers' adoption of no-tillage technology, organic fertilizer application technology, and straw returning technology; the conclusion remains stable after accounting for endogeneity. (2) Outsourced machinery services indirectly affect farmers' green production behavior by promoting off-farm employment and expanding the scale of farmland. (3) The impact of outsourced machinery services on farmers' green production behavior is not significant in the male group or the group whose households own agricultural machinery. In conclusion, the study proposes increasing assistance to outsourced machinery service providers and encouraging service entities to actively publicize and popularize green production technologies while providing services, so as to guide and educate farmers. Answer: The case study of outsourced primary healthcare services in Sindh, Pakistan, suggests that this approach has led to significant improvements in the quality and responsiveness of primary healthcare services. The study compared the quality of healthcare services at Basic Health Units (BHUs) managed by the Peoples' Primary Healthcare Initiative (PPHI), a semi-autonomous government entity, with those administered by the local district government.
The findings indicated that the PPHI-managed BHUs had better infrastructure, availability of essential medicines, basic medical appliances, mini-lab facilities, and vehicles for referrals. Additionally, these BHUs had sufficient numbers of trained clinical staff without issues related to punctuality and retention, unlike the district government-administered BHUs, which presented a dismal picture in all aspects (PUBMED:24965769). The study concluded that outsourcing primary healthcare facilities has resulted in improved aspects of primary healthcare services, suggesting that this strategy could achieve efficient and potentially equitable healthcare delivery in low and middle-income countries where governments have limited capacity to manage healthcare services. Therefore, based on the evidence from this case study, outsourcing primary healthcare services in Sindh, Pakistan, can be considered a real reform with positive outcomes for the quality and delivery of healthcare services.
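Several studies in this record report adjusted odds ratios estimated from survey data (for instance, the SES-utilization ORs in the Quebec analysis above). A minimal sketch of how such ORs are typically obtained follows; the data and variable names are hypothetical, and the published analyses additionally handled survey design and a larger covariate set.

```python
# Sketch: adjusted odds ratios from a logistic regression on synthetic data.
# All variable names (high_ses, morbidity, poor_health, phc_visit) are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "high_ses": rng.integers(0, 2, n),
    "morbidity": rng.integers(0, 2, n),
    "poor_health": rng.integers(0, 2, n),
})
# Simulate a utilization outcome that depends on SES and health status.
logit_p = -0.5 + 0.8 * df.high_ses + 0.4 * df.morbidity - 0.3 * df.poor_health
df["phc_visit"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("phc_visit ~ high_ses + morbidity + poor_health", data=df).fit(disp=0)
odds_ratios = np.exp(fit.params)   # exponentiated coefficients are adjusted ORs
ci = np.exp(fit.conf_int())        # 95% CI on the OR scale
print(odds_ratios["high_ses"], ci.loc["high_ses"].values)
```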
Instruction: Can lung cancer risk among nickel refinery workers be explained by occupational exposures other than nickel? Abstracts: abstract_id: PUBMED:24759375 Reconstruction of historical exposures at a Welsh nickel refinery (1953-2000). Past epidemiological studies of workers in a nickel refinery in Clydach, Wales, have shown evidence of large excess respiratory cancer mortality risks [lung cancer relative risk (RR) ≈ 3; nasal cancer RR ≈ 140] in those employed prior to 1930, with risks dropping dramatically in workers hired subsequently. The pre-1930 risks have generally been attributed to high exposures to mixtures of nickel compounds. More recent studies of this refinery's workers have focused on those first hired in 1953, when many of the operations that presumably gave rise to the high exposures were no longer in operation. While these studies have shown greatly decreased lung cancer risks overall (RR ≈ 1.4), and no substantive evidence of increased nasal cancer risk, the absence of reliable exposure estimates has made it difficult to ascertain whether the increased lung cancer risks are nickel related or due to other factors. This study uses nickel measurements from the 1970s to the present, documentation of process changes, and dust measurements taken around the 1950s to construct an exposure matrix for the recent cohort. It provides evidence of at least 30-fold decreases in levels of nickel exposure from the 1950s to the present, with estimated inhalable nickel concentrations in the 1950s in excess of 5 mg Ni/m3. abstract_id: PUBMED:15703528 Can lung cancer risk among nickel refinery workers be explained by occupational exposures other than nickel? Background: Exposures in nickel refineries represent complex chemical mixtures, but only the effect of nickel has been evaluated quantitatively in epidemiologic studies of nickel workers. Methods: For a Norwegian refinery, time- and department-specific exposure estimates were developed for arsenic, sulfuric acid mists, and cobalt in air on the basis of personal measurements and chemical data on raw materials and process intermediates. Exposure to asbestos, as well as employment in high-risk occupations outside the refinery, were assessed. We conducted a case-control study nested in a cohort of refinery workers, with 213 cases (diagnosed 1952-1995) and 525 age-matched controls. We analyzed lung cancer risk, adjusted for smoking, by cumulative exposure and duration of work. Results: There was a substantial association between cumulative exposure to water-soluble nickel and lung cancer risk. Weaker effects were suggested for exposure to arsenic at the refinery and for occupational exposures outside the refinery for 15 years or more. No detectable excess risk was found for refinery exposure to asbestos or sulfuric acid mists, and no dose-related increase in risk was seen from cobalt. Conclusions: Exposure to water-soluble nickel remained the most likely explanation for the excess lung cancer risk in the cohort. Other occupational exposures did not confound the strong dose-related effect of nickel to any appreciable degree. abstract_id: PUBMED:9638480 Update of cancer incidence among workers at a copper/nickel smelter and nickel refinery. Objectives: To assess cancer risk among nickel-exposed workers. Methods: We updated cancer incidence among 1388 workers employed for at least 3 months at a copper/nickel smelter and nickel refinery in Harjavalta, Finland.
There were 1155 workers exposed to nickel during the period 1960-1985 in the smelter (566 workers), repair shop (239 workers), or refinery (418 workers). Cancer incidence was followed through the files of the Finnish Cancer Registry up to 31 December 1995. For overall cancer and for a priori selected specific cancer types the ratio of observed to expected numbers of cases was computed as a standardized incidence ratio (SIR), controlled for age, gender, and calendar period and using the region-specific rates as a reference. Results: The overall cancer incidence among both nickel-exposed and unexposed subcohorts was at the expected level. A small increase in lung cancer incidence, which reached statistical significance among workers with a latency exceeding 20 years, was observed among the smelter workers exposed to insoluble nickel compounds. Among workers in the refinery, who were exposed primarily to nickel sulfate at levels below 0.5 mg/m3 as well as to low concentrations of other nickel compounds, there was an increased risk for nasal cancer (SIR 41.1, 95% CI 4.97-148), positively associated with latency and duration of employment, and an excess risk for stomach (SIR 4.98, 95% CI 1.62-11.6) and lung (SIR 2.61, 95% CI 0.96-5.67) cancers. Conclusions: Since elevated nasal and lung cancer risks were confined to the refinery, where the primary exposure was to nickel sulfate, it is likely that nickel sulfate is mainly responsible for the elevated respiratory cancer risk. We cannot rule out whether the excess stomach cancer risk is a chance finding, or related to the working environment. abstract_id: PUBMED:7160938 Cancer of respiratory organs among workers at a nickel refinery in Norway. In a previous report (Pedersen et al., 1973) an excess risk of respiratory cancer among Norwegian nickel workers was demonstrated. The number of cancer cases was relatively small for the study of temporal changes in exposure. In this report the number of cancer cases increased by 70% through an extended follow-up to 1979, and the results published previously were confirmed. Data on smoking habits of the employees at the nickel refinery have now become available. The interaction between smoking and occupational nickel exposure is assessed by comparing the excess risk of lung cancer caused by smoking in the study group and in a sample of the general population. It is shown that the interaction is closer to being additive than multiplicative. This observation has methodological implications for studying temporal changes in occupational exposure. These implications are discussed. No substantial reduction in occupational exposure at the nickel refinery can be observed as far as lung cancer is concerned. For nasal cancer, however, the risk is much smaller among those employed around 1960 than among those employed around 1930. The difference in trend of occupational risk for the two types of cancer cannot be explained by the data presented. abstract_id: PUBMED:1587630 Cancer risk among workers at a copper/nickel smelter and nickel refinery in Finland. A total of 1,388 workers employed for at least 3 months at a copper/nickel smelter and nickel refinery were followed up for cancer from 1953 to 1987 through the Finnish Cancer Registry. There were 1,339 male and 49 female workers, making a total of 27,130 and 706 person-years, respectively. All of the women worked in the refinery, which opened in 1960, the same year the smelting of nickel began. 
A total of 67 cancers were diagnosed among the men, the standardized incidence ratio for all cancers being 1.0. No cancer was found among the women (1.8 expected). The risk of cancer among men was analysed according to primary site, exposure to nickel, type of work, years since first exposure and age at diagnosis. In the subcohort of nickel refinery workers, one case of sinonasal cancer was observed, against 0.02 expected, but otherwise no significantly increased risks of cancer were found. In addition to the small size of the cohort, the non-positive finding concerning lung cancer might be related to the relatively low arsenic exposure and, perhaps, to the late commencement of nickel production. abstract_id: PUBMED:27747921 Cancer incidence among copper smelting and nickel refining workers in Finland. Background: Among workers employed at a nickel refinery in Harjavalta, Finland an increased risk of lung and sinus cancer has been demonstrated in two previous studies. The current study adds 16 more years of follow-up to these studies. Methods: A total of 1,115 persons exposed to nickel and 194 non-exposed workers in the Harjavalta nickel smelter and refinery were followed up for cancer from 1967 to 2011 through the Finnish Cancer Registry. Results: The total number of cancer cases in men was 251 (Standardized incidence ratio (SIR) 1.05) and in women 12 (SIR 1.22). In the most nickel-exposed work site (refinery), there were 14 lung cancers (SIR 2.01) and 3 sinonasal cancers (SIR 26.7, 95%). Conclusions: It is likely that exposure to nickel compounds is the main reason for elevated nasal cancer risk among the nickel refinery employees and may also contribute to the excess risk of lung cancer. abstract_id: PUBMED:12729252 Lung cancer incidence among Norwegian nickel-refinery workers 1953-2000. Among workers employed at a nickel refinery in Norway between 1910 and 1977 an elevated risk of lung cancer has been demonstrated. A dose-related effect from nickel exposure has been identified, with the strongest gradient for water-soluble nickel. This pattern was recently confirmed in a nested case-control study with adjustment for smoking and potential occupational confounders. In the present study, updated cancer data were used to explore the risk by duration of work at the refinery and by exposure to different forms of nickel. Comparisons were made with the national male population (standardised incidence ratios) as well as internal reference groups (Poisson regression) under adjustment for age and smoking. The results confirmed earlier findings of a strong dose-related risk dependent on duration of work in production departments and cumulative exposure to nickel, most clearly seen for water-soluble nickel. Only slightly elevated risks were found among the unexposed and in the group with no experience from production or maintenance work. The risk associated with exposure to nickel chloride was similar to that for nickel sulfate. Analyses restricted to men exposed after 1967, with estimates based on personal monitoring of nickel in the breathing zone, showed the same risk pattern as for earlier years. Elevated lung cancer incidence was even suggested for workers with their first employment after 1978 when a lot of high exposure jobs were abandoned. The combined effect of exposure to nickel and smoking seemed to be in agreement with a multiplicative risk pattern.
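The standardized incidence ratios quoted throughout these cohort studies are simply the ratio of observed to expected case counts, conventionally reported with an exact Poisson confidence interval on the observed count. A sketch with illustrative numbers (not the studies' data):

```python
# SIR = observed / expected, with an exact Poisson 95% CI for the observed
# count (standard chi-square formulation). Counts below are illustrative.
from scipy.stats import chi2

def sir_with_ci(observed: int, expected: float):
    sir = observed / expected
    lower = chi2.ppf(0.025, 2 * observed) / (2 * expected) if observed > 0 else 0.0
    upper = chi2.ppf(0.975, 2 * (observed + 1)) / (2 * expected)
    return sir, lower, upper

# e.g., 14 cases against 7 expected gives SIR 2.0 with its exact 95% CI
print(sir_with_ci(14, 7.0))
```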
abstract_id: PUBMED:8943837 Exposure to nickel compounds and smoking in relation to incidence of lung and nasal cancer among nickel refinery workers. Objectives: To investigate the relation between occupational hazards among nickel refinery workers and their exposure to different forms of nickel over time and the interaction between smoking and total exposure to nickel. Methods: The cohort consisted of 379 workers with first employment 1916-40 and at least three years of employment and 4385 workers with at least one year of employment 1946-83. Data on smoking (ever or never) were available for almost 95% of the cohort. Two analyses were used, indirect standardisation from observed and expected numbers and Poisson regression. Results: During the follow up 1953-93, 203 new cases of lung cancer were observed v 68 expected (standardised incidence ratio (SIR) 3.0, 95% confidence interval (95% CI) 2.6-3.4) and 32 cases of nasal cancer were observed v 1.8 expected (SIR 18.0, 95% CI 12-25). The Poisson regression analysis showed an excess risk of lung cancer in association with exposure to soluble forms of nickel, with a threefold increase in relative risk (RR) (P < 0.001) and a multiplicative effect of smoking and exposure to nickel. The RRs were 1.1 (95% CI 0.2-5.1) for exposed workers who had never smoked and 5.1 (95% CI 1.3-20.5) for exposed workers who smoked. Conclusion: It is not possible to state with certainty which specific nickel compounds are carcinogenic, but a significant excess risk was found for workers exposed to soluble nickel alone or in combination with other forms of nickel. The present study suggests a multiplicative effect of smoking and nickel exposure. abstract_id: PUBMED:10994800 Assessment of historical exposures in a nickel refinery in Norway. Objectives: The aim of the study was, on the basis of new information on nickel species and exposure levels, to generate a specific exposure matrix for epidemiologic analyses in a cohort of Norwegian nickel-refinery workers with a known excess of respiratory cancer. Methods: A department-time-exposure matrix was constructed with average exposure to total nickel estimated as the arithmetic mean of personal measurements for periods between 1973 and 1994. From 1972 back to the start of production in 1910, exposure concentrations were estimated through retrograde calculation with multiplication factors developed on the basis of reported changes in the metallurgical process and work environment. The relative distribution of water-soluble nickel salts (sulfates and chlorides), metallic nickel, and particulates with limited solubility (sulfides and oxides) was mainly derived from speciation analyses conducted in the 1990s. Results: The average concentration of nickel in the breathing zone was ≤0.7 mg/m3 for all workers after 1978. Exposure levels for smelter and roaster day workers were 2-6 mg/m3 before 1970, while workers in nickel electrolysis and electrolyte purification were exposed to concentrations in the range of 0.15-1.2 mg/m3. The level of water-soluble nickel was of the same order for workers in the smelting and roasting departments as in some of the electrolyte purification departments. Conclusions: Compared with earlier estimates, the present matrix probably offers a more reliable description of past exposures at the plant. abstract_id: PUBMED:6953273 Mortality among workers in a nickel refinery and alloy manufacturing plant in West Virginia. Workers from a nickel refinery at Huntington, W.
Va., which received a nickel matte from smelters in Ontario, Canada, during 1922-47 were studied for excesses in sinonasal and lung cancers. Other workers who were in a nickel alloy production plant operated in conjunction with the refinery but who did not work in the refinery were also studied. Evidence is presented on the existence of a nasal cancer hazard and possibly a lung cancer hazard among workers employed in the nickel refinery. Two of these workers developed a sinonasal cancer. Evidence of a respiratory cancer hazard in other parts of the plant before or after 1947 is weak, and the observed results could be due to chance variations from the mortality experience of the standard reference populations. Among nonrefinery workers hired while the nickel refinery was in operation, 2 employees had a probable sinonasal cancer. The first employee worked in a high-temperature conversion process where nickel carbonate was decomposed to nickel oxide, and the second employee is believed to have done maintenance work in the plant where this process was used. For both cohorts combined there were slight overall excesses in lung, stomach, and prostate cancers with some evidence of a dose-response relationship for these cancers. Answer: The lung cancer risk among nickel refinery workers has been primarily attributed to exposure to nickel compounds, particularly water-soluble nickel. Studies have shown a substantial association between cumulative exposure to water-soluble nickel and lung cancer risk, with weaker effects suggested for other exposures such as arsenic at the refinery and occupational exposures outside the refinery (PUBMED:15703528). However, no detectable excess risk was found for refinery exposure to asbestos or sulfuric acid mists, and no dose-related increase in risk was seen from cobalt (PUBMED:15703528). Other studies have also indicated that exposure to nickel compounds, especially nickel sulfate, is likely responsible for elevated respiratory cancer risk, including lung cancer, among nickel refinery workers (PUBMED:9638480). The risk of lung cancer was also found to be elevated among workers with a latency exceeding 20 years, particularly those exposed to insoluble nickel compounds (PUBMED:9638480). Furthermore, an update of cancer incidence among workers at a copper/nickel smelter and nickel refinery suggested that exposure to nickel compounds is the main reason for elevated nasal cancer risk and may also contribute to the excess risk of lung cancer (PUBMED:27747921). This was supported by findings that showed a strong dose-related risk dependent on duration of work in production departments and cumulative exposure to nickel, most clearly seen for water-soluble nickel (PUBMED:12729252). While the interaction between smoking and occupational nickel exposure has been assessed, the combined effect seemed to be closer to additive than multiplicative, indicating that smoking habits do not fully explain the increased lung cancer risks among nickel refinery workers (PUBMED:7160938). Additionally, a study suggested a multiplicative effect of smoking and nickel exposure, further supporting the role of nickel exposure in lung cancer risk among these workers (PUBMED:8943837). In conclusion, while lung cancer risk among nickel refinery workers may be influenced by other occupational exposures, the evidence strongly suggests that exposure to nickel, particularly water-soluble nickel compounds, is the most likely explanation for the excess lung cancer risk observed in this cohort. 
Other occupational exposures do not appear to confound the strong dose-related effect of nickel to any appreciable degree (PUBMED:15703528).
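The additive-versus-multiplicative question raised by the two Norwegian reports (PUBMED:7160938 read the smoking-nickel interaction as closer to additive; PUBMED:8943837 as multiplicative) comes down to comparing an observed joint relative risk against two benchmarks. Illustrative arithmetic with placeholder relative risks, not the studies' estimates:

```python
# No-interaction benchmarks for a joint exposure, with placeholder RRs.
rr_smoking = 10.0   # hypothetical RR of lung cancer for smoking alone
rr_nickel = 3.0     # hypothetical RR for nickel exposure alone

additive_joint = rr_smoking + rr_nickel - 1    # 12.0: excess risks add
multiplicative_joint = rr_smoking * rr_nickel  # 30.0: risk ratios multiply

print(additive_joint, multiplicative_joint)
# An observed joint RR near 12 would support the additive reading
# (PUBMED:7160938); one near 30, the multiplicative reading (PUBMED:8943837).
```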
Instruction: Racial health disparities among special health care needs children with mental disorders: do medical homes cater to their needs? Abstracts: abstract_id: PUBMED:24970879 Racial health disparities among special health care needs children with mental disorders: do medical homes cater to their needs? Background: A health care reform has been taking place to provide cost-effective and coordinated care. One method of achieving these goals is a patient-centered medical home (PCMH) model, which is associated with provision of quality care among children belonging to racial/ethnic minorities. Despite the potential of the PCMH for children of minority backgrounds, little is known about the extent to which minorities with mental disorders have the PCMH. Objective: The study examined racial/ethnic disparities among children with mental disorders in accessing care from the PCMH. Methods: The 2009-2010 National Survey of Children with Special Health Care Needs (CSHCN) was used for this analysis. Multivariate logistic regressions were applied to capture the racial/ethnic disparities and to analyze a composite outcome of the PCMH. Results: An estimated population size of 4 677 904 CSHCN with mental disorders was included. Among them, 59.94% of children were reported to have received a medical home. Compared with white children, the odds of receiving any medical home services decreased among Hispanic children (odds ratio [OR] = 0.69; P < .05) and black children (OR = 0.70; P < .05). The likelihood of having a medical home was lower for Hispanic children than white children, when they had attention deficit hyperactivity disorder (ADHD; OR = 0.57; P < .05) and developmental delay (OR = 0.73; P < .05). Compared with white children with ADHD or depression having a medical home, the odds of black children with ADHD (OR = 0.63; P < .05) and depression (OR = 0.68; P < .05) having a medical home were lower. Conclusions: There were significant racial/ethnic disparities among CSHCN with mental disorders, indicating several sizeable effects of each of the 5 components on Hispanic, black, and other children compared with white children. These differences could represent potential targets for reducing racial/ethnic disparities. abstract_id: PUBMED:30272532 Unmet Need for Care Coordination Among Children with Special Health Care Needs. Socioeconomic disparities in access to care coordination have been noted among children with special health care needs (CSHCN). Following recent policy developments and technological innovation, care coordination has become more widespread, possibly leading to reduced disparity in care coordination access. This study investigates whether child and household characteristics remain associated with unmet need for care coordination among CSHCN. CSHCN (aged <18 years) requiring ≥2 types of health services in the past year were identified in the 2016 National Survey of Children's Health (NSCH). Care coordination was defined as help with arranging the child's care among different doctors or services. Children were classified as not needing care coordination, receiving sufficient care coordination (met need), or needing but not receiving care coordination (unmet need). Weighted multinomial logistic regression examined the association of child characteristics with this outcome. The analysis included 5622 children with no need for care coordination, 1466 with a met need, and 980 with unmet needs.
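The three-level outcome just described (no need / met need / unmet need) is the setting for multinomial logistic regression. A minimal sketch on synthetic data with hypothetical variable names; the published analysis was additionally survey-weighted, which this sketch omits:

```python
# Multinomial logit for a three-category outcome. Data and names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000
df = pd.DataFrame({
    "mental_health_cond": rng.integers(0, 2, n),
    "low_income": rng.integers(0, 2, n),
})
# 0 = no need, 1 = met need, 2 = unmet need; make unmet need likelier
# for children with a mental health condition, mimicking the reported pattern.
base = rng.choice([0, 1, 2], size=n, p=[0.7, 0.18, 0.12])
flip = (df.mental_health_cond == 1) & (rng.random(n) < 0.25)
df["coord_need"] = np.where(flip, 2, base)

fit = smf.mnlogit("coord_need ~ mental_health_cond + low_income", data=df).fit(disp=0)
print(np.exp(fit.params))  # ORs for levels 1 and 2 relative to level 0
```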
Children with mental health conditions were more likely to have unmet rather than met needs for care coordination (odds ratio = 4.1; 95% confidence interval: 2.7, 6.1; P < 0.001). After multivariable adjustment, race/ethnicity, income, family structure, insurance coverage, place of birth, and use of English in the home were not associated with having unmet rather than met needs for care coordination. Among CSHCN, the latest data from NSCH reveal no evidence of previously described socioeconomic disparities in access to care coordination. Nevertheless, unmet needs for care coordination remain prevalent, especially among children with mental health conditions. abstract_id: PUBMED:35945481 Racial and Ethnic Disparities in Health Care Access and Utilization among Medicaid Managed Care Beneficiaries. This quasi-experimental study examined the impact of a statewide integrated special needs program, the Health and Recovery Plan (HARP), for individuals with serious mental illness and identified racial and ethnic disparities in access to Medicaid services. Generalized estimating equation negative binomial models were used to estimate changes in service use, difference-in-differences, and difference-in-difference-in-differences in the pre- to post-HARP periods. Implementation of the special needs plan contributed to reductions in racial/ethnic disparities in access and utilization. Notable among those enrolled in the special needs plan were the declining Black-White disparities in emergency room (ER) visits and inpatient stays, although the disparity in non-behavioral health clinic visits remained. Also, the decline of Hispanic-White disparities in ER, inpatient, and clinic use was more evident for HARP-enrolled patients. Health equity policies are needed in the delivery of care to linguistically and culturally disadvantaged Medicaid beneficiaries. abstract_id: PUBMED:21962936 Profiling health and health-related services for children with special health care needs with and without disabilities. Objective: The aims of this study were to profile and compare the health and health services characteristics for children with special health care needs (CSHCN), with and without disabilities, and to determine factors associated with unmet need. Methods: Secondary data analysis of the 2005-2006 National Survey of Children with Special Health Care Needs was conducted. The sociodemographics, health, and health services of CSHCN with and without disabilities were compared. Multivariable logistic regression was employed to examine factors associated with unmet need for health services. Results: Children from minority racial and ethnic groups and children living in or near poverty were over-represented among CSHCN with disabilities, compared with other CSHCN. Statistically higher percentages of CSHCN with disabilities had behavioral problems (39.6% vs 25.2%), anxiety/depressed mood (46.1% vs 24.0%), and trouble making/keeping friends (38.1% vs 15.6%) compared with other CSHCN. Thirty-two percent of CSHCN with disabilities received care in a medical home compared with 51% of other CSHCN. CSHCN with disabilities had higher rates of need and unmet need than other CSHCN for specialty care, therapy services, mental health services, home health, assistive devices, medical supplies, and durable medical equipment. The adjusted odds of unmet need for CSHCN with disabilities were 71% higher than for other CSHCN.
Conclusion: CSHCN with disabilities had more severe health conditions and more health services need, but they less commonly received care within a medical home and had more unmet need. These health care inequities should be amenable to policy and health service delivery interventions to improve outcomes for CSHCN with disabilities. abstract_id: PUBMED:19948598 Unmet health care needs among CSHCN with neurologic conditions. Objective: Children with neurologic conditions require a variety of services. With this study we examined health care needs and unmet needs among children with neurologic conditions. Methods: Cross-sectional data reported by parents of 3- to 17-year-olds in the 2005-2006 National Survey of Children With Special Health Care Needs were analyzed. Demographic characteristics, health care needs, and unmet needs of children with special health care needs (CSHCN) and neurologic conditions were descriptively compared with an independent referent group of children without special health care needs; statistical contrasts were performed as a function of the type (conditions included in the Diagnostic and Statistical Manual of Mental Disorders [DSM] or not) and number of reported neurologic conditions. Results: Compared with the parents of children without special health care needs, parents of CSHCN with neurologic conditions were more likely to report unmet health care needs for their child. After adjustment for demographic factors and severity of functional limitation, CSHCN with at least 2 conditions had more visits to a health care provider, needed more services, and reported more unmet needs than CSHCN with a single DSM condition. The magnitude of need among CSHCN was greatest among those with at least 1 of each type of neurologic condition. Conclusions: Unmet health care needs exist among CSHCN with neurologic conditions and are particularly pronounced among children with a combination of both DSM and non-DSM disorders. The health care needs among CSHCN with multiple neurologic conditions may be better served by targeted efforts to improve care coordination. abstract_id: PUBMED:29678400 Predictors of Social Relationships for Children with Special Health Care Needs. Children with special health care needs (CSHCN) experience various developmental challenges including difficulties in social relationships with peers. Using data from the Centers for Disease Control and Prevention's National Survey of Children with Special Health Care Needs 2009-2010, this correlational study examined predictors of difficulty making and keeping friends among CSHCN. Binary logistic regression was used to analyze the data. Findings indicate statistical significance for 12 out of 14 predictor variables. Key findings indicate the odds of having difficulty making and keeping friends are 3.06 times higher for children with an autism spectrum diagnosis and 2.87 times higher for children with some behavior problem when compared to other CSHCN. Overall, the logistic regression model classified 80.7% of the cases correctly. A secondary analysis also identified factors which significantly differentiate between CSHCN who have a lot of difficulty compared to a little difficulty making and keeping friends. For this analysis, eight variables were statistically significant. abstract_id: PUBMED:18195000 Racial and ethnic disparities in medical and dental health, access to care, and use of services in US children. 
Background: Not enough is known about the national prevalence of racial/ethnic disparities in children's medical and dental care. Objective: The purpose of this work was to examine racial/ethnic disparities in medical and oral health, access to care, and use of services in a national sample. Methods: The National Survey of Children's Health was a telephone survey in 2003-2004 of a national random sample of parents and guardians of 102,353 children 0 to 17 years old. Disparities in selected medical and oral health and health care measures were examined for white, African American, Latino, Asian/Pacific Islander, Native American, and multiracial children. Multivariate analyses were performed to adjust for primary language at home, age, insurance coverage, income, parental education and employment, and number of children and adults in the household. Forty measures of medical and oral health status, access to care, and use of services were analyzed. Results: Many significant disparities were noted; for example, uninsurance rates were 6% for whites, 21% for Latinos, 15% for Native Americans, 7% for African Americans, and 4% for Asians or Pacific Islanders, and the proportions with a usual source of care were as follows: whites, 90%; Native Americans, 61%; Latinos, 68%; African Americans, 77%; and Asians or Pacific Islanders, 87%. Many disparities persisted for ≥1 minority group in multivariate analyses, including increased odds of suboptimal health status, overweight, asthma, activity limitations, behavioral and speech problems, emotional difficulties, uninsurance, suboptimal dental health, no usual source of care, unmet medical and dental needs, transportation barriers to care, problems getting specialty care, no medical or dental visit in the past year, emergency department visits, not receiving mental health care, and not receiving prescription medications. Certain disparities were particularly marked for specific racial/ethnic groups: for Latinos, suboptimal health status and teeth condition, uninsurance, and problems getting specialty care; for African Americans, asthma, behavior problems, skin allergies, speech problems, and unmet prescription needs; for Native Americans, hearing or vision problems, no usual source of care, emergency department visits, and unmet medical and dental needs; and for Asians or Pacific Islanders, problems getting specialty care and not seeing a doctor in the past year. Multiracial children also experienced many disparities. Conclusions: Minority children experience multiple disparities in medical and oral health, access to care, and use of services. Certain disparities are particularly marked for specific racial/ethnic groups, and multiracial children experience many disparities. abstract_id: PUBMED:31912377 Adverse Experiences and Special Health Care Needs Among Children. Objectives: To evaluate the association of exposure to adverse childhood experiences (ACEs) and being a child with special health care needs (CSHCNs), and risks of specific ACE exposures with CSHCN status. Methods: An analysis of 91,219 children from the 2011-2012 National Survey of Children's Health. Multivariable logistic regression was used to compare the odds of CSHCN status based on total ACE exposure and on distinct types of ACE exposure. Results: Overall, 20% of children met CSHCN criteria. Although 57% of children had no ACEs, 23% had experienced 1, 13% had experienced 2-3 ACEs, and < 1% had experienced 4 or more ACEs.
The odds of CSHCN status increased by 38% with 1 ACE [adjusted odds ratio (aOR) 1.38, 95% confidence interval (95% CI) (1.25-1.52)], but with ≥ 4 ACEs, the odds of CSHCN status increased nearly threefold [aOR 2.95 (95% CI 2.54-3.45)]. Presence of mental illness in the family [aOR 2.12 (95% CI 1.89-2.38)], domestic violence [aOR 1.69 (95% CI 1.48-1.93)], and neighborhood violence [aOR 1.89 (95% CI 1.67-2.14)] were the ACEs most strongly associated with CSHCN status. Conclusions For Practice: Each additional ACE exposure increased the likelihood of CSHCN status, and family and community-level ACEs were associated with increased risk for CSHCN status. The findings suggest that systems of care for CSHCN, including public health, health care, education, and social welfare, should incorporate supports for addressing ACEs among this population and a trauma-informed approach. abstract_id: PUBMED:27157311 Understanding military families who have dependents with special health care and/or educational needs. Background: Little is known about military families who have a dependent with special health care and/or educational needs. The Exceptional Family Member Program (EFMP) is designed to link these families to military/community support services through family support providers (FS providers). Objective: The aim of this study was to understand FS providers' perspectives on the kinds of current challenges the families with whom they work face. This is the first study to ascertain the perspectives of professional FS providers. Methods: FS providers (N = 160) completed a survey either on the phone or via the web. The survey consisted of four areas regarding EFMP: (1) background information; (2) caseload and work composition; (3) perceptions of Military Family needs; and (4) adequacy of community support services. Results: The most commonly encountered diagnoses in military families were Autism (94%) and Attention-Deficit Hyperactivity Disorder (93%). Between 80% and 90% of FS providers reported working with families dealing with Emotional/Behavioral Disorders, Speech & Language Disorders, Asthma, Developmental Delays, and Mental Health Problems. FS providers noted that relocations are particularly challenging for military families in the EFMP. Conclusions: Training and programming of social service professionals working with military families who have a dependent with special health care and/or educational needs should focus on commonly occurring challenges seen in this population. As much as possible, FS providers should be familiar with evidence-based programs and practices designed to address these pressing problems. The process and execution of relocations should be streamlined so as to enhance continuity of care. abstract_id: PUBMED:18256914 The Healthy People 2010 outcomes for the care of children with special health care needs: an effective national policy for meeting mental health care needs? Objectives: To assess the effectiveness of the Maternal and Child Health Bureau's (MCHB) Six Core Outcomes for children with special health care needs (CSHCN) as indicators in measuring the degree to which mental health care needs are met. Methods: This study analyzes data from the 2001 National Survey of Children with Special Health Care Needs for 9,748 CSHCN who needed mental health care. Bivariate and logistic analyses were employed to investigate the impact of the MCHB's Six Core Outcomes on the probability of having an unmet need for mental health services.
Results: Of the 2.3 million CSHCN in the U.S. who needed mental health care in 2001, almost one-fifth did not receive all of the mental health services that they needed. Ultimately, eight Outcomes and sub-categories of Outcomes were considered. Sixty-one percent of CSHCN with a need for mental health care had care that fulfilled six of the eight considered Outcomes. Logistic analysis indicates that individual fulfillment of each of the Core Outcomes and fulfillment of additional Outcomes have a significant association with reducing the probability of having an unmet mental health care need for CSHCN. Conclusions: This study is the first attempt to apply the Six Core Outcomes to meeting the needs for mental health care among CSHCN. Estimates of unmet need for mental health care suggest that efforts can be made to improve access for CSHCN. The initial estimates generated by this study indicate that the MCHB Outcomes are important in meeting children's mental health needs and are important indicators for informing MCHB policy. Answer: The concept of a patient-centered medical home (PCMH) is associated with the provision of quality care among children belonging to racial/ethnic minorities. However, research indicates that there are significant racial/ethnic disparities among children with special health care needs (CSHCN) with mental disorders in accessing care from the PCMH. A study using the 2009-2010 National Survey of Children with Special Health Care Needs found that compared with white children, Hispanic and black children had lower odds of receiving any medical home services. Specifically, Hispanic children with attention deficit hyperactivity disorder (ADHD) and developmental delay, as well as black children with ADHD or depression, were less likely to have a medical home compared to their white counterparts (PUBMED:24970879). Despite recent policy developments and technological innovation aimed at improving care coordination, which is a key component of the PCMH model, disparities persist. A study using the 2016 National Survey of Children's Health found that children with mental health conditions were more likely to have unmet rather than met needs for care coordination. However, this study did not find evidence of socioeconomic disparities in access to care coordination among CSHCN (PUBMED:30272532). Additionally, a quasi-experimental study examining the impact of a statewide integrated special needs program for individuals with serious mental illness found that the implementation of the program contributed to reductions in racial/ethnic disparities in access and utilization of Medicaid services. Notably, Black-White disparities in emergency room visits and inpatient stays declined among those enrolled in the special needs plan, although disparities in non-behavioral health clinic visits remained (PUBMED:35945481). In conclusion, while medical homes are designed to cater to the needs of CSHCN with mental disorders, there are still notable racial health disparities in the provision and access to these services. Efforts to reduce racial/ethnic disparities could potentially enhance the effectiveness of PCMHs for minority children with mental health conditions.
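One caution when reading the odds ratios quoted throughout this record: an aOR of 1.38 means 38% higher odds, not 38% higher probability. Taking the 20% overall CSHCN prevalence reported above as an illustrative baseline (an approximation, since the true reference-group prevalence differs from the overall rate), the conversion looks like this:

```python
# Convert an odds ratio to an implied probability, given a baseline risk.
def prob_from_or(baseline_prob: float, odds_ratio: float) -> float:
    odds = baseline_prob / (1 - baseline_prob) * odds_ratio
    return odds / (1 + odds)

for label, aor in [("1 ACE", 1.38), ("4+ ACEs", 2.95)]:
    print(label, round(prob_from_or(0.20, aor), 3))
# 1 ACE -> ~0.257, 4+ ACEs -> ~0.424: a 38% rise in odds is a smaller
# relative rise in risk when the baseline probability is non-trivial.
```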
Instruction: Is it possible to repair diastasis recti and shorten the aponeurosis at the same time? Abstracts: abstract_id: PUBMED:24477520 Is it possible to repair diastasis recti and shorten the aponeurosis at the same time? Background: Abdominal wall deformity secondary to pregnancy is multidirectional. Plication of the anterior rectus sheath is the most widely used technique for correction of this condition. However, it would be desirable to simultaneously perform the transverse and longitudinal repair of this deformity. The aim of this study was to assess changes in the length of the musculoaponeurotic layer after diastasis recti repair using triangular mattress sutures. Methods: Thirty-one women with Nahas' type III/A deformity were divided into two groups: the triangular mattress suture (TS) group and the continuous suture (CS) group. All patients underwent conventional abdominoplasty and diastasis recti repair with medial longitudinal plication performed between two metal clips. The two types of suture were used in both groups. In the TS group, after a CS was performed and removed, TSs were used and maintained in place. In the CS group, the order of suture placement was reversed. The distance between clips was measured before and immediately after suturing and at 3 weeks and 6 months postoperatively using plain abdominal radiographs. Statistical analysis was conducted using Friedman's analysis of variance and Wilcoxon's test. Results: The use of TSs significantly reduced the length of the aponeurosis compared with both the intraoperative situation without suture (P<0.001) and the use of CS (intraoperatively and 6 months after surgery; P<0.001). Conclusion: The repair of diastasis recti using TSs resulted in vertical shortening of the musculoaponeurotic layer immediately after the procedure and in the long term. Level of Evidence III. abstract_id: PUBMED:35599692 Laparoscopic Repair of Diastasis Recti: A Case Report and Literature Review. Diastasis recti is a state of separated aponeuroses between the two recti caused by weakening of the intercrossing fibers in the linea alba, and it causes abdominal protrusion. Common causes include increased intraabdominal pressure and congenital weakening of the myoaponeurotic layer. We describe a patient who underwent laparoscopic repair of diastasis recti. A 30-year-old woman was referred to our outpatient department for an abdominal mass that had appeared 1 year earlier. Physical examination revealed an abdominal wall defect along the midline and computed tomography showed thinning and stretching of the linea alba. The patient underwent laparoscopic repair for diastasis recti. The stretched linea alba was approximated using interrupted sutures from the epigastrium to the suprapubic area. A dual mesh was applied below the peritoneum to prevent recurrence. The patient was discharged without complications, and was followed up for 1 year without recurrence. Laparoscopic repair could be considered as a treatment modality for diastasis recti. abstract_id: PUBMED:28333252 Single-Layer Plication for Repair of Diastasis Recti: The Most Rapid and Efficient Technique.
Background: Plication of the anterior rectus sheath is the most commonly used technique for repair of diastasis recti, but is also a time-consuming procedure. Objectives: The aim of this study was to compare the efficacy and time required to repair diastasis recti using different plication techniques. Methods: Thirty women with similar abdominal deformities, who had had at least one pregnancy, were randomized into three groups to undergo abdominoplasty. Plication of the anterior rectus sheath was performed in two layers with 2-0 monofilament nylon suture (control group) or in a single layer with either a continuous 2-0 monofilament nylon suture (group I) or using a continuous barbed suture (group II). Operative time was recorded. All patients underwent ultrasound examination preoperatively and at 3 weeks and 6 months postoperatively to monitor for diastasis recurrence. The force required to bring the anterior rectus sheath to the midline was measured at the supraumbilical and infraumbilical levels. Results: Patient age ranged from 26 to 50 years and body mass index from 20.56 to 29.17 kg/m2. A significant difference in mean operative time was found between the control and study groups (control group, 35 min:22 s; group I, 14 min:22 s; group II, 15 min:23 s; P < 0.001). Three patients in group II had recurrence of diastasis. There were no significant within- and between-group differences in tensile force on the aponeurosis. Conclusions: Plication of the anterior rectus sheath in a single layer with a continuous suture proved to be an efficient and rapid technique for repair of diastasis recti. Level of Evidence 1. abstract_id: PUBMED:29344785 Laparoscopic intracorporeal rectus aponeuroplasty (LIRA technique): a step forward in minimally invasive abdominal wall reconstruction for ventral hernia repair (LVHR). Background: Closing the defect (CD) during laparoscopic ventral hernia repair began to be performed in order to decrease seroma, to improve the functionality of the abdominal wall, and to decrease the bulging effect. However, tension at the incision after CD in large defects is related to an increased rate of pain and recurrence. We present the preliminary results of a new technique for medium midline hernias as an alternative to conventional CD. Methods: A prospective controlled study was conducted from January 2015 to January 2017 to evaluate an elective new procedure (LIRA) performed on patients with midline ventral hernias (4-10 cm width). The posterior rectus aponeurosis was opened lengthwise around the hernia defect using a laparoscopic approach to create two flaps and was then sutured. The size of the flaps was estimated using a mathematical formula. An onlay mesh was placed intraperitoneally, overlapping the fascial defect. The data analyzed included patient demographics, operative parameters, and complications. Computed tomography was performed preoperatively and postoperatively (1 month and 1 year) to evaluate recurrence, the inter-rectus distance, and seroma. Results: Twelve patients were included. Mean width of the defect was 5.5 cm. Average VAS (24 h) was 3.9, 1.1 (1 month), and 0 (1 year). Mean preoperative inter-rectus distance was 5.5 cm; postoperatively it was 2.2 cm (1 year). Radiological seroma at the first month was detected in 50%. Mean follow-up was 15 months. Conclusion: The LIRA technique could be considered as an alternative to conventional CD or endoscopic component separation for medium defects under 10 cm in width.
This technique achieved a "no tension" effect that could be related to a lower rate of postoperative pain, with no recurrence or bulging, and proved safe, feasible, and reproducible. abstract_id: PUBMED:32189143 Is mesh always necessary in every small umbilical hernia repair? Comparison of standardized primary sutured versus patch repair: retrospective cohort study. Purpose: A retrospective analysis was carried out to compare the results of patch repair using ready-made, synthetic mesh (PR) and sutured repair (SR) based on standard protocols. The cumulative recurrence rate was taken as the primary outcome. Pain at rest and during exercise, cosmetic effect and treatment satisfaction were chosen as the secondary endpoints. Methods: Adult patients after elective, open surgical repair of a single, primary umbilical hernia < 2 cm in diameter were included. Patients with incarceration or strangulation, after previous umbilical hernia repair or other abdominal surgical interventions were excluded. In the SR group, single-layer sutures were placed using the short-stitch technique. In the PR group, a 6.3-mm ready-made Parietene Ventral Patch (Medtronic) was used. Results: 161 patients (104 in the PR and 57 in the SR group) were included in the study (22-month follow-up). Nine recurrences were observed [six in the PR (5.8%) and three in the SR group (5.2%)]. In the PR group, three patients (2.9%) reported complaints at rest versus none in the SR group, while 18 patients (17.3%) in the PR group and 7 (12.3%) in the SR group reported pain during exercise. Conclusion: For the smallest umbilical hernias, the use of dense fascia suturing (short-stitch technique) may be an effective alternative to patch repair techniques in patients with no additional risk factors for recurrence. The mesh patch repair method is associated with a significantly higher risk of postsurgical pain. Diastasis recti is a factor favoring umbilical hernia recurrence after both pure tissue repair and patch repair. abstract_id: PUBMED:37378814 Quality of life, post-operative complications, and hernia recurrence following enhanced-view Totally Extra-Peritoneal (eTEP) Rives-Stoppa for incisional and primary ventral hernia repair. The purpose of this study was to evaluate the quality of life (QoL), early post-operative complications, and hernia recurrence rate following laparoscopic enhanced-view Totally Extra-Peritoneal (eTEP) Rives-Stoppa (RS) for incisional and primary ventral hernia repair. Retrospective review of a prospectively maintained database of all patients undergoing eTEP-RS between 2017 and 2020. Data retrieved included demographics and clinical and operative variables. QoL was assessed using the EuraHS-QoL scale prior to and following eTEP-RS. During the study period, 61 patients met the inclusion criteria. Age and BMI were 62 (60.4 ± 13.8) years and 29.7 (30.4 ± 6) kg/m2, respectively. Incisional hernia was the most common pathology (n = 40, 65%) followed by primary ventral hernia (n = 21, 35%), with 24 patients (39%) having a previous hernia repair. Diastasis-recti repair was undertaken in 34 patients (55%), a concomitant inguinal hernia was repaired in 6 patients (10%), and 13 patients (21%) underwent transversus abdominis release (TAR). Median follow-up time was 13 months and 15 patients (25%) had at least 2 years of follow-up. Hernia recurrence was found in 4 patients (6.5%). Pre-operative and post-operative EuraHS-QoL questionnaire scores were available for 46 patients (75%) and showed significant improvement in pain (7 vs. 0.5, p < 0.0001; 5 vs. 0.5, p < 0.0001; 5 vs. 1.5, p < 0.006), restrictions (median 5 vs. 0.5, p < 0.0001; 5 vs. 0, p < 0.0001; 5 vs. 1, p < 0.0001; 6.5 vs. 1.5, p < 0.0001), and cosmetic appearance (8 vs. 4, p < 0.0001). Abdominal wall repair using the eTEP-RS approach significantly improves subjective QoL variables, with acceptable post-operative complication and hernia recurrence rates at short-term follow-up. abstract_id: PUBMED:34297175 Subxiphoid top-down endoscopic totally preperitoneal approach (eTPA) for midline ventral hernia repair. Purpose: Midline abdominal wall hernia repair is among the most common surgical interventions performed worldwide. However, the optimal surgical technique remains controversial. To overcome the disadvantages of both open and transabdominal procedures, we developed a totally endoscopic preperitoneal approach (eTPA) with placement of a large mesh. Methods: From December 2019 to October 2020, 20 consecutive patients with small to medium-sized midline ventral hernias underwent repair using a completely preperitoneal subxiphoid top-down approach. The preperitoneal space was entered directly below the xiphoid, and careful endoscopic development of the plane between the peritoneum and posterior sheath of the rectus fascia was then performed behind the linea alba. The hernia sac and its contents were identified and reduced. The hernia defect was closed with sutures, and a mesh with an adequately high defect:mesh ratio was placed in the newly created preperitoneal space. Results: Twenty patients were enrolled in this study, including 14 with primary umbilical hernias, 4 with primary epigastric hernias, and 2 with recurrent umbilical hernias. Fifteen patients suffered from mild concomitant diastasis recti. All operations were successfully completed without conversion to open repair. The mean operative time was 105.3 min (range, 60-220 min). Postoperative pain was mild, and the mean visual analog scale score for pain was 1.8 on the first postoperative day. The average postoperative hospital stay was 1.8 days (range, 1-4 days). One patient developed a minor postoperative seroma, but it had no adverse impact on the final outcome. No patients developed recurrence during the 3- to 10-month follow-up period. Conclusions: The subxiphoid top-down totally endoscopic preperitoneal approach (eTPA) technique is feasible and effective. It may become a valuable alternative for the treatment of primary small- (defect size < 2 cm) and medium-sized (2-4 cm) midline ventral hernias, particularly in the presence of concomitant diastasis recti. abstract_id: PUBMED:36991266 Minimally invasive bilayer suturing technique for the repair of concomitant ventral hernias and diastasis recti. Background: According to EHS guidelines, mesh repair is recommended in cases of concomitant diastasis recti (DR) and a ventral hernia more than 1 cm in diameter. Since the higher risk of hernia recurrence in this situation may be attributed to the weakness of the aponeurotic layers, our current practice for hernias up to 3 cm is to use a bilayer suture technique. The study aimed at describing our surgical technique and evaluating the results of our current practice. Methods: The technique combines suturing repair of the hernia orifice and diastasis correction by suture, and includes an open step through a periumbilical incision and an endoscopic step. The study is an observational report on 77 cases of concomitant ventral hernias and DR.
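Pre/post comparisons of ordinal scores such as the EuraHS-QoL results above are paired by patient, for which a paired nonparametric test is the usual choice (the abstracts do not state their exact test, so this is an assumption). A Wilcoxon signed-rank sketch on synthetic paired scores:

```python
# Paired pre/post test on synthetic ordinal scores (hypothetical data; the
# sample size of 46 mirrors the eTEP-RS QoL subgroup for illustration only).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)
pre = rng.integers(4, 9, size=46)                            # scores before repair
post = np.clip(pre - rng.integers(2, 7, size=46), 0, None)   # improved scores after
stat, p = wilcoxon(pre, post)
print(stat, p)
```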
Results: The median diameter of the hernia orifice was 1.5 cm (0.8-3). The median inter-rectus distance was 60 mm (30-120) at rest and 38 mm (10-85) on leg raise by tape measurement, and 43 mm (25-92) and 35 mm (25-85) on CT scan, respectively. Postoperative complications involved 22 seromas (28.6%), 1 hematoma (1.3%) and 1 early diastasis recurrence (1.3%). At mid-term evaluation, with 19 (12-33) months of follow-up, 75 (97.4%) patients were evaluated. There were no hernia recurrences and 2 (2.6%) diastasis recurrences. The patients rated the result of their operation as excellent or good in 92% and 80% of the cases at global and esthetic evaluations, respectively. The result was rated bad at esthetic evaluation in 20% of the cases because the skin appearance was flawed, due to the discrepancy between the unchanged cutaneous layer and the narrowed musculoaponeurotic layer. Conclusion: The technique provides effective repair of concomitant diastasis and ventral hernias up to 3 cm. Nevertheless, patients should be informed that the skin appearance can be flawed, because of the discrepancy between the unchanged cutaneous layer and the narrowed musculoaponeurotic layer. abstract_id: PUBMED:35859952 A Novel Modification of Subcutaneous Onlay Endoscopic Repair of Midline Ventral Hernias With Diastasis Recti: An Indian Experience. Introduction A ventral hernia is a common problem in the population. Many patients with umbilical/epigastric hernia often present with diastasis recti (DR) too. Diastasis recti is the thinning of the linea alba with an abnormal increase in the distance between the recti without a concomitant fascial defect. The presence of diastasis recti complicates the repair of the existing umbilical/epigastric hernia. Repair of only the umbilical/epigastric hernia in the presence of DR results in incomplete repair and predisposes to recurrence. There are various options available for the repair of umbilical hernia with diastasis recti. Open hernia repairs often have unsatisfactory cosmetic outcomes and, furthermore, involve complications frequently associated with large incisions such as surgical site occurrences (SSO), pain, dermal flap necrosis, and delayed postoperative recovery, to name a few. The era of minimal access surgery leaves us with a vast array of creative solutions to this problem. Laparoscopic onlay repair has been given various names in the literature, e.g., minimally invasive linea alba reconstruction (MILAR), pre-aponeurotic endoscopic repair (REPA), endoscopic linea alba reconstruction (ELAR), subcutaneous onlay laparoscopic approach (SCOLA), and totally endoscopic assisted linea alba reconstruction (TESLAR), with similar principles for all the procedures. The average rate of seroma formation in these procedures varies from 5% to 40%. SCOLA was used in our study, with an added modification of the operating port and limitation of the extent of lateral dissection with the aid of spinal needles, resulting in more restrained dissection and the creation of smaller lipocutaneous flaps, leading to a reduced incidence of seroma formation. Methods Patients with symptomatic primary ventral hernia with concomitant diastasis recti were enrolled in the participating center from May 2020 to December 2021. Thirty patients were enrolled for this prospective study. The patients underwent subcutaneous laparoscopic onlay repair of midline ventral hernia with diastasis recti, with plication of the defect and onlay placement of a polypropylene mesh.
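Complication rates in series of this size carry wide uncertainty, which an exact binomial confidence interval makes visible. A sketch using the seroma count implied by the Results that follow (6.66% of 30 patients corresponds to 2 cases):

```python
# Exact (Clopper-Pearson) 95% CI for a small-sample complication proportion.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 2, 30   # 2/30 = 6.66%, as in the SCOLA series reported below
low, high = proportion_confint(count, nobs, alpha=0.05, method="beta")
print(f"{count / nobs:.3f} (95% CI {low:.3f}-{high:.3f})")
```

The interval spans roughly 1% to 22%, a useful reminder that single-series rates of this kind are rough estimates.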
Results Seroma and SSO developed in 6.66% of the patients. The incidence is congruent with the results available in the current literature. None of the patients had necrosis of the umbilical skin. There were no recurrences at the three-month follow-up. Conclusion Our modification of SCOLA is an ergonomically favorable procedure and has outcomes comparable to other approaches, with minimal complications. abstract_id: PUBMED:34194919 Robotically assisted enhanced-view totally extraperitoneal repair (eTEP) of a recurrent umbilical hernia in a patient with peritoneal dialysis. Background: Abdominal wall hernias are frequent in patients with peritoneal dialysis. Guidelines recommend an open hernia repair with extraperitoneal mesh placement to avoid access to the abdominal cavity. Method: We performed a lateral docking robotically assisted enhanced-view totally extraperitoneal repair (eTEP) of a recurrent umbilical hernia with diastasis recti in a patient with peritoneal dialysis due to polycystic kidney disease. After suturing of the midline, a 20 x 28 cm mesh was placed in the retrorectus space, covering the whole area of preparation while also overlapping all trocar sites. A drain was left in the retrorectus space until the first session of peritoneal dialysis (PD) showed no evidence of leakage. Result: Robotically assisted totally extraperitoneal hernia repair was feasible. The patient was able to continue peritoneal dialysis without intermittent hemodialysis. There was no leakage of the dialysate into the retrorectus space. Postoperative recovery was uneventful. Six months after surgery, the patient was free from pain and showed no signs of recurrence. Conclusion: Robotically assisted totally extraperitoneal hernia repair in patients with an umbilical hernia and peritoneal dialysis could be a promising surgical technique to combine the advantages of minimally invasive surgery with totally extraperitoneal mesh placement without access to the abdominal cavity. Answer: Yes, it is possible to repair diastasis recti and shorten the aponeurosis at the same time. A study assessing changes in the length of the musculoaponeurotic layer after diastasis recti repair using triangular mattress sutures found that this technique resulted in vertical shortening of the musculoaponeurotic layer immediately after the procedure and in the long term (PUBMED:24477520). This suggests that certain suture techniques can effectively address both the repair of diastasis recti and the shortening of the aponeurosis simultaneously.
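The hernia entries above report group medians with p-values but do not name the underlying statistical test. As a purely illustrative aside, the following minimal Python sketch shows the kind of nonparametric comparison that typically produces such p-values; the scores, sample sizes, and the choice of Mann-Whitney and Wilcoxon tests are assumptions for illustration, not details taken from the abstracts.

```python
# Illustrative only: the scores below are invented, and the abstracts above
# do not state which nonparametric test produced their reported p-values.
from scipy.stats import mannwhitneyu, wilcoxon

pre_op_pain = [5, 6, 5, 7, 4, 5, 6]    # hypothetical ordinal QoL scores (0-10)
post_op_pain = [1, 0, 2, 1, 0, 1, 2]

# Unpaired comparison of two independent groups (group-vs-group medians).
stat, p = mannwhitneyu(pre_op_pain, post_op_pain, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.4f}")

# Paired comparison (the same patients before and after repair).
stat, p = wilcoxon(pre_op_pain, post_op_pain)
print(f"Wilcoxon signed-rank W = {stat}, p = {p:.4f}")
```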
Instruction: Tension-free vaginal tape for stress urinary incontinence: Is there a learning curve? Abstracts: abstract_id: PUBMED:35795113 A case of tape infection 19 years after insertion of a tension-free vaginal tape sling. Introduction: Tape infection after insertion of tension-free vaginal tape is a well-known but rare complication. We report a patient who experienced a subcutaneous abscess 19 years after the surgery. Case Presentation: A 41-year-old woman presented with fever and lower abdominal pain. She had undergone tension-free vaginal tape insertion for stress urinary incontinence 19 years prior. She had asymptomatic dysuria. After an abscess incision and 1-week treatment with antibiotics, she underwent surgery to remove the tape and the abscess without complications. Conclusion: Tension-free vaginal tape insertion could be a potential risk factor for asymptomatic dysuria, resulting in urinary tract infection. In this case, removal of the tape was necessary for controlling the subcutaneous abscess resulting from the presence of the tension-free vaginal tape. abstract_id: PUBMED:24592004 Development of bladder stone after tension-free vaginal tape procedure: a case report. We present a case of a bladder stone which formed on the intravesical portion of tension-free vaginal tape material secondary to bladder perforation. Eight years previously, a tension-free vaginal tape (TVT) operation had been performed. The patient was referred to the hospital with persistent urinary infection and urinary incontinence. An intravesical stone that had formed on the TVT sling material was detected by cystoscopy, and it was removed with the sling material by a suprapubic cystostomy approach. When evaluating patients presenting with urinary symptoms after a TVT procedure, bladder complications should be kept in mind. abstract_id: PUBMED:31886710 Learning Curve for the Tension-Free Vaginal Tape-Obturator Procedure for Female Stress Urinary Incontinence: A Prospective Single-Surgeon Study. Objective: To evaluate the learning curve in an untrained resident surgeon for the initial case series of tension-free vaginal tape-obturator (TVT-O) to treat stress urinary incontinence. Materials and Methods: A retrospective observational study was conducted at Changhai Hospital, Shanghai, China, between March 2014 and June 2018. All consecutive women included were treated by the TVT-O procedure performed by one surgeon working under the supervision of an expert surgeon. Clinical features, estimated blood loss, operative time, postoperative hospital stay, total hospital stay, adverse events, and subjective and objective cure rates were recorded. Learning curve patterns were estimated to determine the number of cases needed to reach a plateau using the moving average method. Results: In total, 188 patients were included for analysis. Patients ranged from 39 to 91 years, with an average age of 57.5 ± 9.7 years. The mean operative time was 32.0 minutes (range 20-60). Operative time and blood loss decreased with increasing expertise, whereas postoperative hospital stay and total hospital stay were not influenced by the number of procedures performed. The number of cases required to reach a plateau was ∼30. Objective and subjective cure rates were 88.7% and 88.2% at 12 months, respectively. Groin pain was the most common postoperative complication, which continued to be present in 11.7% of patients at 12 months after surgery.
Conclusions: The TVT-O procedure showed encouraging objective and subjective outcomes and low complication rates, even at the initial stage of the learning curve. Thirty cases were required for a naïve resident surgeon to learn TVT-O procedures. However, long-term outcomes and complications caused by the synthetic sling still require further follow-up. abstract_id: PUBMED:27670709 Complete Mesh Infection Following Repeat Retropubic Tension-Free Vaginal Tape. Background: Midurethral slings are the most widely used surgical treatment for stress urinary incontinence. Complications include bladder injury, voiding dysfunction, mesh exposure/erosion, dyspareunia, and failure to correct the incontinence. Complete mesh infection is rare. Case: A 48-year-old woman underwent a repeat retropubic tension-free vaginal tape (TVT) procedure for stress urinary incontinence. She presented eight weeks postoperatively with a tender suprapubic mass and urinary retention. A CT scan demonstrated thickening of the bladder wall, and cystoscopy revealed severe inflammation with no intramural injury. The sling appeared to be infected and was removed vaginally. The patient's symptoms resolved following removal of the sling. Conclusion: Mesh infection following a midurethral sling procedure is rare. In suspected cases, cystoscopy should be performed to rule out bladder injury. Antibiotic therapy may be successful, but removal of the sling may be required for resolution of symptoms. abstract_id: PUBMED:35242636 Educational value assessment of YouTube surgical videos of tension-free vaginal tape obturator (TVT-O) and trans-obturator vaginal tape (TOT). Background: YouTube, as a widely used video website around the world, contains a large number of surgical teaching videos, providing a good platform for doctors to learn surgery, but its content and quality are uneven. Tension-free vaginal tape obturator (TVT-O) and trans-obturator vaginal tape (TOT) are common surgical methods for the treatment of stress urinary incontinence (SUI), and there are many videos on YouTube teaching these procedures. We aimed to assess the educational value of surgical videos of TVT-O and TOT on YouTube. Methods: A comprehensive search was conducted for "tension-free vaginal tape obturator" and "trans-obturator vaginal tape" on YouTube on August 22. After referring to the LAParoscopic surgery Video Educational GuidelineS (LAP-VEGaS) and previous studies, we developed a checklist containing 5 major items and 18 minor items. SPSS 26 was used for data and correlation analysis. Results: A total of 36 videos were assessed. The average number of days available was 1,956.6 days (range, 190-4,152 days), and the average length was 9.7 min (range, 1.8-73.6 min, SD: 13.65). Video definition was classified as high, moderate, or low, accounting for 22%, 36%, and 42% of videos, respectively. The average score of the included videos was 7.39 (range, 3-16, SD: 3.57). The correlation analysis indicated that the video quality score correlated to some extent with video length but not significantly with other factors, and audience likes showed no significant correlation with any other factor. Conclusions: YouTube still lacks videos of high educational value about TVT-O and TOT, and the existing videos are deficient in the explanation of critical steps, the key points of patients' perioperative management, and the application of auxiliary teaching tools.
This further indicates the importance of improving educational surgical videos and of developing an authoritative checklist for urologic surgery. abstract_id: PUBMED:32939345 An unusual late complication with tension-free vaginal tape (TVT): A case report. Background: Stress urinary incontinence is a common problem affecting women's quality of life. The retropubic mid-urethral sling accounts for the majority of surgical interventions for stress incontinence. Complications of the procedure are rare. We present an unusual late complication with tension-free vaginal tape (TVT). Case: Two years after insertion of a TVT for stress incontinence, a patient noticed a foreign body intermittently exiting through her anus. Initial clinical examinations and investigations were unremarkable. Five years after insertion she re-presented with faecal matter in the vagina and a TVT erosion. Imaging confirmed a colo-vaginal fistula. She underwent a vaginal excision of the TVT, but this did not cure her problem, and she subsequently had a laparoscopic sigmoid colectomy. Thereafter she remained well and asymptomatic. Conclusion: This case report describes an unusual late complication. It is likely that the left arm of the mesh entered the serosa of the sigmoid colon and eroded into it over time. Complications associated with TVT are rare, but when they occur they need to be recorded. Only surgeons who have experience and expertise in the procedure should perform it, such as subspecialty-trained urogynaecologists. abstract_id: PUBMED:35459159 Case report of vaginal erosion and recurrence of stress urinary incontinence due to shallow placement of tension-free vaginal tape. Background: Suburethral sling with tension-free vaginal tape (TVT) has become a popular treatment for stress urinary incontinence (SUI). Erosion of the tape into the vagina is rare. Very few patients present with vaginal tape erosion and recurrence of SUI. Case Presentation: A 49-year-old female patient with stress urinary incontinence was treated with a retropubic suburethral TVT sling. Two months later, recurrent symptoms of SUI developed. Six months later, the patient complained of repeated vaginal discharge and foreign body sensation. Physical examination revealed a 1-cm-long tape extrusion at the left anterior vaginal wall beside the midline. Cystourethroscopy revealed no urethral mesh erosion. Surgical removal of the extruded tape revealed that the left arm of the tape lay within the vaginal mucosa layer rather than between the whole thickness of the vaginal mucosa and the urethra. The tape around the urethra was dissected and removed. A new retropubic tape was placed simultaneously. At the 8-month follow-up after surgery, the patient was continent without vaginal tape exposure. Conclusions: Vaginal mesh erosion should be considered in a patient who presents with sustained vaginal discharge after being treated with a suburethral sling. It is important to place the tape between the whole thickness of the vaginal mucosa and the urethra in SUI surgery. A shallow placement of the tape may lead to vaginal tape erosion and recurrence of SUI. These complications can be avoided by following the correct manipulation procedure and referring to the tissue layer anatomy.
Introduction: The retropubic tension-free vaginal tape procedure has been the preferred method for primary surgical treatment of stress and stress-dominant mixed urinary incontinence in women for more than 20 years. In this study, we assessed associations between the surgeon's experience with the primary tension-free vaginal tape procedure and both perioperative complications and recurrence rates. Material And Methods: Using a consecutive case-series design, we assessed 596 patients treated with primary retropubic tension-free vaginal tape surgery performed by 18 surgeons from 1998 through 2012, with follow-up through 2015 (maximum follow-up time: 10 years per patient). Data on perioperative complications and recurrence of stress urinary incontinence from medical records were transferred to a case report form. The surgeon's experience with the tension-free vaginal tape procedure was defined as the number of such procedures performed as lead surgeon (1-19 ["beginners"], 20-49, and ≥50 procedures). All analyses were done with a 5% level of statistical significance. We applied the Chi-square test in the assessment of perioperative complications. The regression analyses of recurrence rate by number of tension-free vaginal tape procedures performed were restricted to the three surgeons who performed ≥50 procedures. Results: We found a significantly higher rate of bladder perforations (P = .03) and a higher rate of urinary retention among patients whose tension-free vaginal tape procedures were performed by "beginners" (P = .06). We observed a significant reduction in recurrence rates with an increasing number of tension-free vaginal tape procedures for one surgeon (P = .03). Conclusions: The surgeon's experience with the tension-free vaginal tape procedure is associated with the risk of bladder perforation and urinary retention, and may be associated with the long-term effectiveness of the procedure. abstract_id: PUBMED:33111326 The tension-free vaginal tape operation: Is the inexperienced surgeon a risk factor? Learning curve and Swedish quality reference. Introduction: To reduce the risk of avoidable damage to the patient when training surgeons, one must predefine what standards to achieve, as well as supervise and monitor trainees' performance. The aim of this study is to establish a quality reference, to devise comprehensive tension-free vaginal tape (TVT) learning curves, and to compare trainees' results to our quality reference. Material and Methods: Using the Swedish National Quality Register for Gynecologic Surgery, we devised TVT learning curves for all Swedish TVT trainees from 2009 to 2017, covering their first 50 operations. These outcomes were compared with the results of Sweden's most experienced TVT surgeons for 14 quality variables. Results: In all, 163 trainees performed 2804 operations, and 40 experienced surgeons performed 3482 operations. For our primary outcomes - perioperative bladder perforations and urinary continence after 1 year - as well as re-admission, re-operation, and days to all daily living activities, there was no statistically significant difference between trainees and experienced surgeons at any time. For the first 10 trainee operations only, there were small differences in favor of the experienced surgeons: patient-reported minor complications after discharge (14% vs 18.4%, P = .002), 1-year patient-reported improvement (95.9% vs 91.8%, P < .001), and patient satisfaction (90.9% vs 86.2%, P = .002).
For both trainee operations 1-10 and 11-50, compared with experienced surgeons, operation time (33.8 vs 22.2 min, P < .001; 28.3 vs 22.2 min, P < .001) and hospital stay time (0.16 vs 0.06 days, P < .001; 0.1 vs 0.06 days, P < .001) were longer, perioperative blood loss was higher (27.7 vs 24.4 mL, P = .001; 26.5 vs 24.4 mL, P = .004), and patient-reported catheterization within 8 weeks was higher (3.9% vs 1.8%, P < .001; 2.5% vs 1.8%, P = .001). One-year voiding difficulties for trainee patients (operations 1-10: 14.2%, P = .260; operations 11-50: 14.5%, P = .126) were comparable to those of the experienced surgeons (12.4%). Conclusions: There is a learning curve for several secondary outcomes, but the small effect size makes it improbable that the difference has clinical significance. Our national Swedish results show that it is possible to train new TVT surgeons without exposing patients to noteworthy extra risk and to achieve results equivalent to those of the most experienced Swedish surgeons. abstract_id: PUBMED:25735602 Obturator Neuralgia: A Rare Complication of Tension-free Vaginal Tape Sling-Complete Resolution After Laparoscopic Tension-free Vaginal Tape Removal. Study Objective: To show a technique of retropubic tension-free vaginal tape (TVT) removal using both a transvaginal and laparoscopic approach in the treatment of a rare condition, obturator neuralgia. Design: A step-by-step explanation of the patient's condition, diagnosis, surgical technique, and postoperative results using video, pictures, and medical illustrations (educational video). Setting: TVT retropubic slings have become the gold standard for the treatment of stress urinary incontinence over the last decade. Despite high cure rates, the TVT is not without potential complications. Typical complications include urinary retention, incomplete bladder emptying, frequency, urgency, urethral erosion, vaginal extrusion, vaginal pain, and dyspareunia. The most common indication for sling removal/revision is chronic pain. TVT obturator neuralgia is a rare and specific type of chronic pain that is normally associated with transobturator tape slings. The purpose of this video is to present an extremely rare complication of TVT retropubic slings, present symptoms and signs of obturator nerve compression, show the normal and the actual position of this patient's TVT sling, describe the laparoscopic removal of the TVT sling, and present the postoperative course and resolution of the patient's pain. The patient signed a release for her video to be used for educational and teaching purposes. Intervention: A combined transvaginal and laparoscopic approach in a patient with lower abdominal, levator, and obturator-type pain after a TVT retropubic procedure. Conclusion: In patients suffering from obturator neuralgia after a retropubic sling, surgeons should include the sling as a potential causative factor in the differential diagnosis. Surgeons should consider removing the sling based on the patient's symptoms. If the patient suffers from only vaginal pain and dyspareunia, then the surgeon should consider only the removal of the vaginal portion of the sling. In patients with obturator neuralgia, retropubic, and/or lower abdominal pain, one should consider a combined transvaginal and laparoscopic approach in an attempt to remove the majority of the sling and release tension between 2 points of fixation. Answer: Yes, there is a learning curve associated with the tension-free vaginal tape (TVT) procedure for stress urinary incontinence.
A study evaluating the learning curve in an untrained resident surgeon for the initial case series of tension-free vaginal tape-obturator (TVT-O) found that operative time and blood loss decreased with an increase in the level of expertise, indicating that experience plays a role in the efficiency and safety of the procedure. The study concluded that approximately 30 cases were required for a naïve resident surgeon to learn TVT-O procedures (PUBMED:31886710). Another study assessing the association between a surgeon's experience with the primary tension-free vaginal tape procedure and both perioperative complications and recurrence rates found that there was a significantly higher rate of bladder perforations and a higher rate of urinary retention among patients whose TVT procedures were performed by "beginners." The study also observed a reduction in recurrence rates with increasing numbers of TVT procedures performed by a surgeon, suggesting that experience may be associated with the long-term effectiveness of the procedure (PUBMED:32104906). Furthermore, a study using the Swedish National Quality Register for Gynecologic Surgery to establish a quality reference and devise comprehensive TVT learning curves for trainees compared to experienced surgeons found that there is a learning curve for several secondary outcomes. However, the study concluded that it is possible to train new TVT surgeons without exposing patients to noteworthy extra risk and achieve results equivalent to those of the most experienced surgeons (PUBMED:33111326). These studies collectively indicate that there is indeed a learning curve for the TVT procedure, and surgeon experience influences both the immediate and long-term outcomes of the surgery for stress urinary incontinence.
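The TVT-O study cited above (PUBMED:31886710) locates the learning-curve plateau (~30 cases) with a moving-average method over per-case outcomes such as operative time. The following is a minimal Python sketch of that idea; the simulated operative times and the plateau rule are illustrative assumptions, not the study's actual data or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical operative times (minutes) for 60 consecutive cases:
# times drift down from ~55 min toward ~30 min as experience accumulates.
case_no = np.arange(1, 61)
op_time = 30 + 25 * np.exp(-case_no / 12) + rng.normal(0, 3, size=60)

window = 10
moving_avg = np.convolve(op_time, np.ones(window) / window, mode="valid")

# Arbitrary plateau rule: the first case after which the moving average
# changes by less than 1 minute over the next ten cases.
deltas = np.abs(np.diff(moving_avg))
plateau = next((i + window for i in range(len(deltas) - 10)
                if np.all(deltas[i:i + 10] < 1.0)), None)
print(f"Moving average suggests a plateau around case {plateau}")
```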
Instruction: Can an airway challenge test predict respiratory diseases? Abstracts: abstract_id: PUBMED:23683511 Can an airway challenge test predict respiratory diseases? A population-based international study. Background: Evidence on the longitudinal association of airway responsiveness with respiratory diseases is scarce. The best indicator of responsiveness is still undetermined. Objective: We investigated the association of airway responsiveness with the incidence of asthma, chronic obstructive pulmonary disease (COPD), and allergic rhinitis. Methods: We studied 3851 subjects who underwent spirometry and methacholine challenge tests both at baseline (1991-1993), when they were 20 to 44 years old, and at follow-up (1999-2002) in the European Community Respiratory Health Survey. Airway responsiveness was defined based on the methacholine dose-response slope on both occasions. Incidence rate ratios for the association of airway responsiveness with disease occurrence were computed by using Poisson regression. Results: With respect to reference (slope of the fourth quintile or greater), subjects with the greatest degree of airway responsiveness (slope less than the first quintile) showed the greatest risk of developing asthma, COPD, and allergic rhinitis (incidence rate ratios of 10.82, 5.53, and 4.84, respectively; all P < .01). A low slope predicted disease occurrence, even in subjects who did not reach a 20% decrease in FEV1 at the cumulative dose of 1 mg of methacholine (PD20 >1 mg). A decrease in slope over time was an independent predictor of disease risk. Conclusion: Airway responsiveness predicted new-onset asthma, COPD, and allergic rhinitis. Our study supports the use of a continuous noncensored indicator of airway responsiveness, such as the slope of the methacholine dose-response curve, in clinical practice and research because it showed clear advantages over PD20. abstract_id: PUBMED:32735372 Airway cholinergic history modifies mucus secretion properties to subsequent cholinergic challenge in diminished chloride and bicarbonate conditions. New Findings: What is the central question of this study? What is the impact of airway cholinergic history on the properties of airway mucus secretion in a cystic fibrosis-like environment? What is the main finding and its importance? Prior cholinergic challenge slightly modifies the characteristics of mucus secretion in response to a second cholinergic challenge in a diminished bicarbonate and chloride transport environment. Such modifications might lead to retention of mucus on the airway surface, thereby potentiating exacerbations of airway disease. Abstract: Viral infections precipitate exacerbations in many airway diseases, including asthma and cystic fibrosis. Although viral infections increase cholinergic transmission, few studies have examined how cholinergic history modifies subsequent cholinergic responses in the airway. In our previous work, we found that airway resistance in response to a second cholinergic challenge was increased in young pigs with a history of airway cholinergic stimulation. Given that mucus secretion is regulated by the cholinergic nervous system and that abnormal airway mucus contributes to exacerbations of airway disease, we hypothesized that prior cholinergic challenge would also modify subsequent mucus responses to a secondary cholinergic challenge. Using our established cholinergic challenge-rechallenge model in pigs, we atomized the cholinergic agonist bethanechol or saline control to pig airways. 
Forty-eight hours later, we removed tracheas and measured mucus secretion properties in response to a second cholinergic stimulation. The second cholinergic stimulation was conducted in conditions of diminished chloride and bicarbonate transport to mimic a cystic fibrosis-like environment. In pigs previously challenged with bethanechol, a second cholinergic stimulation produced a mild increase in sheet-like mucus films; these films were scarcely observed in animals originally challenged with saline control. The subtle increase in mucus films was not associated with changes in mucociliary transport. These data suggest that prior cholinergic history might modify mucus secretion characteristics with subsequent stimulation in certain environmental conditions or disease states. Such modifications and/or more repetitive stimulation might lead to retention of mucus on the airway surface, thereby potentiating exacerbations of airway disease. abstract_id: PUBMED:24040180 Differential effects of allergen challenge on large and small airway reactivity in mice. The relative contributions of large and small airways to hyperresponsiveness in asthma have yet to be fully assessed. This study used a mouse model of chronic allergic airways disease to induce inflammation and remodelling and determine whether in vivo hyperresponsiveness to methacholine is consistent with in vitro reactivity of trachea and small airways. Balb/C mice were sensitised (days 0, 14) and challenged (3 times/week, 6 weeks) with ovalbumin. Airway reactivity was compared with saline-challenged controls in vivo by assessing whole-lung resistance, and in vitro by measuring the force of tracheal contraction and the magnitude/rate of small airway narrowing within lung slices. Increased airway inflammation, epithelial remodelling, and fibrosis were evident following allergen challenge. In vivo hyperresponsiveness to methacholine was maintained in the isolated trachea. In contrast, methacholine induced slower narrowing, with reduced potency, in small airways compared to controls. In vitro incubation with IL-1/TNFα did not alter reactivity. The hyporesponsiveness to methacholine in small airways within lung slices following chronic ovalbumin challenge was unexpected, given hyperresponsiveness to the same agonist both in vivo and in vitro in tracheal preparations. This finding may reflect the altered interactions of small airways with surrounding parenchymal tissue after allergen challenge to oppose airway narrowing and closure. abstract_id: PUBMED:28093105 Comparison of submaximal exercise test results and severity of brachycephalic obstructive airway syndrome in English bulldogs. Canine brachycephalic obstructive airway syndrome (BOAS) is a complex respiratory disease related to congenitally flattened facial and skull anatomy. BOAS causes respiratory distress, heat and exercise intolerance, and gastrointestinal signs. English bulldogs (EB) have a high prevalence of BOAS. Currently, the severity of BOAS signs in veterinary practice is assessed subjectively. To reduce BOAS in brachycephalic breeds, an objective and easy-to-use tool could help breeders select healthier animals. Exercise tests, such as the 6 min walk test (distance walked measured) or the 1000 m walk test (duration measured), could be used to assess the severity of BOAS, as exercise intolerance and impaired recovery are key features of BOAS.
This study evaluated the severity of signs and anatomic components of BOAS in a group of prospectively recruited young adult EBs (n = 28) and investigated the correlations of the 6 min walk test or the 1000 m walk test with a veterinary assessment of BOAS severity, using an ordinal 4-level scale of respiratory signs. EBs with more severe BOAS walked a shorter distance and more slowly, and their recovery from exercise took longer than in those with only mild signs of BOAS. Control dogs of different breeds (n = 10) performed the exercise tests significantly better (i.e., longer distance, faster time, and faster recovery) than EBs. Increases in body temperature during exercise were significantly higher in EBs than in controls. The results of this study support the use of exercise tests for objective evaluation of the severity of BOAS in EBs. abstract_id: PUBMED:16298308 Cold air challenge and specific airway resistance in preschool children. Important asthma outcomes such as lung function and bronchial hyperresponsiveness are probably determined in early childhood. Early and longitudinal objective assessment of lung function and bronchial hyperresponsiveness is necessary to enable early diagnosis, monitor intervention, and improve prognosis in preschool children. Cold air challenge and plethysmographic measurement of specific airway resistance (sRaw) are feasible candidate methods for diagnosis, clinical monitoring, and research during this critical period of lung growth and development. Methodology and practical aspects of cold air challenge and assessment of sRaw in preschool children are reviewed. Reference values are provided for sRaw and have allowed discrimination between health and respiratory disease, both in cross-sectional and longitudinal studies. Bronchial hyperresponsiveness can be determined with acceptable repeatability and provides good discrimination between asthmatic and healthy children. The effects of classic anti-asthmatic therapies have also been documented with these techniques. The need for further standardisation and improvement of these methods and future perspectives are outlined. abstract_id: PUBMED:34362059 Airway Abnormalities in Adult Mucopolysaccharidosis and Development of Salford Mucopolysaccharidosis Airway Score. (1) Background: Mucopolysaccharidoses (MPS) are a heterogeneous group of lysosomal storage disorders caused by the absence of enzymes required for degradation of glycosaminoglycans (GAGs). GAG deposition in tissues leads to progressive airway narrowing and/or tortuosity. Increased longevity of patients has posed newer problems, especially involving the airway. This study aims to characterise various airway abnormalities in adult MPS from a regional centre and proposes a method to quantify the severity of the airway disease. (2) Methods: Retrospective case-note review, clinical examination, endoscopy, cross-sectional imaging, 3-dimensional reconstruction, and physiological investigations were used to assess the airway abnormalities. Quantitative assessment of airway severity was performed using a validated 15-parameter questionnaire to derive the Salford Mucopolysaccharidosis Airway Score (SMAS). (3) Results: Thirty-one adult MPS patients (21M/9F; median 26.7 years; range 19-42 years) were reviewed. There were 9 MPS I, 12 MPS II, 2 MPS III, 5 MPS IV, 2 MPS VI, and 1 MPS VII. Airway abnormalities in each MPS type are described. Patients scoring more than 35 on SMAS had some form of airway intervention.
An area under the curve of 0.9 was noted at a score of 25, so an SMAS above 25 may predict a difficult airway and potential complications. Pearson's correlations between SMAS and height, weight, and BMI were poor (p < 0.05). (4) Conclusions: Airway abnormalities in adult MPS are varied and complex. Assessment of the airway should be holistic and include multiple parameters. An objective multidimensional score such as SMAS may help to predict and manage difficult airways, warranting further investigation and validation. abstract_id: PUBMED:12704089 How does airway inflammation modulate asthmatic airway constriction? An antigen challenge study. During the late-phase (LP) response to inhaled allergen, mediators from neutrophils and eosinophils are released within the airways, resembling what occurs during an asthma attack. We compared the distribution of obstruction and the degree of reversibility that follows a deep inspiration (DI) during early-phase (EP) and LP responses in nine asthmatic subjects challenged with allergen. Heterogeneity of constriction was assayed by determining frequency dependence of dynamic lung resistance and elastance, airway caliber by tracking airway resistance during a DI, and airway inflammation by measuring inflammatory cells in induced sputum postchallenge. Despite a paucity of eosinophils in the sputum at baseline (<1% of nonsquamous cells), asthmatic subjects showed a substantial EP response with highly heterogeneous constriction and reduced capacity to maximally dilate airways. The LP was associated with substantial airway inflammation in all subjects. However, five subjects showed only mild LP constriction, whereas four showed more marked LP constriction characterized by heterogeneous constriction similar to EP. Bronchoconstriction during LP was fully alleviated by administration of a bronchodilator. These findings, together with the impaired bronchodilatory response during a DI, indicate a physiological abnormality in asthma at the smooth muscle level and indicate that airway inflammation in asthma is associated with a highly nonuniform pattern of constriction. These data support the hypothesis that variability in responsiveness among asthmatic subjects derives from intrinsic differences in smooth muscle response to inflammation. abstract_id: PUBMED:11512631 Occupational airway diseases. In this review we describe characteristics of occupational airway diseases, as well as physical and chemical characteristics of agents inducing airway disease. Occupational airway diseases include industrial bronchitis, reactive airway dysfunction syndrome, bronchiolitis obliterans, and occupational asthma. High- and low-molecular-weight substances associated with occupational airway disease are listed. The importance of host factors is stressed. Diagnostic approaches, particularly indicators for specific challenge testing with occupational materials, are described. Preventive and control measures are presented. abstract_id: PUBMED:15725184 Airway remodelling and inflammation in sheep lungs after chronic airway challenge with house dust mite. Background: Remodelling of airway walls is a significant morbidity factor in patients suffering from chronic asthma. The relationship between airway remodelling and the inflammatory response is not well defined. Sheep have been used extensively to model airway disease in humans and represent a suitable model to examine airway remodelling.
Objective: The aim of the present study was to develop a model for airway remodelling in sheep after repeated challenge with a relevant human allergen to assess the relationship of airway remodelling with inflammation. Methods: Repeated challenges of house dust mite (HDM) extract or saline (control) were administered to local lung segments of sheep for a period of 6 months. After the last challenge, lung tissues from both challenged and unchallenged lung compartments of the same sheep were compared using morphometric image analysis and (immuno)histological studies. Results: All HDM-challenged sheep developed significant bronchoalveolar lavage eosinophilia during challenge. At the end of the challenge period, significant increases in airway collagen and airway smooth muscle content were found in a proportion (3/7) of the HDM-challenged sheep. Hyperplasia of goblet cells and epithelial cells was observed in small bronchi and bronchioles exposed to allergen. Irrespective of airway remodelling changes, all HDM-challenged, but no saline-challenged, sheep displayed significant increases in mast cells in alveolar septa and airway walls of challenged lungs compared with untreated lung compartments of the same sheep. Significant increases were also observed in CD5 and gamma delta T cell subpopulations in all allergen-exposed lung parenchyma. Conclusion: A proportion of atopic sheep develop typical airway remodelling changes after chronic allergen challenge, which is not directly related to the level of allergic inflammation. abstract_id: PUBMED:25430430 Airway complications after lung transplantation. Airway complications after lung transplantation present a formidable challenge to the lung transplant team, ranging from mere unusual images to fatal events. The exact incidence of complications varies widely depending on the type of event, and a universal characterization of the airway findings is still evolving. Management is also wide-ranging. Simple observation or simple balloon bronchoplasty is sufficient in many cases, but vigilance following more severe necrosis is required for late development of both anastomotic and nonanastomotic airway strictures. Furthermore, the impact of coexisting infection, rejection, and medical disease associated with high-level immunosuppression further complicates care. Answer: Yes, an airway challenge test can predict respiratory diseases. A study investigating the association of airway responsiveness with the incidence of asthma, chronic obstructive pulmonary disease (COPD), and allergic rhinitis found that subjects with the greatest degree of airway responsiveness showed the greatest risk of developing these diseases. The study utilized spirometry and methacholine challenge tests and concluded that airway responsiveness predicted new-onset asthma, COPD, and allergic rhinitis. The study supports the use of a continuous noncensored indicator of airway responsiveness, such as the slope of the methacholine dose-response curve, in clinical practice and research (PUBMED:23683511). Additionally, other studies have shown that airway challenge tests can provide insights into the characteristics of airway diseases. For example, airway cholinergic history was found to modify mucus secretion properties to subsequent cholinergic challenge in diminished chloride and bicarbonate conditions, which might lead to retention of mucus on the airway surface and exacerbations of airway disease (PUBMED:32735372).
Another study demonstrated differential effects of allergen challenge on large and small airway reactivity in mice, indicating that airway challenge tests can reveal specific patterns of airway reactivity associated with inflammation and remodeling (PUBMED:24040180). Furthermore, airway challenge tests have been used to assess the severity of brachycephalic obstructive airway syndrome in English bulldogs, with results correlating with the severity of the disease (PUBMED:28093105). Cold air challenge and specific airway resistance measurements have been proposed as feasible methods for early and longitudinal assessment of lung function and bronchial hyperresponsiveness in preschool children, which is critical for early diagnosis and monitoring of respiratory diseases (PUBMED:16298308). In summary, airway challenge tests have been shown to be predictive of respiratory diseases and are useful tools for assessing the severity and characteristics of airway responsiveness, which is relevant for diagnosis, monitoring, and research in respiratory health.
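The ECRHS analysis cited in this entry (PUBMED:23683511) derives incidence rate ratios from Poisson regression on an airway-responsiveness indicator. The following is a minimal Python sketch of that modelling step on simulated data; the variable names, effect sizes, and single confounder are illustrative assumptions, not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
# Simulated cohort: indicator for a methacholine dose-response slope below the
# first quintile (most responsive) and follow-up time in person-years.
df = pd.DataFrame({
    "slope_q1": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "years": rng.uniform(5, 10, n),
})
# Simulate new-onset disease with a higher rate in the most responsive group.
rate = 0.005 * np.exp(1.5 * df.slope_q1 + 0.3 * df.smoker)
df["cases"] = rng.poisson(rate * df.years)

# Poisson regression with a log person-time offset; exponentiated
# coefficients are incidence rate ratios (IRRs).
model = smf.glm("cases ~ slope_q1 + smoker", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["years"])).fit()
print(np.exp(model.params))
```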
Instruction: Is Your Neighborhood Designed to Support Physical Activity? Abstracts: abstract_id: PUBMED:34205544 Associations between Psychosocial Variables, Availability of Physical Activity Resources in Neighborhood Environment, and Out-of-School Physical Activity among Chinese Adolescents. This study aimed to evaluate the relationship between psychosocial variables (peer support, parental support, autonomous motivation, and controlled motivation), availability of physical activity resources in the neighborhood environment, and out-of-school moderate to vigorous physical activity (MVPA) among Chinese adolescents. The Family Life, Activity, Sun, Health, and Eating (FLASHE) Study questionnaire was used to collect information on demographics, socioeconomic status, psychosocial variables, available physical activity resources in the neighborhood environment, and minutes of out-of-school MVPA. ANOVA and multiple regression analyses were performed. The mean age of the 3833 adolescents included in our analysis was 14.7 years (SD = 1.7). Peer support (b = 9.35, 95% CI: 7.55-11.15), autonomous motivation (b = 6.46, 95% CI: 4.09-8.82), parental support (b = 3.90, 95% CI: 1.75-6.07), and availability of physical activity resources in the neighborhood environment (b = 3.18, 95% CI: 1.99-4.36) were significantly associated with out-of-school MVPA (p < 0.05). Controlled motivation was not significantly related to minutes of out-of-school MVPA. Boys spent more time on out-of-school MVPA than girls (p < 0.001) and had higher levels of peer support, parental support, and motivation (p < 0.05). Our findings suggest that interventions targeting out-of-school MVPA among Chinese adolescents should focus on psychosocial variables and the neighborhood environment. abstract_id: PUBMED:25562756 Does social support mediate the relationship among neighborhood disadvantage, incivilities, crime and physical activity? Objective: Neighborhood disadvantage (ND), incivilities, and crime disproportionately impact minority women, discouraging physical activity (PA). Social support (SS) is a cultural tool promoting PA in minority women. Socially supportive environments may promote PA in disadvantaged neighborhoods, yet few studies have investigated the mediating role of social support among minority women. This study examined SS as a mediator among ND, incivilities, crime, and PA. Methods: The Health Is Power study aimed to increase PA in African American and Hispanic Latina women (N=410) in Houston and Austin, TX. ND and crime data were taken from the National Neighborhood Crime Study. Incivilities were measured using the Pedestrian Environment Data Scan (PEDS). SS was measured using the Family and Friend Support for Exercise Habits scale, and physical activity was measured using the International Physical Activity Questionnaire. Linear regression analysis was used to examine SS as a mediator following the Baron and Kenny method. Results: ND was negatively associated with PA and SS. SS was not a mediator, as it was not significantly associated with ND, crime, and incivilities (F(3,264)=2.02, p>.05) or PA (F(1,266)=3.8, p=.052). Conclusion: ND significantly discourages PA and limits SS. Future research should focus on developing strategies to overcome these negative environmental factors. abstract_id: PUBMED:33231917 Influence of Neighborhood Environment on Physical Activity Participation among Children with Autism Spectrum Disorder.
Despite the positive influence of environmental factors on the physical activity (PA) levels of children, further investigation is necessary when considering the unique characteristics of children with autism spectrum disorder (ASD). The purpose of this study was to examine the relationship between neighborhood environment factors and PA among children with ASD by examining the extent to which (a) the built environment, (b) neighborhood safety, and (c) neighborhood support contribute to PA. A secondary data analysis of the combined 2016 and 2017 dataset of the National Survey of Children's Health was conducted. The total sample included 14,944 children aged 6-17 years; of this, a sample of 494 children had a parent-reported diagnosis of ASD. Multivariate logistic regressions were conducted to explore the unique influences of the above three neighborhood environment factors on PA engagement. Among children with and without ASD, an association was found between feeling safe and engaging in PA; that is, the odds of engaging in PA were greater among children living in safe neighborhoods than among children who were not. Surprisingly, other neighborhood factors did not have a significant relationship with PA participation among children with ASD at this time. Future studies should consider how to incorporate the support of others and a feeling of safety into the neighborhood, along with how to improve the accessibility of the built environment, in order to increase PA for children with ASD. LAY SUMMARY: The results of this study showed that neighborhood factors influence physical activity (PA) among children with ASD differently compared to their peers without ASD. Children with ASD who live in safe neighborhoods are more likely to engage in PA, while other neighborhood factors, such as playgrounds and support, are not associated with an increased likelihood of engaging in PA at this time. Efforts should be made to improve neighborhood accessibility and design in order to increase PA among children with ASD. abstract_id: PUBMED:23745172 Housing, the Neighborhood Environment, and Physical Activity among Older African Americans. This study examines the association of the neighborhood environment, as measured by housing factors, with physical activity among older African Americans. Context is provided on the effects of structural inequality as an inhibitor of health-enhancing neighborhood environments. The study population included African Americans participating in the UAB Study of Aging (n=433). Participants demonstrated the ability to walk during a baseline in-home assessment. The strength and independence of housing factors were assessed using neighborhood walking for exercise as the outcome variable. Sociodemographic data, co-morbid medical conditions, and rural/urban residence were included as independent control factors. Homeownership, occupancy, and length of residency maintained positive associations with neighborhood walking independent of control factors. Housing factors appear to be predictive of resident engagement in neighborhood walking. Housing factors, specifically high rates of homeownership, reflect functional and positive neighborhood environments conducive to physical activity. Future interventions seeking to promote health-enhancing behavior should focus on developing housing and built-environment assets within the neighborhood environment.
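The Health Is Power abstract above (PUBMED:25562756) tests social support as a mediator using the Baron and Kenny regression steps. The following is a minimal Python sketch of those steps on simulated data; all variable names, effect sizes, and the data themselves are illustrative assumptions, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
nd = rng.normal(size=n)                          # neighborhood disadvantage
ss = -0.3 * nd + rng.normal(size=n)              # social support (candidate mediator)
pa = -0.4 * nd + 0.5 * ss + rng.normal(size=n)   # physical activity
df = pd.DataFrame({"nd": nd, "ss": ss, "pa": pa})

# Baron & Kenny steps:
step1 = smf.ols("pa ~ nd", df).fit()       # predictor -> outcome
step2 = smf.ols("ss ~ nd", df).fit()       # predictor -> mediator
step3 = smf.ols("pa ~ nd + ss", df).fit()  # mediator -> outcome, adjusting for predictor

# Mediation is suggested if steps 1-3 are significant and the nd coefficient
# shrinks from step 1 to step 3 (true here by construction of the toy data).
print(step1.params["nd"], step2.params["nd"], step3.params["ss"], step3.params["nd"])
```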
abstract_id: PUBMED:31581456 Associations among Neighborhood Socioeconomic Deprivation, Physical Activity Facilities, and Physical Activity in Youth during the Transition from Childhood to Adolescence. Background: This study aims to examine the longitudinal association of neighborhood socioeconomic deprivation (SED) with physical activity in youth during the transition from elementary to middle school, and to determine if access to physical activity facilities moderates this relationship. Methods: Data were obtained from the Transitions and Activity Changes in Kids (TRACK) study, which was a multilevel, longitudinal study designed to identify the factors that influence changes in physical activity as youth transition from elementary to middle school. The analytic sample for the current study included 660 youth with complete data in grades 5 (baseline) and 7 (follow-up). A repeated-measures multilevel framework was employed to examine the relationship between SED and physical activity over time and the potential moderating role of elements of the built environment. Results: Decreases in physical activity varied by the degree of neighborhood SED, with youth residing in the most deprived neighborhoods experiencing the greatest declines in physical activity. Access to supportive physical activity facilities did not moderate this relationship. Conclusion: Future research studies are needed to better understand how neighborhood SED influences youth physical activity over time. abstract_id: PUBMED:34090126 Measurement of neighborhood-based physical activity bouts. This study examined how buffer type (shape), size, and the allocation of activity bouts inside buffers that delineate the neighborhood spatially produce different estimates of neighborhood-based physical activity. A sample of 375 adults wore a global positioning system (GPS) data logger and accelerometer over 2 weeks under free-living conditions. Analytically, the amount of neighborhood physical activity measured objectively varies substantially, not only due to buffer shape and size, but also due to how GPS-based activity bouts are identified with respect to containment within neighborhood buffers. To move the "neighborhood-effects" literature forward, it is critical to delineate the spatial extent of the neighborhood, given how different ways of measuring GPS-based activity containment will result in different levels of physical activity across different buffer types and sizes. abstract_id: PUBMED:37595898 Association between neighborhood availability of physical activity facilities and cognitive performance in older adults. The existing evidence on the contextual influence of the availability of local facilities for physical activity on the cognitive health of elderly residents is sparse. This study examined the association between neighborhood physical activity facilities and cognitive health in older individuals. A cohort study of community-dwelling older adults was performed using baseline data and follow-up data from the Taiwan Biobank. Cognitive health was measured in 32,396 individuals aged 60-70 years using the Mini-Mental State Examination (MMSE), with follow-up information on 8025 participants. The district was used as the proxy for the local neighborhood. To determine neighborhood physical activity facilities, school campuses, parks, activity centers, gyms, swimming pools, and stadiums were included.
Multilevel linear regression models were applied to examine the associations of neighborhood physical activity facilities with baseline MMSE and MMSE decline during follow-up, with adjustment for individual factors and neighborhood socioeconomic characteristics. Multilevel analyses revealed that there was a neighborhood-level effect on cognitive health among older adults. After adjusting for compositional and neighborhood socioeconomic characteristics, baseline MMSE was higher in individuals living in the middle- (beta = 0.12, p-value = 0.140) and high-density facility (beta = 0.22, p-value = 0.025) groups than in the low-density group (p-value for trend-test = 0.031). MMSE decline during follow-up was slower in the middle- (beta = 0.15, p-value = 0.114) and high-density facility (beta = 0.27, p-value = 0.052) groups than in the low-density group (p-value for trend-test = 0.032). Greater neighborhood availability of physical activity facilities was associated with better cognitive health among older residents. These findings have implications for designing communities and developing strategies to support the cognitive health of an aging population. abstract_id: PUBMED:33854906 A national examination of neighborhood socio-economic disparities in built environment correlates of youth physical activity. Adolescents in the U.S. do not meet current physical activity guidelines. Ecological models of physical activity posit that factors across multiple levels, such as the neighborhood built environment and neighborhood socioeconomic status (nSES), may support physical activity by promoting walkability. We examined associations between neighborhood built environment factors and adolescent moderate-to-vigorous physical activity (MVPA), and whether nSES moderated associations. Data were drawn from a national sample of adolescents (12-17 years, N = 1295) surveyed in 2014. MVPA (minutes/week) was estimated from self-report validated by accelerometer data. Adolescents' home addresses were geocoded and linked to Census data, from which an nSES Index and home neighborhood factors were derived using factor analysis (high density, older homes, short auto commutes). Multiple linear regression models examined associations between neighborhood factors and MVPA, and tested interactions between quintiles of nSES and each neighborhood factor, adjusting for socio-demographics. Living in higher-density neighborhoods (B(SE): 9.22 (2.78), p = 0.001) and neighborhoods with more older homes (4.42 (1.85), p = 0.02) was positively associated with adolescent MVPA. Living in neighborhoods with shorter commute times was negatively associated with MVPA (-5.11 (2.34), p = 0.03). Positive associations were found between MVPA and the high-density and older-homes neighborhood factors, though associations were not consistent across quintiles. In conclusion, living in neighborhoods with walkable attributes was associated with greater adolescent MVPA, though the effects were not distributed equally across nSES. Adolescents living in lower-SES neighborhoods may benefit more from physical activity interventions and environmental supports that provide opportunities to be active beyond neighborhood walkability.
Objectives: To prospectively evaluate the effects of parent supportive behaviors (PSB) for child physical activity (PA) and of neighborhood environment variables on changes in child PA over 3 years. Study Design: Secondary data analysis of the Now Everybody Together for Amazing and Healthful Kids-Works study, with 534 parent-child (age 2-4 years) dyads randomized to a community-based pediatric obesity prevention intervention for 3 years (92% retention). PSB and neighborhood environmental variables were examined in relation to changes in child moderate-to-vigorous PA (MVPA), light and sedentary activity, and screen time. Child and parent accelerometry data were collected at the 0-, 12-, 24-, and 36-month visits. Mixed multivariate models were used to examine independent and interactive effects of parent-level and neighborhood-level variables on changes in child PA outcomes. Results: PSB significantly interacted with visit on change in child MVPA (β = 0.12) and sedentary behaviors (β = -0.18). Over 3 years, a 1-unit increase in PSB was associated with an average increase of 4.3 minutes/day of MVPA and an average decrease of 6.5 minutes/day of sedentary time. Significant main effects were observed for PSB and 3-year change in child screen time (β = -0.05). The children of parents with higher PSB at baseline watched an average of 1.8 fewer minutes/day of screen time compared with the children of parents with lower baseline PSB. Conclusions: Parents who increase their supportive behaviors for their child's PA have children who are more physically active and less sedentary over time. Interventions to increase preschool-age children's PA may enhance their effectiveness by targeting parents' supportive behaviors for their child's PA. abstract_id: PUBMED:33659234 Effects of Children's Outdoor Physical Activity in the Urban Neighborhood Activity Space Environment. The rapid development of cities results in many public health and built-environment problems, which have major impacts on children's growth environment and development and aggravate urban tensions. Children are rarely a main concern when new urban areas are constructed or old districts are redeveloped. Children's activity spaces tend to follow standardized, unified "KFC-style" (kit, fence, and carpet) designs, which leaves urban neighborhood spaces insufficiently attractive for children's activities. Therefore, starting from the urban neighborhood space environment, this paper explores what kind of spatial environment children's physical activity requires and how that environment affects it. Taking six residential areas in the Changchun Economic Development Zone as the study object, and drawing on theories of children's ability development and play value, this paper uses the Woolley and Lowe evaluation tool to quantify the relationships between the urban neighborhood space environment and children's physical activity. The results confirm significant correlations between the spatial and general environmental characteristics of an urban neighborhood and the duration and intensity of children's physical activity.
The results show that: (1) differences in children's ages result in differences in the duration and intensity of children's physical activity in the urban neighborhood space environment; (2) the open space factor of the neighborhood space has the most significant influence on the duration of children's physical activity; (3) in terms of environmental characteristics, whether children can be provided with education and learning opportunities has a significant impact on the duration of children's physical activity; (4) there is a significant positive correlation between children's age and the duration and intensity of physical activity, exercise type, and imaginative activity. These results show that the urban neighborhood space environment can affect the duration of children's physical activity. In future urban residential area planning and design, urban children can meet their own demand for physical activity in the neighborhood space through a reasonable balance and combination of neighborhood spatial and environmental characteristics. Answer: The design of a neighborhood can significantly influence physical activity levels among its residents. Research has shown that various factors within the neighborhood environment can either promote or inhibit physical activity. For instance, the availability of physical activity resources in the neighborhood environment is positively associated with out-of-school moderate to vigorous physical activity (MVPA) among Chinese adolescents (PUBMED:34205544). Similarly, older African Americans are more likely to engage in neighborhood walking for exercise when housing factors such as homeownership, occupancy, and length of residency are favorable, suggesting that these factors reflect functional and positive neighborhood environments conducive to physical activity (PUBMED:23745172). However, neighborhood socioeconomic deprivation (SED) has been found to negatively impact physical activity levels, with youth residing in the most deprived neighborhoods experiencing the greatest declines in physical activity (PUBMED:31581456). This suggests that socioeconomic factors can create barriers to physical activity. The built environment also plays a role, as children with autism spectrum disorder (ASD) are more likely to engage in physical activity if they feel safe in their neighborhoods, although other neighborhood factors such as playgrounds and support were not associated with increased likelihood of engaging in physical activity at the time of the study (PUBMED:33231917). Furthermore, the availability of physical activity facilities in a neighborhood has been associated with better cognitive health among older residents, indicating that neighborhood design can have broader health implications beyond physical activity alone (PUBMED:37595898). Interestingly, parent supportive behaviors (PSB) for child physical activity and neighborhood environment variables have been shown to influence changes in child physical activity over time, with higher PSB associated with increased physical activity and decreased sedentary time in children (PUBMED:37333429). In summary, neighborhoods designed with accessible physical activity resources, safe environments, supportive social structures, and consideration of socioeconomic factors can support and encourage physical activity among residents of various age groups and conditions.
However, disparities exist, and not all neighborhoods are equally supportive of physical activity, highlighting the need for targeted interventions and urban planning that prioritize health-promoting designs (PUBMED:33854906, PUBMED:33659234).
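A note on the statistics above: the cognitive-health study (PUBMED:37595898) models MMSE with multilevel (mixed-effects) linear regression, so that residents of the same neighborhood share a random baseline. The following is a minimal, hedged sketch of that kind of model in Python with statsmodels; the file and column names (cohort.csv, mmse, facility_density, neighborhood_id) are hypothetical stand-ins, not the study's actual variables.

```python
# Minimal sketch of a multilevel linear model: MMSE regressed on a
# three-level facility-density group with a random intercept per
# neighborhood. All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # one row per participant (hypothetical data)

model = smf.mixedlm(
    # Fixed effects: facility density (low group as reference) plus
    # individual-level confounders such as age and education.
    "mmse ~ C(facility_density, Treatment('low')) + age + education",
    data=df,
    groups=df["neighborhood_id"],  # random intercept for each neighborhood
)
result = model.fit()
print(result.summary())
```

In such a fit, the reported coefficients (e.g., beta = 0.22 for the high-density group at baseline) would correspond to the fixed-effect estimates, while the neighborhood-level effect appears as the random-intercept variance.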
Instruction: Undergraduate rheumatology: can peer-assisted learning by medical students deliver equivalent training to that provided by specialist staff? Abstracts: abstract_id: PUBMED:29708367 Defining "peerness": Developing peer supports for parents with mental illnesses. Topic: This article addresses critical considerations in the development of peer supports for parents with mental illnesses, focusing on the question of what makes a peer a peer in the parent peer specialist domain. Purpose: The implementation and testing of parent peer supports requires specification of the critical components of the model, including the qualities, characteristics, and unique contributions of the parent peer specialist. Sources Used: Themes emerged in Parent Peer Specialist Project Advisory Group discussions, with members drawing from lived experience, practice expertise, and conversations with experts. In addition to literature review, strategic stakeholder interviews were conducted. Conclusions And Implications For Practice: Lived experience of mental illness and family life, training, and ongoing support for parent peer specialists, along with key ingredients conveyed by perceived peer-parent similarity, will likely enhance the benefits of peer supports to parents and promote job satisfaction and career advancement for parent peer specialists. abstract_id: PUBMED:10777974 Factors associated with delayed tobacco uptake among Vietnamese/Asian and Arabic youth in Sydney, NSW. Objective: To describe the smoking behaviour and possible reasons for delayed uptake of tobacco smoking among Arabic and Vietnamese/Asian speaking senior school students in Sydney. Method: A descriptive study involving four adult in-depth interviews and five student focus groups plus a quantitative survey of 2,573 school students attending Years 10 and 11 from 12 high schools with high Vietnamese and Arabic populations was conducted in Sydney in 1998. Self-reported smoking behaviour and peer, parent, school and cultural background information was collected. Results: Students who smoke were more likely to have more than $20 a week pocket money, be from an English-speaking background, have no rules at home about smoking, have family members who smoke, not feel close to their father, spend three or more evenings a week out with friends, and have negative perceptions of the school environment and of the level of teacher support. They were less likely to smoke if they perceived their peers to be unsupportive. Conclusions: These results confirm the delayed uptake of smoking among students from Vietnamese/Asian and Arabic-speaking backgrounds compared with those from an English-speaking background. A number of family and school factors were associated with smoking. Implications: Positive parental modelling, active parenting including awareness of or supervision of student leisure time, strict rules about not smoking and less pocket money are important strategies for preventing smoking among all adolescents. abstract_id: PUBMED:8500453 Decision-making orientation and AIDS-related knowledge, attitudes, and behaviors of Hispanic, African-American, and white adolescents. How adolescents' personal sense of directedness (i.e., peer, parent, or self-directed orientation) affects the decision-making processes of adolescent students regarding AIDS-related knowledge, attitudes, beliefs, behaviors, and skills (KABBS) is examined.
The sample consisted of 10th-grade students in 8 public high schools (N = 2,515) in Dade County (greater Miami), Florida. The findings showed that decision-making orientation and directedness was a significant predictor of AIDS-related KABBS of adolescents. Overall, the levels of AIDS-related KABBS that were associated with low risk were found significantly more often among self-directed students and least often among peer-directed students. The findings of this study suggest that future preadult health-risk research should incorporate the concept of differences of information processing across adolescents. abstract_id: PUBMED:23524992 Personal attitudes, perceived social norms, and health-risk behavior among female adolescents with chronic medical conditions. To examine whether perceived peer/parent norms or personal beliefs about adolescent substance use influence substance use among female adolescents with chronic medical conditions. Sixty-eight females reported on substance use, personal beliefs, and perceived peer/parent norms. Personal beliefs and perceived peer/parent norms were associated with adolescents' current and future substance use. Although perceived peer norms accounted for variance in current substance use, only personal beliefs accounted for variance in future alcohol use. Targeting perceived peer norms may be effective for intervention efforts among adolescents endorsing current substance use, whereas alcohol use prevention efforts should target personal beliefs. abstract_id: PUBMED:10446768 Trends in adolescent alcohol and other substance use: relationships to trends in peer, parent, and school influences. The aim of this study is to improve understanding of trends in adolescent substance use by comparing them with trends in peer substance use, school problem behavior, parental disapproval of alcohol and drugs, and exposure to school prevention information. These trends were determined using data from three large comparable representative surveys of 7-12th grade students in New York State conducted in 1983, 1990, and 1994. Analysis of variance was used to test the significance of the trends and to identify meaningful differences in trends by demographic subgroups (gender, grade level, and ethnicity). Adolescent alcohol and substance use declined in the 1980s, then increased from 1990 to 1994. Trends in friends' substance use and school problem behavior paralleled the alcohol and other substance use trends, consistent with their being part of the same adolescent problem behavior syndrome. Parental disapproval also followed a trend consistent with the substance use trends, i.e., parental disapproval increased in the 1980s but then decreased in the 1990s. However, the trend in school prevention influences did not parallel these substance use trends: student exposure to school primary prevention programs continued to increase from 1990 to 1994. Use of alcohol, cigarettes, marijuana, and other illicit drugs increased fastest from 1990 to 1994 among the younger students, despite increased school prevention exposure. Other factors must be sought as possible explanations of the increasing alcohol and substance use among adolescents. abstract_id: PUBMED:24853361 The use of an ecodevelopmental approach to examining substance use among rural and urban Latino/a youth: peer, parental, and school influences.
Using an ecodevelopmental framework, we examined how peer, parent, and student variables influence substance (tobacco, alcohol, and marijuana) use among rural and urban Latino/a adolescents (N = 2,500). Generally speaking, Latino/a adolescents in rural areas endorsed higher levels of substance use. Among the primary variables of study, there were a number of nuanced differences noted based on location, gender, and type of substance. Peer use was related to individual substance use in both rural and urban areas. However, peer use was a significantly stronger predictor of tobacco use among rural Latinas than among their urban counterparts. Parental monitoring was not predictive of urban marijuana use, yet was negatively associated with substance use for all subgroups and was especially pronounced for female alcohol use. Parental emotional involvement predicted higher alcohol use among urban boys. School achievement was negatively associated with substance use for all subgroups while, conversely, school involvement was associated with higher alcohol use for rural boys. Cultural and contextual implications for intervention and prevention are discussed. abstract_id: PUBMED:38460913 ROUTE-T1D: A behavioral intervention to promote optimal continuous glucose monitor use among racially minoritized youth with type 1 diabetes: Design and development. Background: Type 1 diabetes management is often challenging during adolescence, and many youth with type 1 diabetes struggle with sustained and optimal continuous glucose monitor (CGM) use. Due to racial oppression and racially discriminatory policies leading to inequitable access to quality healthcare and life necessities, racially minoritized youth are significantly less likely to use CGM. Methods: ROUTE-T1D: Research on Optimizing the Use of Technology with Education is a pilot behavioral intervention designed to promote optimal CGM use among racially minoritized youth with type 1 diabetes. Intervention strategies include problem-solving CGM challenges and promoting positive caregiver-youth communication related to CGM data. Results: This randomized waitlist intervention provides participants with access to three telemedicine sessions with a Certified Diabetes Care and Education Specialist. Caregiver participants are also connected with a peer-parent coach. Conclusion: Hypothesized findings and anticipated challenges are discussed. Future directions regarding sustaining and optimizing the use of diabetes technology among racially minoritized pediatric populations are reviewed. abstract_id: PUBMED:30668133 Peer-to-Peer Human Milk Sharing: Recipient Mothers' Motivations, Stress, and Postpartum Mental Health. Background: Some mothers who cannot breastfeed, partially or completely, choose to feed their infants human milk donated from a peer. Few studies have examined mothers' experiences with using donor milk; none has examined whether or not mothers' stress and mental health are associated with using donor milk from a peer. Methods: Researchers conducted semistructured individual interviews with mothers from the United States and Canada (N = 20) to answer the following questions: (a) what are recipient mothers' motivations for participation in peer-to-peer breast milk sharing and (b) what is the relationship between receiving donated milk and mothers' stress and mental health postpartum? Transcripts were coded using an inductive approach and principles of grounded theory were used to analyze data.
Results: Data were organized under two themes: (a) motivations for using milk from a peer and (b) milk-sharing and stress-related experiences. Motivations included health benefits, medical need, and preference for human milk over formula. Factors inducing stress were as follows: logistical stressors of securing donor milk and fear of running out of milk. Factors reducing stress were as follows: donor milk provided relief and comfort and its use reduced mothers' self-reported symptoms of postpartum depression and anxiety. Conclusions: Mothers participated in peer-to-peer breast milk sharing primarily because of health benefits for children. However, participation also had important psychological benefits for some mothers. Additional research and open discourse are needed to support mothers who choose to use milk from a peer and to promote the safety of this practice. abstract_id: PUBMED:11512489 The relationship of adolescent perceptions of peer norms and parent involvement to cigarette and alcohol use. This investigation assessed the relative influence of peer norms and parental involvement on adolescent cigarette and alcohol use. An anonymous questionnaire was administered to 2,017 seventh- to 12th-grade students in two Ohio public school districts. Cigarette and alcohol use rates in the sample were comparable to those found in national probability surveys. Results indicated that the relative balance of peer-parent influences did not differ across grade level. At all grade levels, perceived peer norms had substantially greater correlations with cigarette and alcohol use than did measures of perceived parental involvement. The findings are interpreted from an efficiency perspective. Optimal use of prevention resources suggests that programming for seventh- to 12th-graders should focus on shaping the perceptions of peer smoking and drinking practices rather than on parent interventions. Social norms marketing or other forms of normative education should be tested in this population. abstract_id: PUBMED:4088288 The neighborhood walk: sources of support in middle childhood. This report documents children's perception of their involvement with self-development, family members, and members of the community and begins to test the relation between that network and aspects of social-emotional functioning during middle childhood. Support was conceptualized to include experiences of both relatedness to and autonomy from others. Three major types of reported support in this study using the Neighborhood Walk were considered: others as resources (e.g., persons in the peer, parent, and grandparent generation; pets), intrapersonal sources of support (e.g., hobbies; fantasies--structured and unstructured; skill development), and environmental sources of support (e.g., places to get off to by oneself; formally sponsored organizations with structured and unstructured activities; informal, unsponsored meeting places). One hundred sixty-eight children (72 7-year-olds and 96 10-year-olds), residing in nonmetropolitan and rural northern California and representing all but the lowest Hollingshead socioeconomic status, participated in this study. To assess their sources of support at home and in the neighborhood/community, each of these 168 children was taken on a Neighborhood Walk, and then several measures of social-emotional functioning were administered.
Cross-sectional data form the empirical basis for a developmental perspective on sources of support, the structure of social-emotional functioning, and the relationship between sources of support and social-emotional functioning during middle childhood. The child's perception of support was found relevant to predicting the social-emotional functioning of children growing up in relatively secure and low-stress conditions in modern American society. Furthermore, a broad-based as opposed to a limited network and informal as opposed to formal sources of support were more predictive of social-emotional functioning. The empirical and theoretical relevance of considering middle childhood as a period of active development involving expansion and integration of social and affective phenomena was underscored by the results. First, it appears that the 7-year-olds have not yet developed the underlying response or habit clusters that characterize the 10-year-olds. Second, with respect to reported sources of support, 10-year-olds appear to have more elaborated sources of support than do 7-year-olds. Third, the findings confirm that developing a bridge to extended family and neighborhood resources is related to expressions of social-emotional functioning during middle childhood and that 10-year-olds appear to make effective use of more social support factors than do 7-year-olds. Answer: The provided abstracts do not directly address the question of whether peer-assisted learning by medical students can deliver equivalent training in undergraduate rheumatology to that provided by specialist staff. The abstracts focus on various aspects of peer influence, parental involvement, and peer support in different contexts such as mental health, substance use, smoking behavior, and diabetes management, but they do not discuss peer-assisted learning in medical education or its effectiveness compared to specialist-led training. To answer the question, research specifically examining the outcomes of peer-assisted learning programs in medical education, particularly in the field of rheumatology, would be needed. Such research would ideally compare the knowledge, skills, and competencies acquired by medical students taught by their peers to those taught by specialist staff. It would also consider the satisfaction and confidence of students in both groups, as well as any differences in their clinical performance and long-term retention of knowledge. Since the provided abstracts do not contain information on peer-assisted learning in the context of undergraduate rheumatology education, they cannot be used to support any conclusions about the equivalence of training provided by medical students versus specialist staff in this specific area.
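To make the answer's final point concrete, a study of this question would compare assessment scores between peer-taught and specialist-taught groups, ideally against an equivalence (non-inferiority) criterion rather than a bare significance test. Below is a small illustrative sketch in Python; the scores and the 5-point margin are invented for illustration only.

```python
# Illustrative equivalence-style comparison of exam scores between a
# peer-taught and a specialist-taught group. All numbers are invented.
import numpy as np
from scipy import stats

peer_taught = np.array([72.0, 68.0, 75.0, 80.0, 66.0, 74.0, 71.0, 77.0])
specialist_taught = np.array([74.0, 70.0, 73.0, 79.0, 69.0, 76.0, 72.0, 78.0])

t_stat, p_value = stats.ttest_ind(peer_taught, specialist_taught)
diff = peer_taught.mean() - specialist_taught.mean()
# Rough 95% CI for the mean difference (unpooled standard error).
se = np.sqrt(peer_taught.var(ddof=1) / peer_taught.size
             + specialist_taught.var(ddof=1) / specialist_taught.size)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.2f}, 95% CI ({lo:.2f}, {hi:.2f}), p = {p_value:.3f}")
# Equivalence would require the whole CI to lie inside a pre-specified
# margin (say +/- 5 points); a non-significant p-value alone is not enough.
```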
Instruction: Post-mortem analysis of bone marrow osteoclasts using tartrate-resistant acid phosphatase staining: does histochemistry work and correlate with time since death? Abstracts: abstract_id: PUBMED:22844067 Post-mortem analysis of bone marrow osteoclasts using tartrate-resistant acid phosphatase staining: does histochemistry work and correlate with time since death? Aims: In bone marrow (BM) biopsies, tartrate-resistant acid phosphatase (TRAP) staining represents the gold standard for the characterisation of osteoclasts. TRAP is one of the few enzymes that is histochemically detectable on formalin-fixed paraffin-embedded tissue. This study investigated whether TRAP is also able to visualise BM osteoclasts in autopsy tissue. It was hypothesised that, due to a progressive loss of enzymatic activity in osteoclasts post-mortem, TRAP staining could allow the time of death of a patient to be determined. Methods: TRAP-stained BM slides of 96 cases including 51 pathology and 23 forensic autopsies and 22 biopsies were histologically evaluated and their staining intensity (SI) semi-quantitatively graded. In the autopsy cases, the results were correlated with the post-mortem interval (PMI, time span in days between death and autopsy). Results: TRAP staining intensities (TRAP-SIs) did not differ between men and women and showed a steady decrease with age. TRAP-SIs were significantly stronger in biopsies than in autopsy cases. Among the autopsies, TRAP-SIs were highly variable and not dependent on PMI, except for three forensic cases with PMI ≥7 days which showed a complete loss of TRAP stainability. On the whole, the TRAP-SIs of pathology and forensic cases did not differ significantly. Conclusions: This study clearly shows that BM osteoclasts stay TRAP-positive for 7 days post-mortem, although with markedly reduced TRAP-SIs compared with biopsies. Since TRAP-SIs were not correlated with the duration of PMI, TRAP staining of BM osteoclasts cannot serve as a tool to determine the time of death of a patient. abstract_id: PUBMED:38427117 The novel cytotoxic polybisphosphonate osteodex decreases bone resorption by enhancing cell death of mature osteoclasts without affecting osteoclastogenesis of RANKL-stimulated mouse bone marrow macrophages. It has previously been demonstrated that the polybisphosphonate osteodex (ODX) inhibits bone resorption in organ-cultured mouse calvarial bone. In this study, we further investigate the effects of ODX on osteoclast differentiation, formation, and function in several different bone organ and cell cultures. Zoledronic acid (ZOL) was used for comparison. In retinoid-stimulated mouse calvarial organ cultures, ODX and ZOL significantly reduced the numbers of periosteal osteoclasts without affecting Tnfsf11 or Tnfrsf11b mRNA expression. ODX and ZOL also drastically reduced the numbers of osteoclasts in cell cultures isolated from the calvarial bone and in vitamin D3-stimulated mouse crude bone marrow cell cultures. These data suggest that ODX can inhibit osteoclast formation by inhibiting the differentiation of osteoclast progenitor cells or by directly targeting mature osteoclasts. We therefore assessed if osteoclast formation in purified bone marrow macrophage cultures stimulated by RANKL was inhibited by ODX and ZOL and found that the initial formation of mature osteoclasts was not affected, but that the bisphosphonates enhanced cell death of mature osteoclasts.
In agreement with these findings, ODX and ZOL did not affect the mRNA expression of the osteoclastic genes Acp5 and Ctsk and the osteoclastogenic transcription factor Nfatc1. When bone marrow macrophages were incubated on bone slices, ODX and ZOL inhibited RANKL-stimulated bone resorption. In conclusion, ODX does not inhibit osteoclast formation but inhibits osteoclastic bone resorption by decreasing osteoclast numbers through enhanced cell death of mature osteoclasts. abstract_id: PUBMED:25974241 Differential β3 and β1 Integrin Expression in Bone Marrow and Cortical Bone of Estrogen Deficient Rats. Integrin-based (β3) attachments to the extracellular matrix (ECM) on osteocyte cell processes have recently been proposed to play an important role in facilitating osteocyte mechanosensation. However, it is not yet known whether integrin expression is altered in the mechanoregulatory osteocytes during osteoporosis. The objective of this study was to test the hypothesis that the expression of integrin-based mechanosensory complexes (β1 and β3 integrins) is altered as a direct response to estrogen deficiency, in an estrogen deficient animal model of osteoporosis. Four weeks post-operatively, immunohistochemistry was used to detect β1 and β3 integrin subunits in bone tissue and marrow of ovariectomized (OVX; N = 4) and SHAM (N = 4) operated animals. A tartrate resistant acid phosphatase (TRAP) control stain was performed to quantify the presence of osteoclasts in the bone marrow and on bone surfaces. Image analysis was performed to quantify expression patterns in different biological compartments, that is, bone marrow, endosteum, and cortical bone. Our results showed that β1 integrins were ubiquitously expressed throughout the bone and marrow for both OVX and SHAM groups. β3 integrin subunit expression was lower in bone cells from osteoporotic animals compared to controls, whereas β3 expression in marrow cells did not differ significantly between groups. At the endosteum, no difference was observed in β3 integrin subunit expression. As expected, the number of osteoclasts was higher in the OVX group, validating an imbalance in bone remodeling. We propose that a reduction in β3 integrin expression in osteocytes might impair mechanosensation by bone cells during estrogen deficiency. abstract_id: PUBMED:17523074 Beta-galactosidase staining on bone marrow. The osteoclast pitfall. The enzyme beta-galactosidase, encoded by the bacterial gene lac-Z, is commonly used as a histochemical reporter to track transplanted cells in vivo or to analyze temporospatial gene expression patterns by coupling expression of specific target genes to beta-galactosidase activity. Previously, endogenous beta-galactosidase activity has been recognized as a confounding factor in the study of different soft tissues, but there is no description of the typical background on bone marrow sections when using the chromogenic substrate 5-Bromo-4-chloro-3-indolyl beta-D-Galactoside (X-Gal). In this report, we show that osteoclasts in bone marrow sections specifically and robustly stain blue with X-Gal. This leads to a typical background when bone marrow is examined that is present from the first day post partum throughout the adult life of experimental mice and can be confused with transgenic, bacterial beta-galactosidase-expressing hematopoietic or stromal cells. Experimental variations in the X-Gal staining procedure, such as pH and time of exposure to substrate, were not sufficient to avoid this background.
Therefore, these data demonstrate the need for strenuous controls when evaluating beta-galactosidase positive bone marrow cells. Verifiable bacterial beta-galactosidase positive bone marrow cells should be further identified using immunohistological or other approaches. Specifically, beta-galactosidase positive hematopoietic or stromal cells should be proven not to be osteoclasts by co-staining or staining adjacent sections for specific markers of hematopoietic and stromal cells. abstract_id: PUBMED:23263655 The role of bone marrow-derived cells during the bone healing process in the GFP mouse bone marrow transplantation model. Bone healing is a complex and multistep process in which the origin of the cells participating in bone repair is still unknown. The involvement of bone marrow-derived cells in tissue repair has been the subject of recent studies. In the present study, bone marrow-derived cells in bone healing were traced using the GFP bone marrow transplantation model. Bone marrow cells from C57BL/6-Tg (CAG-EGFP) were transplanted into C57BL/6J wild-type mice. After transplantation, bone injury was created using a 1.0-mm drill. Bone healing was histologically assessed at 3, 7, 14, and 28 postoperative days. Immunohistochemistry for GFP; double-fluorescent immunohistochemistry for GFP-F4/80, GFP-CD34, and GFP-osteocalcin; and double-staining for GFP and tartrate-resistant acid phosphatase were performed. Bone marrow transplantation successfully replaced the hematopoietic cells with GFP-positive donor cells. Immunohistochemical analyses revealed that osteoblasts or osteocytes in the repair stage were GFP-negative, whereas osteoclasts in the repair and remodeling stages and hematopoietic cells were GFP-positive. The results indicated that bone marrow-derived cells might not differentiate into osteoblasts. The role of bone marrow-derived cells might be limited to adjustment of the microenvironment by differentiating into inflammatory cells, osteoclasts, or endothelial cells in immature blood vessels. abstract_id: PUBMED:9839895 In vitro generation of equine osteoclasts from bone marrow cells using a novel culture system. We report on preliminary results of a novel in vitro culture system designed to generate equine osteoclasts in large numbers. Osteoclast generation, as determined by the expression of tartrate resistant acid phosphatase (TRAP) and ability to resorb bone, was enhanced in equine bone marrow cultures supplemented with fibroblastic cell (L929) conditioned medium (L929-CM). Bone marrow was collected from a total of 12 horses and ponies, and TRAP-positive cells with bone resorbing ability were generated in significant numbers in the last seven. TRAP-positive mononuclear cells appeared after three days in culture and steadily increased in number and size up to seven days, when multinuclear TRAP-positive cells started to appear. L929-CM caused a significant increase in the number of TRAP-positive mononuclear cells generated after four and seven days, after which time the number of TRAP-positive cells decreased. The development of this in vitro system provides a reproducible means of generating large numbers of equine osteoclasts necessary for the study of bone remodelling and endochondral ossification in the horse. abstract_id: PUBMED:33921836 Antibodies to Citrullinated Proteins (ACPA) Associate with Markers of Osteoclast Activation and Bone Destruction in the Bone Marrow of Patients with Rheumatoid Arthritis.
Normalizing bone metabolism is a challenge in rheumatoid arthritis (RA). Studies in mice suggest that anti-citrullinated protein antibodies (ACPAs) can trigger osteoclast activation and bone resorption in the bone marrow. However, data on the presence and role of ACPAs in human bone marrow are scarce. We investigated whether ACPAs can contribute to osteoclast activation and bone erosion in RA bone marrow. Anti-cyclic citrullinated peptide antibodies (anti-CCP Abs), the osteoclast activation indicators tartrate-resistant acid phosphatase 5b (TRAP5b) and cathepsin K, and the bone degradation marker C-terminal telopeptide of type I collagen (CTX-I) were measured in the bone marrow and peripheral blood of RA patients using ELISAs. We found that the presence of ACPAs in RA bone marrow was associated with increased amounts of TRAP5b, cathepsin K and CTX-I in this location. Levels of IL-8, the key mediator of anti-citrullinated protein antibody (ACPA)-induced bone resorption, were also elevated in bone marrow containing anti-CCP Abs and positively correlated with TRAP5b and cathepsin K concentrations. Higher levels of TRAP5b, cathepsin K, CTX-I and IL-8 in bone marrow compared to peripheral blood indicate local generation of these molecules. Our results complement data from animal studies and highlight the relevance of ACPAs and bone marrow in bone resorption in RA. abstract_id: PUBMED:26984386 Palatal fibroblasts reduce osteoclastogenesis in murine bone marrow cultures. Background: Preclinical studies support the assumption that connective tissue grafts preserve the alveolar bone from resorption; the underlying cellular mechanisms, however, remain unknown. The cellular mechanisms may be attributed to the paracrine activity of the palatal fibroblasts. It was thus reasonable to suggest that palatal connective tissue grafts reduce the formation of osteoclasts. Methods: To test this hypothesis, human palatal fibroblasts were examined for their capacity to modulate the formation of osteoclasts in murine bone marrow cultures exposed to RANKL, M-CSF and TGF-β1. Osteoclastogenesis was determined by tartrate-resistant acid phosphatase (TRAP) staining and gene expression analysis. The formation of antigen presenting cells was based on the expression of CD14 and costimulatory molecules of antigen presenting cells. The paracrine interaction of fibroblasts and the bone marrow was modeled in vitro with inserts of cell-occlusive membranes. Results: In cocultures without cell-to-cell contact, palatal fibroblasts caused a decrease in the expression of the osteoclast marker genes in bone marrow cells: calcitonin receptors, cathepsin K, TRAP, and osteoclast-associated receptor. Also, the number of TRAP-positive multinucleated cells was decreased in the presence of fibroblasts. Notably, palatal fibroblasts increased the expression of CD14 and the co-stimulatory proteins CD40, CD80, and CD86 in bone marrow cells. Bone marrow cells had no considerable impact on fibroblast viability and proliferation marker genes. With regard to cell distribution, osteoclasts were most prominent in the center of the membranes, while fibroblasts accumulated immediately adjacent to the border of the insert forming a ring-like structure on the surface of the culture plate. Conclusion: The data suggest that palatal fibroblasts provide a paracrine environment that reduces osteoclastogenesis and increases markers of antigen presenting cells.
Moreover, the paracrine model revealed a joint activity between palatal fibroblasts and bone marrow cells visualized by the characteristic cell distribution in the two separated compartments. abstract_id: PUBMED:26254598 Mouse anti-RANKL antibody delays oral wound healing and increases TRAP-positive mononuclear cells in bone marrow. Objectives: Denosumab, a human monoclonal antibody (mAb) that neutralizes receptor activator for nuclear factor κB ligand (RANKL), is associated with osteonecrosis of the jaw. However, the effect of denosumab on oral wounds is unclear. The aim was to determine the effect of anti-RANKL mAb on oral wounds and bone marrow. Materials And Methods: The direct effect of the mAb on fibroblasts, macrophages, and osteoclasts was assessed in vitro. In vivo, mouse anti-RANKL mAb was administered to mice for 9 weeks prior to palatal bone denudation surgery. Mice were euthanized 3 weeks post-surgery, and wound healing was histomorphometrically analyzed. Long bones were assessed using micro-computed tomography, quantitative real-time polymerase chain reaction, and flow cytometry. Results: The mAb had no effect on macrophages and fibroblasts but significantly suppressed osteoclast proliferation in vitro. The mAb treatment significantly increased bone mass by suppressing osteoclasts in vivo. The expression of pro-osteoclastic genes was promoted in the bone marrow of the mAb-administered animals. Consistently, the mAb significantly induced the development of tartrate-resistant acid phosphatase (TRAP)-positive mononuclear cells (MNCs) but not osteoclasts in bone marrow. The mAb treatment had no effect on gross healing of the palatal wounds. However, significant inflammation was retained in the connective tissue facing the once denuded bone surface. Conclusions: Repair of the damaged palate was delayed, and significant inflammation was sustained in the connective tissue by anti-RANKL mAb treatment. Clinical Relevance: Denosumab impairs osteoclastic bone repair. Care should be exercised to minimize osseous trauma when invasive procedures are performed on patients taking denosumab. abstract_id: PUBMED:31418372 Expression of IL-17 in Bone Marrow of Patients with Multiple Myeloma and Its Effect in Pathogenesis. Objective: To investigate the expression and pathogenesis of IL-17 in bone marrow blood of multiple myeloma (MM) patients. Methods: Expression levels of IL-6, TNF-α and IL-17 in bone marrow serum of 20 MM patients and 20 control subjects were detected by ELISA, and correlation analysis was performed to analyze the correlation of IL-17 with IL-6 and TNF-α. The effect of IL-17 on the proliferation of MM cells treated with different concentrations of IL-17 was detected by cell proliferation and toxicity tests. The morphological changes of RAW264.7 cells treated with IL-17 were observed by tartrate resistant acid phosphatase (TRAP) staining. Results: The levels of IL-17, IL-6 and TNF-α in the bone marrow of MM patients were all higher than those of the normal control group (P<0.05). The IL-17 level positively correlated with IL-6 and TNF-α levels (r=0.6045, P<0.01 and r=0.627, P<0.01). Cell proliferation and toxicity tests confirmed that IL-17 can promote the proliferation of multiple myeloma cells. TRAP staining revealed that IL-17 could induce differentiation of RAW264.7 cells into multinuclear giant cells.
Conclusion: IL-17 may be involved in the pathogenesis of MM: it promotes the proliferation of tumor cells and induces the activation of osteoclasts, leading to increased bone destruction. Answer: Post-mortem analysis of bone marrow osteoclasts using tartrate-resistant acid phosphatase (TRAP) staining was investigated to determine if TRAP could visualize bone marrow osteoclasts in autopsy tissue and if it could be correlated with the time since death. The study found that TRAP staining intensities (TRAP-SIs) did not differ between men and women and showed a steady decrease with age. TRAP-SIs were significantly stronger in biopsies than in autopsy cases. However, among the autopsy cases, TRAP-SIs were highly variable and not dependent on the post-mortem interval (PMI), except for three forensic cases with PMI ≥7 days which showed a complete loss of TRAP stainability. Overall, the TRAP-SIs of pathology and forensic cases did not differ significantly. The study concluded that bone marrow osteoclasts remain TRAP-positive for up to 7 days post-mortem, albeit with reduced TRAP-SIs compared to biopsies. Since TRAP-SIs were not correlated with the duration of PMI, TRAP staining of bone marrow osteoclasts cannot serve as a tool to determine the time of death of a patient (PUBMED:22844067).
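As a methodological aside, the "no correlation with PMI" finding summarized above is the kind of result a rank-based correlation test would support, since the staining intensities were graded semi-quantitatively. A hedged sketch follows; the data points are invented and only the method is illustrated.

```python
# Sketch of a rank correlation between semi-quantitative TRAP staining
# intensity grades and post-mortem interval (PMI). Data are invented.
from scipy import stats

pmi_days = [1, 1, 2, 2, 3, 4, 5, 6, 7]        # days between death and autopsy
trap_si_grade = [3, 2, 3, 1, 2, 3, 1, 2, 0]   # 0 = negative .. 3 = strong

rho, p_value = stats.spearmanr(pmi_days, trap_si_grade)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A weak, non-significant rho over the 0-7 day range would match the
# study's conclusion that staining intensity cannot date the time of death.
```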
Instruction: Do children born after assisted conception have less risk of developing infantile autism? Abstracts: abstract_id: PUBMED:17456530 Do children born after assisted conception have less risk of developing infantile autism? Background: A Danish population based matched case-control study of perinatal risk factors in children with infantile autism has provided some interesting and surprising observations regarding infantile autism and children born after assisted conception. Methods And Results: The cases (461) consisted of all children born between 1990 and 1999 and diagnosed with infantile autism in the Danish Psychiatric Central Register before February 2001. Matched controls were identified in the Danish Civil Registration System. The main exposure measures included obstetric risk factors for infantile autism. We found a 59% decreased risk for developing infantile autism among children conceived after assisted conception (odds ratio [OR] 0.41, 95% CI [0.19-0.89]) and a 63% decreased risk after adjusting for known risk factors for assisted conception and infantile autism (OR 0.37, 95% CI [0.14-0.98]). Conclusion: We found that children born after assisted conception had a lower risk of developing infantile autism than their matched controls. Our observations could possibly be explained by the mother's health status before and during early pregnancy. Our findings require further investigation in larger studies. abstract_id: PUBMED:20584728 Risk of autism spectrum disorders in children born after assisted conception: a population-based follow-up study. Objectives: To assess the risk of autism spectrum disorders (ASD) in children born after assisted conception compared with children born after natural conception. Design: Population-based follow-up study. Setting: All children born alive in Denmark 1995-2003. Participants: 588,967 children born in Denmark from January 1995 to December 2003. Assisted conception was defined as in vitro fertilisation (IVF) with or without intracytoplasmic sperm injection and ovulation induction (OI) with or without subsequent insemination. Children exposed to IVF or OI were identified in the IVF Register and in the Danish Drug Prescription Register. Main Outcome Measures: A diagnosis of ASD in the Danish Psychiatric Central Register. Results: 33,139 (5.6%) of all children born in Denmark in 1995-2003 resulted from assisted conception, 225 of whom (0.68%) had a diagnosis of ASD. Of the 555,828 children born in this period after natural conception, 3394 (0.61%) had a diagnosis of ASD. The follow-up time was 4-13 years (median 9 years). In crude analyses, children born after assisted conception had an increased risk of a diagnosis of ASD: crude hazard rate ratio (HRR) 1.25 (95% CI 1.09 to 1.43). In analyses adjusting for maternal age, educational level, parity, smoking, birth weight and multiplicity, the risk disappeared: adjusted HRR 1.13 (95% CI 0.97 to 1.31). However, subgroup analyses suggesting possible associations in women who received follicle stimulating hormone indicate the need for further study. Discussion: This population-based follow-up study found no increased risk of ASD in children born after assisted conception. abstract_id: PUBMED:19124707 Cerebral palsy, autism spectrum disorders, and developmental delay in children born after assisted conception: a systematic review and meta-analysis.
Objective: To assess the existing evidence of associations between assisted conception and cerebral palsy (CP), autism spectrum disorders (ASD), and developmental delay. Data Sources: Forty-one studies identified in a systematic PubMed and Excerpta Medica Database (EMBASE) search for articles published from January 1, 1996, to April 1, 2008. Study Selection: Studies written in English comparing children born after assisted conception with children born after natural conception, assessing CP, ASD, and developmental delay, based on original data with a follow-up of 1 year or more. Main Exposures: In vitro fertilization (IVF) with or without intracytoplasmic sperm injection or ovulation induction with or without subsequent intrauterine insemination. Main Outcome Measures: Cerebral palsy, ASD, and developmental delay. Results: Nine CP studies showed that children born after IVF had an increased risk of CP associated with preterm delivery. In our meta-analysis including 19 462 children exposed to IVF, we estimated a crude odds ratio of 2.18 (95% confidence interval, 1.71-2.77). Eight ASD studies and 30 studies on developmental delay showed inconsistent results. No studies assessed the risk of CP, ASD, or developmental delay in children born after ovulation induction exclusively. Conclusions: Methodological problems were revealed in the identified studies, and the gaps in our knowledge about the long-term outcomes of children born after assisted conception are considerable, including a lack of information on the long-term consequences of ovulation induction. Possible associations with ASD and developmental delay need assessment in larger studies. Studies on assisted conception and CP from countries outside of Scandinavia are needed, including detailed information on time to pregnancy, underlying cause of infertility, and type of IVF treatment. abstract_id: PUBMED:24129612 Are children born after assisted reproductive technology at increased risk of autism spectrum disorders? A systematic review. Study Question: Are children born after assisted reproductive technology (ART) at increased risk of autism spectrum disorders (ASD)? Summary Answer: There is no evidence that ART significantly increases the risk of ASD in the offspring. What Is Known Already: A few systematic reviews have explored the correlation between assisted conception and ASD with inconclusive results, partly due to the heterogeneity of diagnostic criteria and methodology in the different studies. Study Design, Size, Duration: Systematic review of 7 observational studies (2 cohort and 5 case-control) encompassing 9216 subjects diagnosed with ASD published since 2000. Materials, Setting, Methods: Literature searches were conducted to retrieve observational studies on the risk of ASD in ART population. Databases searched included PubMed, EMBASE and PsycINFO. In order to obtain more consistent results, we only included the studies in which (i) subjects with either infantile autism or ASD could be identified according to international classification systems and (ii) the diagnosis was obtained from hospital records. Seven studies matched the inclusion criteria. Main Results And The Role Of Chance: Four out of seven studies, including the two with the best quality scores, did not show an association between ART and ASD. The two papers supporting an increased risk of autism following ART had the lowest quality scores, due to major methodological limitations. Only one paper showed a protective role of ART.
Limitations, Reasons For Caution: In spite of the strict inclusion criteria applied as to the diagnosis of ASD, the papers selected are heterogeneous in many aspects including study design, definitions of ART, data source and analysed confounders. Wider Implications Of The Findings: At present, there is no evidence that ART is significantly associated with ASD and hence that current health policies should be modified. The divergent results of some of the studies suggest that further prospective, large and high-quality studies are still needed. Study Funding/competing Interest(s): This work was supported, in part, by the Italian Ministry of Health and by Tuscany Region. The authors have no competing interests to declare. Trial Registration Number: N/A. abstract_id: PUBMED:24793634 The risk of neurodisability and other long-term outcomes for infants born following ART. Children born after assisted reproductive technologies (ART) have increased morbidity. The risk of developing cerebral palsy is nearly doubled and the risk of developing epilepsy is also higher. Behavioural problems including attention deficit/hyperactivity disorder may be more common in children born following ART than among naturally conceived children, but the finding is uncertain. Data on autism are difficult to interpret. There may exist a small increase in the incidence of childhood cancer and there is greater evidence of an elevated risk of asthma. To some extent, these risks are mediated by neonatal complications including prematurity and low birth weight, but some effects such as cerebral palsy are likely to be linked to the increased rate of multiple births after ART. Many of the neonatal complications after ART are most likely linked to parental subfertility and are less an effect of the ART technology. The possibility exists that imprinting errors, associated with subfertility and/or ART, may result in long-term morbidity. abstract_id: PUBMED:21543984 Month of conception and risk of autism. Background: Studies of season of birth or season of conception can provide clues about etiology. We investigated whether certain months or seasons of conception are associated with increased risk of autism spectrum disorders, for which etiology is particularly obscure. Methods: The study population comprises 6,604,975 children born from 1990 to 2002 in California. Autism cases (n = 19,238) were identified from 1990 through 2008 in databases of the California Department of Developmental Services, which coordinates services for people with developmental disorders. The outcome in this analysis was autism diagnosed before the child's sixth birth date. The main independent variables were month of conception and season of conception (winter, spring, summer, and fall). Multivariate logistic regression models were used to estimate odds ratios (ORs) with their 95% confidence intervals (CIs) for autism by month of conception. Results: Children conceived in December (OR = 1.09 [95% CI = 1.02-1.17]), January (1.08 [1.00-1.17]), February (1.12 [1.04-1.20]), or March (1.16 [1.08-1.24]) had higher risk of developing autism compared with those conceived in July. Conception in the winter season (December, January, and February) was associated with a 6% (OR = 1.06, 95% CI = 1.02-1.10) increased risk compared with summer. Conclusions: Higher risks for autism among those conceived in winter months suggest the presence of environmental causes of autism that vary by season.
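For readers unfamiliar with how figures such as "OR = 1.06 (95% CI 1.02-1.10)" arise, the sketch below shows the standard cross-product odds ratio with a Wald confidence interval for a winter-versus-summer 2x2 table. The cell counts are invented (chosen only so the result lands near the reported 1.06); the published estimates came from multivariate logistic regression, not this raw calculation.

```python
# Worked example: odds ratio and Wald 95% CI from a 2x2 table of
# autism cases and non-cases by season of conception. Counts are invented.
import math

cases_winter, noncases_winter = 5200, 1_600_000
cases_summer, noncases_summer = 4600, 1_500_000

# Cross-product (a*d)/(b*c) form of the odds ratio.
odds_ratio = (cases_winter * noncases_summer) / (cases_summer * noncases_winter)
# Standard error of log(OR): sqrt of summed reciprocal cell counts.
se_log_or = math.sqrt(1 / cases_winter + 1 / noncases_winter
                      + 1 / cases_summer + 1 / noncases_summer)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # ~1.06 (1.02, 1.10)
```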
abstract_id: PUBMED:36291447 Neurodevelopmental Outcomes of Pregnancies Resulting from Assisted Reproduction: A Review of the Literature. The term infertility is defined as the lack of conception within 1 year of unprotected intercourse. It affects more than 80 million individuals worldwide. It is estimated that 10-15% of couples of reproductive age are challenged by reproductive issues. Assisted reproduction techniques (ART) are responsible for more than 4% of live births. Our aim is to review the research on neurodevelopmental outcomes of newborns born after the implementation of assisted reproduction methods compared to those conceived naturally. We conducted a comprehensive search of the PubMed, Crossref and Google Scholar electronic databases for related articles up to June 2022 using the PRISMA guidelines. Our research revealed a large number of long-term follow-up studies, covering ages 2 to 18 years, with comparable developmental outcomes. Many studies compared the effects of different infertility treatments against natural conception. The review of the literature revealed that ART is safe, as the majority of studies showed no effect on the neurodevelopmental outcomes of the offspring. In most cases when such an effect was observed, it could be attributed to confounding factors such as subfertility, multiple pregnancies and gestational age at delivery. Finally, the increase in the prevalence of neurodevelopmental disorders after ART, as described in studies with statistically significant results, is predominantly marginal, and given the low incidence of neurodevelopmental disorders in the general population, its clinical significance is debatable. abstract_id: PUBMED:24842673 No increase in autism-associated genetic events in children conceived by assisted reproduction. Objective: To understand the rate of genetic events in patients with autism spectrum disorder (ASD) who were exposed to assisted reproduction. Design: Case-control study using genetics data. Setting: Twelve collaborating data collection sites across North America as part of the Simons Simplex Collection. Patient(s): 2,760 children with ASD, for whom 1,994 had published copy number variation data and 424 had published gene mutation status available. Intervention(s): None. Main Outcome Measure(s): Rates of autism-associated genetic events in children with ASD conceived with assisted reproduction versus those conceived naturally. Result(s): No statistically significant differences in copy number variations or autism-associated gene-disrupting events were found when comparing ASD patients exposed to assisted reproduction with those not exposed to assisted reproduction. Conclusion(s): This is the first large genetic association study to concurrently examine the genotype of individuals with ASD in relation to their exposure to ART versus natural conception, and it adds reassuring evidence to the argument that ART does not increase the risk of ASD. abstract_id: PUBMED:33135961 How Are They Doing? Neurodevelopmental Outcomes at School Age of Children Born Following Assisted Reproductive Treatments. Objective: The purpose of this study was to assess major neurodevelopmental aspects of children conceived by assisted reproductive treatments compared to spontaneously conceived children during the early school years.
Material & Methods: In this follow-up study, mothers of 358 children born following assisted reproductive treatments and 401 spontaneously-conceived children were interviewed by telephone regarding their children's health and development, when the children were 7-8 years old. The main outcomes were maternal responses to 4 questionnaires: Developmental Coordination Disorder Questionnaire, Short Sensory Profile, Autism Spectrum Screening Questionnaire, and the Attention-deficit hyperactivity disorder (ADHD) Child Symptom Inventory-4 subscale. Mothers reported diagnoses of ADHD and autism spectrum disorder. Results: No significant differences were found between the groups in Developmental Coordination Disorder Questionnaire or Short Sensory Profile scores upon univariate or multivariable analyses. There was a slightly higher but nonsignificant rate of diagnosed ADHD among children in the assisted reproductive treatment group (9.6% vs 5.5%; P = .18); on multivariable analysis, a nonsignificant increase in ADHD was also found for assisted reproductive treatment children (hazard ratio 1.45, 95% confidence interval 0.81-2.61). Regarding the Child Symptom Inventory-4 criteria for ADHD among the children who had never been diagnosed, there was also a slightly higher but nonsignificant rate among the assisted reproductive treatment children compared to spontaneously-conceived children on univariate (2.4% vs 1.8%; P = .50) and multivariable analysis (odds ratio 0.88, 95% confidence interval 0.27-2.86). Autism spectrum disorder diagnosis or Autism Spectrum Screening Questionnaire scores were not significantly different; however, 5 of the 6 children with autism spectrum disorder diagnoses were in the assisted reproductive treatment group. Conclusions: Neurodevelopmental measures were similar in both groups, although nonconclusive regarding ADHD and autism spectrum disorder risk. These findings contribute to the knowledge regarding long-term assisted reproductive treatment outcomes. abstract_id: PUBMED:32101068 Long-term health of children conceived after assisted reproductive technology. The aim of this narrative review is to summarize the present knowledge on long-term outcome of children born after assisted reproductive technologies (ART). The main outcomes covered are neurodevelopment including cerebral palsy, cognitive development, attention deficit hyperactivity disorder and autism spectrum disorder, growth, cardiovascular function, diabetes type 1, asthma, malignancies, and reproductive health. Results have mainly been obtained from systematic reviews/meta-analyses and large registry studies. It has been shown that children born after ART, when restricted to singletons, have a similar outcome for many health conditions as their spontaneously conceived peers. For some outcomes, particularly cardiovascular function and diabetes, studies show some higher risk for ART singletons or subgroups of ART singletons. The fast introduction of new ART techniques emphasizes the importance of continuous surveillance of children born after ART. Answer: The evidence regarding the risk of developing infantile autism in children born after assisted conception is mixed, with some studies suggesting a decreased risk, while others indicate no significant difference compared to naturally conceived children. A Danish population-based matched case-control study found a 59% decreased risk for developing infantile autism among children conceived after assisted conception (PUBMED:17456530).
However, a population-based follow-up study in Denmark found no increased risk of autism spectrum disorders (ASD) in children born after assisted conception when adjusting for confounding factors (PUBMED:20584728). A systematic review and meta-analysis assessing the associations between assisted conception and cerebral palsy (CP), ASD, and developmental delay showed inconsistent results for ASD, with no studies assessing the risk exclusively in children born after ovulation induction (PUBMED:19124707). Another systematic review found no evidence that assisted reproductive technology (ART) significantly increases the risk of ASD in offspring (PUBMED:24129612). Research on the long-term outcomes of children born after ART indicates increased morbidity overall, although the data on autism specifically are difficult to interpret (PUBMED:24793634); where statistically significant increases in neurodevelopmental disorders after ART have been reported, they are predominantly marginal, and their clinical significance is debatable given the low incidence of these disorders in the general population (PUBMED:36291447). A study examining genetic events in children with ASD exposed to assisted reproduction found no statistically significant differences compared to those conceived naturally (PUBMED:24842673). A follow-up study assessing neurodevelopmental outcomes at school age found no significant differences in neurodevelopmental measures between children born following assisted reproductive treatments and spontaneously conceived children, although the findings were nonconclusive regarding ADHD and autism spectrum disorder risk (PUBMED:33135961). In summary, while some studies suggest a decreased risk of infantile autism in children born after assisted conception, the overall evidence does not conclusively support a significant difference in risk compared to natural conception. Further research is needed to clarify these findings (PUBMED:32101068).
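The percentage figures quoted in the answer are direct arithmetic transformations of the reported odds ratios: an OR below 1 corresponds to a (1 - OR) x 100% reduction in the odds. A one-line sketch of that conversion, using the figures from PUBMED:17456530:

```python
# OR -> percent decrease in odds, using the crude and adjusted estimates
# reported in PUBMED:17456530.
for label, odds_ratio in [("crude", 0.41), ("adjusted", 0.37)]:
    pct_decrease = (1 - odds_ratio) * 100
    print(f"{label}: OR {odds_ratio:.2f} -> {pct_decrease:.0f}% decreased odds")
# crude: OR 0.41 -> 59% decreased odds
# adjusted: OR 0.37 -> 63% decreased odds
```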
Instruction: Do patient and nurse outcome differences exist between 2 negative pressure wound therapy systems? Abstracts: abstract_id: PUBMED:22552107 Do patient and nurse outcome differences exist between 2 negative pressure wound therapy systems? Purpose: We prospectively compared the effectiveness of foam- and gauze-based negative pressure wound therapy (NPWT) systems on wound healing, pain, cost, and hospital length of stay. We also compare ease of use and time in performing dressing changes reported by nurses. Design: Randomized, controlled clinical trial comparing foam- and gauze-based NPWT systems. Subjects And Setting: Eleven adult subjects with physician orders for NPWT participated in the study. Subjects were middle-aged, white, and male. Methods: Six subjects were randomly allocated to foam-based and 5 to gauze-based NPWT systems. Wound healing rates and pain at the first dressing change were measured using a centimeter ruler and a visual analog scale, respectively. Wound care costs were tabulated from a checklist of supplies used and nurse perceptions were measured by responses to Likert-type surveys. Relationships between NPWT system and selected variables were measured using Wilcoxon rank sum tests. Results: Median wound healing rates did not differ significantly between foam-based and gauze-based NPWT systems when measured in centimeters at first dressing change for length (10.6 vs 16.5, P = .58), width (2.7 vs 4.2, P = .41), depth (2.2 vs 2.5, P = .78), and tunneling and undermining (both 0 vs 0, P = .82 and .79, respectively). No differences were detected in pain rating at first dressing change (3.2 vs 2.4, P = .77), cost of wound care ($510.18 vs $333.54 P = .86), or hospital length of stay (26.33 vs 14.8 days; P = .58), respectively. There were no differences in nurses' experiences in ease of performing dressing changes and mean time to perform the first dressing change for foam- or gauze-based NPWT systems: 32.3 vs 38.8 minutes; P = .52, respectively. Conclusions: In a pilot study comparing the effectiveness of foam- and gauze-based NPWT systems, no statistically significant differences were found in patient wound healing, pain, length of stay, or cost of wound care. Nursing time and perceptions about the ease of preparing and completing dressing changes did not differ between systems. Additional research is needed to more definitively determine any differences in wound healing or nurse satisfaction using gauze- versus foam-based NPWT systems. abstract_id: PUBMED:28385111 Negative Pressure Wound Therapy for Treatment of Mesh Infection After Abdominal Surgery: Long-Term Results and Patient-Reported Outcome. Background And Aims: Treatment of synthetic mesh infections has previously often resulted in mesh explantation. Negative pressure wound therapy has been used in these situations with encouraging results. The aims of this study were to evaluate wound healing, mesh preservation, and patient-reported outcome after negative pressure wound therapy of mesh infections. Material And Methods: Medical records of patients treated with negative pressure wound therapy for mesh infection and age-matched mesh-operated controls without postoperative complications were scrutinized in a retrospective study. An abdominal wall complaints questionnaire was used to evaluate patient-reported outcome. Results: Of 722 mesh operations performed 2005-2012, negative pressure wound therapy was used for treating postoperative mesh infections in 48 patients. 
A total of 48 age-matched controls were recruited from patients without wound complications. No differences were found between groups regarding preoperative characteristics. The following perioperative characteristics were significantly more frequent in the negative pressure wound therapy group: emergency operation, dirty/infected surgical field, surgical techniques other than laparoscopic intraperitoneal onlay mesh repair, implantation of more than one mesh, larger mesh size, longer duration of surgery, and mesh not completely covered with anterior rectus fascia. The entire mesh was salvaged in 92%, while four meshes were partly excised. Wounds healed in 88% after a median of 110 (range 3-649) days. In total, 85% in the negative pressure wound therapy group and 75% in the control group answered the questionnaire. There were significant differences regarding pain, other abdominal wall symptoms, and satisfaction with the final result, in favor of the controls. Conclusion: No mesh had to be explanted and wound healing was achieved in the majority of patients when negative pressure wound therapy was used for treatment of mesh infections. However, time to healing was long, and numerous procedures were sometimes needed. Positive long-term outcome was more frequently reported among controls. abstract_id: PUBMED:21633272 Navigating new technologies in negative pressure wound therapy. Plastic surgeons and their support staff are tasked with proficient management of a wide variety of complex wounds. Since its introduction, negative pressure wound therapy (NPWT) has increasingly been used within the plastic surgery specialty to improve and simplify wound management. Increased usage of the therapy has prompted the development of a myriad of new NPWT systems. While an expanded product selection allows greater choice to maximize patient outcomes, sound decision-making also requires a clear understanding of the characteristics of various NPWT systems and applications. Wound-specific NPWT systems of varying size are available for low- to moderate-severity wounds, clean closed incisions, and acute abdominal wounds. Wound size and severity, amount of exudate, and patient mobility issues have become important considerations when choosing an NPWT device. The purpose of this article is to familiarize the reader with the latest sophistications in NPWT systems to guide decision making and usage. abstract_id: PUBMED:29235669 Portable negative-pressure wound therapy for pyoderma gangrenosum: Report of two cases. Pyoderma gangrenosum is a chronic non-infectious neutrophilic dermatosis that causes undermining ulcers. Topical therapies for the deep ulcers of pyoderma gangrenosum have not been established. To investigate whether negative-pressure wound therapy is effective for a pyoderma gangrenosum ulcer, we used the PICO single use negative-pressure wound therapy system (Smith & Nephew, London, UK) for two pyoderma gangrenosum patients. In these cases, the ulcers decreased in size and necrolytic tissue was removed notably. Moreover, there were no secondary infections, nor were there Koebner phenomena. Our cases suggest that portable negative-pressure wound therapy can be a treatment option for deep, intractable ulcers caused by pyoderma gangrenosum. Because portable negative-pressure wound therapy devices afford increased mobility to patients, they can give the patient a better quality of life than standard negative-pressure wound therapy systems do.
abstract_id: PUBMED:36199166 Patient experience of negative pressure wound therapy: A qualitative study. Aim: This study aimed to clarify the treatment experience of patients undergoing negative pressure wound therapy (NPWT). Design: This study used a qualitative design. Methods: Seventeen inpatients took part in semi-structured interviews about their experiences of treatment with negative pressure wound therapy. Results: Inpatients' answers were categorized into seven themes: pain and discomfort associated with treatment, physical limitations owing to the attached device, mental burden owing to the odour and noises of the attached device, social limitations owing to the attached device, advances in medical care and science, device personification and mixed feelings towards medical staff. The patients were able to tolerate the aforementioned limitations while feeling attachment and gratitude towards the device created through advances in medical care and science, and towards medical staff who helped them heal. In the future, we plan to develop an NPWT care guide. abstract_id: PUBMED:29153209 Using negative pressure therapy to prevent and treat wounds The range of Negative Pressure Wound Therapy (NPWT) devices has expanded considerably in recent years. Clinicians can now choose between motorised and non-motorised systems such as pumps, particularly portable devices, automated therapies allowing regular instillations, and systems adapted to specific indications such as "open abdominal wounds". This diversity of devices makes detailed medical prescriptions and good traceability of the nursing follow-up more important than ever. abstract_id: PUBMED:29087679 APPLICATION OF NEGATIVE PRESSURE THERAPY Negative pressure therapy is gradually taking on an increasingly important role in the treatment of chronic wounds because of its simple application in hospital or outpatient settings and good patient comfort with no pain. Chronic wound healing is accelerated in comparison with other conservative treatments. The level of negative pressure is between 40 and 125 mm Hg below ambient. The direct and indirect effects of negative pressure therapy help wound healing and provide good preparation for definitive surgical management of wounds. abstract_id: PUBMED:35146830 The impact of incisional negative pressure wound therapy on scar quality and patient-reported outcomes: A within-patient-controlled, randomised trial. Literature provides a moderate level of evidence for the beneficial effects of incisional negative pressure wound therapy (iNPWT) on scar quality. The purpose of this study was to establish if iNPWT results in improved scar outcomes in comparison to the standard of care. Therefore, a within-patient randomised controlled, open-label trial was conducted in transgender men undergoing gender-affirming mastectomies. One side was randomised to receive iNPWT (PICO™, Smith&Nephew) without suction drains, and the contralateral side received the standard dressing (Steri-Strips™) with a suction drain. Scar quality and questionnaires were bilaterally measured by means of objective assessments and patient-reported outcome measures (PROM) at 1, 3 and 12 months. Objective scar outcomes were scar pliability (Cutometer®), colouration (DSM-II) and scar width (3-D imaging). PROM outcomes were related to scars (POSAS and SCAR-Q) and body satisfaction (BODY-Q). From 85 included patients, 80 were included for analyses.
No significant difference between treatments was seen in the quantitative outcomes of scar pliability, colour, and width. For qualitative scar outcomes, significant findings favouring iNPWT were observed for several subscales of the POSAS, SCAR-Q, and BODY-Q. These effects could not be substantiated with linear mixed-model regression, signifying no statistically more favourable outcome for either treatment option. In conclusion, this study demonstrated that some PROM outcomes were more favourable for the iNPWT compared to standard treatment. In contrast, the quantitative outcomes showed no beneficial effects of iNPWT on scar outcomes. This suggests that iNPWT is of little benefit as a scar-improving therapy. abstract_id: PUBMED:23697051 Clinical audit of negative pressure therapy Negative pressure therapy is used to encourage the healing of a wound. At Bichat Hospital in Paris, a clinical audit was carried out to assess how the therapy is managed. It emerged that the procedure is used in accordance with recommendations and that the information given to the patient is sufficient. However, nurses' professional knowledge must be consolidated, and the traceability of the procedure in the patient's notes does not allow for optimal monitoring of the wound. Improvement actions are envisaged. abstract_id: PUBMED:24060024 High pressure injection injury of the foot: a role of negative pressure wound therapy. High pressure injection injuries (HPII) of the foot are not common in every-day practice. We present a 50-year-old patient with an HPII of the left foot caused by a water gun in a self-inflicted accident while working as a technical diver four meters below the sea surface. Surgical treatment included extensive debridement of necrotic tissue and fibrin deposits and removal of foreign material. A negative pressure wound therapy (NPWT) device was applied, resulting in a good wound base for subsequent skin grafting and leading to a good functional and cosmetic outcome. Answer: According to the study reported in PUBMED:22552107, no statistically significant differences were found between foam-based and gauze-based negative pressure wound therapy (NPWT) systems in terms of patient wound healing, pain, length of stay, or cost of wound care. Additionally, nursing time and perceptions about the ease of preparing and completing dressing changes did not differ between the two systems. This suggests that, at least within the context of this study, there were no notable patient and nurse outcome differences between the two NPWT systems. However, the authors of the study do recommend additional research to more definitively determine any differences in wound healing or nurse satisfaction using gauze- versus foam-based NPWT systems.
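The pilot trial in PUBMED:22552107 compared its small foam and gauze groups with Wilcoxon rank sum tests. A minimal sketch of that comparison in Python with SciPy follows; the per-patient measurements below are hypothetical stand-ins, since the trial's raw data are not published.

```python
from scipy import stats

# Hypothetical wound-length values (cm) at first dressing change,
# illustrative only; group sizes mirror the trial (6 foam, 5 gauze)
foam = [10.6, 14.0, 12.1, 17.3, 10.2, 15.9]
gauze = [16.5, 11.2, 15.8, 9.0, 15.1]

# Wilcoxon rank sum test: rank-based, so no normality assumption,
# which suits samples this small
stat, p = stats.ranksums(foam, gauze)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.3f}")
# A p-value of .05 or above would be read as no significant difference
```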
Instruction: Does coronary calcium score predict future cardiac function? Abstracts: abstract_id: PUBMED:19615766 Can total cardiac calcium predict the coronary calcium score? Background: Mitral annular calcification (MAC) shares the same risk factors as atherosclerosis and is associated with coronary artery disease as well as cardiovascular events. However, sensitivity and positive predictive value are low. We hypothesized that a global echocardiographic calcium score would better predict coronary atherosclerotic burden, as assessed by coronary artery calcium score (CAC), than MAC alone. Methods: An echocardiographic score was devised to measure global cardiac calcification in a semi-quantitative manner; this included calcification in the aortic valve and root, the mitral valve and annulus, and the sub-mitral apparatus. This score, and a simplified version, were compared with a similar calcification score by CT scan, as well as the CAC. Results: There was a good correlation between the two global calcification scores; the echocardiographic score also correlated with CAC. Using CAC >400 as a measure of severe coronary atherosclerosis, an echocardiographic score ≥5 had a positive predictive value of 60%. Importantly, the simplified score performed equally well (≥3 had a positive predictive value of 62%). Conclusions: Global cardiac calcification, assessed by CT scan or echocardiography, correlates with the extent of coronary calcium. A semi-quantitative calcium score can be easily applied during routine echocardiographic interpretation and can alert the reader to the possibility of severe coronary atherosclerosis. abstract_id: PUBMED:23341839 A head-to-head comparison of the coronary calcium score by computed tomography with myocardial perfusion imaging in predicting coronary artery disease. Objectives: The coronary artery calcium (CAC) score has been shown to predict future cardiac events. However, the extent to which a CAC score adds to the diagnostic performance of myocardial perfusion imaging (MPI) by single photon emission computed tomography (SPECT) is unclear. The purpose of this study is to investigate the correlation between CAC score and SPECT in patients with suspected coronary artery disease. Methods: A retrospective review of the CAC scores by use of the Agatston calcium scoring method and cardiac SPECT diagnostic reports was conducted in 48 patients, who underwent both coronary computed tomography (CT) and SPECT examinations due to suspected coronary artery disease. A Pearson correlation test was used to determine the relation between CAC scores and MPI-SPECT assessments with regard to the evaluation of the extent of disease. Results: Forty-seven percent of the patients had CAC scores greater than 100, while 42% of these patients demonstrated abnormal, or probably abnormal, MPI-SPECT. Of the 23% of patients with a zero CAC score, only 7% had normal MPI-SPECT findings. No significant correlation was found between the CAC scores and MPI-SPECT assessments (r value ranged from 0.012 to 0.080), regardless of the degree of coronary calcification. Conclusions: There is a lack of correlation between the CAC scores and the MPI-SPECT findings in the assessment of the extent of coronary artery disease. CAC scores and MPI-SPECT should be considered complementary approaches in the evaluation of patients with suspected coronary artery disease. abstract_id: PUBMED:33614311 Coronary Artery Calcium Score: Assessment of SYNTAX Score and Prediction of Coronary Artery Disease.
Background With the advent of modern imaging technologies, non-invasive assessment of the coronary system is not only possible but its complexity and plaque burden can also be quantified. This study aims to determine whether the calcium score on computed tomography coronary angiography (CTCA) is associated with the complexity of coronary artery disease (CAD), as determined by the SYNTAX score on coronary angiography, and to determine which cut-off value of the coronary artery calcium (CAC) score can predict severe CAD in our population. Methodology This was a cross-sectional study conducted at the Rawalpindi Institute of Cardiology, Pakistan from January 2019 to March 2020. The calcium score of all patients with low-to-intermediate pretest probability of CAD was calculated on CTCA. All patients who had significant disease on CTCA were subjected to conventional coronary angiography, and the SYNTAX score was calculated, which was later used to determine the association between calcium and SYNTAX score. Results A total of 90 patients were included in the study. CAC and SYNTAX score were found to be positively correlated (Pearson coefficient [r] = 0.354; p = 0.001). A total CAC score cut-off value of 212 identified patients with a SYNTAX score of >27. The sensitivity was 66.7% and specificity was 70.5%, with an area under the curve of 0.743. The mean calcium score of patients in our study group was 223, with a maximum score of 1,216 and a minimum score of zero. Conclusion A CAC score greater than 212 is associated with a high SYNTAX score, indicating complex disease. Only age is an independent predictor of calcium score. abstract_id: PUBMED:29383526 Leveraging the coronary calcium scan beyond the coronary calcium score. Non-contrast cardiac computed tomography to obtain the coronary artery calcium score has become an established diagnostic procedure in the clinical setting, and is commonly employed in clinical and population-based research. This state-of-the-art review paper highlights the potential gain in information that can be obtained from non-contrast coronary calcium scans without any necessary modifications to the scan protocol. This includes markers of cardio-metabolic health, such as the amount of epicardial fat and liver fat, but also markers of general health including bone density and lung density. Finally, this paper addresses the importance of incidental findings and of radiation exposure accompanying imaging with non-contrast cardiac computed tomography. Despite the fact that coronary calcium scan protocols have been optimized for the visualization of coronary calcification in terms of image quality and radiation exposure, it is important for radiologists, cardiologists and medical specialists in the field of preventive medicine to acknowledge that numerous additional markers of cardio-metabolic health and general health can be readily identified on a coronary calcium scan. Key Points: • The coronary artery calcium score substantially increased the use of cardiac CT. • Cardio-metabolic and general health markers may be derived without changes to the scan protocol. • Those include epicardial fat, aortic valve calcifications, liver fat, bone density, and lung density. • Clinicians must be aware of this potential additional yield from non-contrast cardiac CT. abstract_id: PUBMED:36852285 Coronary calcium score in COVID-19 survivors: Association with cardiac injury and cardiac function after 6 weeks.
Aims: Cardiac manifestations are common in COVID-19; elevated serum troponin levels or myocardial dysfunction on trans-thoracic echocardiography (TTE) is often observed. Both parameters are associated with increased in-hospital mortality. Possibly, subclinical coronary atherosclerosis plays a role, whose severity can be assessed by calculating the coronary artery calcium (CAC) score. This study aims to determine the relation between coronary atherosclerosis and cardiac manifestations in COVID-19 survivors. Methods: This study was conducted at the Leiden University Medical Center. All patients admitted for COVID-19 were included and scheduled for a 6-week follow-up visit with trans-thoracic echocardiography (TTE). CAC was assessed according to an ordinal score on non-gated, non-contrast enhanced computed tomography of the chest. Patients with and without CAC were compared on cardiac injury, as reflected by elevated serum troponin levels, and impaired cardiac function assessed through TTE. Results: In total, 146 patients were included. Mean age was 62 years and 62% of the patients were male. During admission, patients with CAC showed significantly higher levels of troponin (19 ng/L vs 10 ng/L; p < 0.01). Overall, mild echocardiographic abnormalities were seen; 12% showed reduced left ventricular function (left ventricular ejection fraction of <50%) and 14% reduced right ventricular function (tricuspid annular planar systolic excursion ≤17 mm). Following multivariable adjustments, there was no significant relation between CAC and myocardial function at 6 weeks. Conclusion: The present study shows that coronary atherosclerosis is associated with cardiac injury in COVID-19 survivors. However, no significant relation with impaired cardiac function was demonstrated. abstract_id: PUBMED:34495426 Cardiac calcium score in systemic sclerosis. This study estimated the cardiac coronary Ca score (CCS) and extra-coronary Ca score (ECCS) in asymptomatic systemic sclerosis (SSc) patients and their relation to different disease and patient variables. The CCS and ECCS were estimated in 20 asymptomatic SSc patients compared to 20 age- and sex-matched healthy controls using non-contrast cardiac computed tomography. All underwent cardiac history taking, examination, echocardiography, body mass index (BMI) measurement, complete blood picture, erythrocyte sedimentation rate, and lipid profile estimation. The SSc patients were 11 females and 9 males with a mean age of 42.55 ± 9.145 years and a mean disease duration of 12.9 ± 6.774 years. CCS was reported in 9 (45%) SSc cases and 2 (10%) of the controls (p = 0.013) and was significantly greater in SSc patients (58.4 ± 175.443) than in the control group (0.7 ± 2.25); (p = 0.01). The ECCS was significantly higher in SSc cases (194.45 ± 586.511) than in the control group (2.8 ± 7.8); (p = 0.001) and was reported in 16 (80%) SSc cases and 3 (15%) of controls; (p = 0.000). Limited scleroderma cases had higher scores than the diffuse type. Patients with a total Ca score >100 were older (p = 0.016), had longer disease duration (p = 0.001) and greater BMI (p = 0.002). A significant correlation was found between the log-transformed CCS and disease duration, age, BMI, left ventricular mass, and mass index. Systemic sclerosis patients are at increased risk of subclinical cardiovascular disease, and cardiac Ca scoring is a noninvasive and reliable method for detecting it. Extra-coronary calcification may be an earlier indicator of this. Disease duration is a determinant risk factor for cardiac calcification in SSc.
Key Points • Systemic sclerosis patients are at high risk of accelerated atherosclerosis and cardiovascular diseases. Coronary atherosclerosis was previously estimated in SSc patients through coronary angiography. A novel method of assessing coronary artery disease is the coronary calcium score, as determined by multidetector computed tomography, which measures the coronary artery calcification that occurs in atherosclerotic plaque. In this study, the cardiac coronary and extra-coronary Ca scores were evaluated in relation to disease characteristics in asymptomatic SSc patients for early detection of coronary artery disease. abstract_id: PUBMED:38336618 Cardiac troponins and coronary artery calcium score: a systematic review. An early diagnosis of atherosclerosis, particularly at the subclinical stage, can play a remarkable role in reducing mortality and morbidity. Because assessing coronary artery calcification (CAC) involves radiation exposure, finding biomarkers associated with CAC could be useful for identifying individuals at high risk of a high CAC score. In this review, we focused on the association between cardiac troponins (hs-cTns) and CAC to achieve insight into the pathophysiology of CAC. In October 2022, we systematically searched the Web of Science, Scopus, PubMed, and Embase databases to find human observational studies which have investigated the association of CAC with cardiac troponins. To appraise the included articles, we used the Newcastle-Ottawa scale (NOS). Out of 520 records, 10 eligible studies were included. Based on findings from longitudinal studies and cross-sectional analyses, troponin T and I were correlated with the occurrence of CAC and its severity. Two of the most important risk factors that affect the correlation between hs-cTns serum levels and CAC were age and gender. The elevation of cardiac troponins may affect the progression of CAC and future cardiovascular diseases. Verifying the association between cardiac troponins and CAC may help identify individuals at enhanced risk of cardiovascular disease (CVD) complications and could establish innovative targets for pharmacological therapy. abstract_id: PUBMED:23110244 Coronary artery calcium score: Re-evaluation of its predictive value for coronary artery disease. Coronary artery disease is the leading cause of death in advanced countries and its prevalence is increasing in developing countries. Cardiac computed tomography (CT) has been increasingly used in the diagnosis of coronary artery disease owing to rapid improvements in multislice CT scanners over the last decade, and this less-invasive technique has become a potentially effective alternative to invasive coronary angiography. Quantifying the amount of coronary artery calcium with cardiac CT has been widely accepted as a reliable non-invasive technique for predicting the risk of future cardiovascular events. However, the main question that remains uncertain is whether routine, widespread coronary artery calcium scoring in individual patients will result in an overall improvement in quality of care and clinical outcomes. In this commentary, we discuss the clinical value of coronary artery calcium scoring with regard to predicting adverse cardiac events.
We also discuss the applications of coronary artery calcium scores in patients with different risk groups. abstract_id: PUBMED:37660746 Interpreting the Coronary Artery Calcium Score - Critical Information for the Practicing Physician. Coronary artery calcium scanning is a routine test for assessing the severity of atherosclerosis in asymptomatic individuals. This inexpensive, noninvasive test quantifies the calcium deposition in the 4 principal coronary arteries. Correct interpretation is important to the physician (for recommending therapy) and to the patient (for determining his or her lifetime risk of a cardiovascular event). A score of 0 indicates that a cardiovascular event is extremely unlikely in the next 5 years. In contrast, a score greater than 0 portends a coronary event. The higher the score, the greater the risk. Both the arterial location of the calcium and the number of coronary arteries involved alter the interpretation of the calcium score. At any given age, females have significantly lower scores than males. One-third of individuals with scores greater than 1000 will have a cardiovascular event within 3 years. For all elevated calcium scores, aggressive treatment is warranted, including significant lifestyle changes and medications to reduce low-density lipoprotein cholesterol. Understanding the importance of the coronary artery calcium score will result in improved therapy and patient compliance. abstract_id: PUBMED:25585030 Scoring of coronary artery calcium scans: history, assumptions, current limitations, and future directions. Coronary artery calcium (CAC) scanning is a reliable, noninvasive technique for estimating overall coronary plaque burden and for identifying risk for future cardiac events. Arthur Agatston and Warren Janowitz published the first technique for scoring CAC scans in 1990. Given the lack of available data correlating CAC with burden of coronary atherosclerosis at that time, their scoring algorithm was remarkable, but somewhat arbitrary. Since then, a few other scoring techniques have been proposed for the measurement of CAC including the Volume score and Mass score. Yet despite new data, little in this field has changed in the last 15 years. The main focus of our paper is to review the implications of the current approach to scoring CAC scans in terms of correlation with the central disease - coronary atherosclerosis. We first discuss the methodology of each available scoring system, describing how each of these scores make important indirect assumptions in the way they account (or do not account) for calcium density, location of calcium, spatial distribution of calcium, and microcalcification/emerging calcium that might limit their predictive power. These assumptions require further study in well-designed, large event-driven studies. In general, all of these scores are adequate and are highly correlated with each other. Despite its age, the Agatston score remains the most extensively studied and widely accepted technique in both the clinical and research settings. After discussing CAC scoring in the era of contrast enhanced coronary CT angiography, we discuss suggested potential modifications to current CAC scanning protocols with respect to tube voltage, tube current, and slice thickness which may further improve the value of CAC scoring. We close with a focused discussion of the most important future directions in the field of CAC scoring. 
Answer: The coronary calcium score (CAC) is a non-invasive measure used to estimate the burden of coronary artery calcification, which is a marker of atherosclerosis and has been associated with the risk of future cardiac events. However, the predictive value of CAC for future cardiac function specifically is not directly addressed in the provided abstracts. Some studies suggest that CAC is associated with the severity of coronary artery disease (CAD) and can be used to predict the presence of severe CAD. For instance, one study found that a global cardiac calcification score, which includes CAC, correlated with the extent of coronary calcium and could alert to the possibility of severe coronary atherosclerosis (PUBMED:19615766). Another study indicated that a CAC score greater than 212 is associated with a high SYNTAX score, indicating complex disease, and only age is an independent predictor of calcium score (PUBMED:33614311). However, the relationship between CAC and cardiac function, particularly after a cardiac event such as COVID-19, is less clear. One study showed that while coronary atherosclerosis was associated with cardiac injury in COVID-19 survivors, there was no significant relation with impaired cardiac function at 6 weeks (PUBMED:36852285). This suggests that while CAC may be indicative of cardiac injury, it does not necessarily predict ongoing cardiac dysfunction. Moreover, the coronary calcium score has been compared with other diagnostic tools such as myocardial perfusion imaging (MPI) by single photon emission computed tomography (SPECT), and no significant correlation was found between the CAC scores and MPI-SPECT assessments in the evaluation of the extent of coronary artery disease (PUBMED:23341839). This indicates that CAC scores and MPI-SPECT may be complementary but not interchangeable in assessing cardiac health. In summary, while CAC is a valuable tool for assessing the risk of future cardiovascular events and the presence of coronary artery disease, the abstracts provided do not offer conclusive evidence that CAC can predict future cardiac function. It may be associated with cardiac injury, but its relation to ongoing cardiac function requires further investigation.
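Several of the abstracts above report cutoff-based test performance, such as CAC > 212 detecting a SYNTAX score > 27 with 66.7% sensitivity and 70.5% specificity (PUBMED:33614311). The sketch below shows the sensitivity/specificity arithmetic behind such a cutoff; the scores and labels are hypothetical rather than the study's data, and Python is assumed.

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule 'score > cutoff' for
    predicting label == 1 (here, 1 = SYNTAX score > 27)."""
    tp = sum(s > cutoff and y == 1 for s, y in zip(scores, labels))
    fn = sum(s <= cutoff and y == 1 for s, y in zip(scores, labels))
    tn = sum(s <= cutoff and y == 0 for s, y in zip(scores, labels))
    fp = sum(s > cutoff and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical CAC scores paired with complex-disease labels
scores = [0, 45, 130, 212, 260, 340, 520, 880, 1216, 95]
labels = [0, 0, 1, 0, 1, 0, 1, 1, 1, 0]
sens, spec = sens_spec(scores, labels, cutoff=212)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 80.0%, 80.0%
```

Sweeping the cutoff across all observed score values and plotting sensitivity against 1 - specificity yields the ROC curve whose area (0.743 in the cited study) summarizes discrimination.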
Instruction: Are ataques de nervios in Puerto Rican children associated with psychiatric disorder? Abstracts: abstract_id: PUBMED:16239868 Are ataques de nervios in Puerto Rican children associated with psychiatric disorder? Objective: To provide the first empirical analysis of a cultural syndrome in children by examining the prevalence and psychiatric correlates of ataques de nervios in an epidemiological study of the mental health of children in Puerto Rico. Method: Probability samples of caretakers of children 4-17 years old in the community (N = 1,892; response rate: 90%) and in clinical services (N = 761; response rate 72%) were administered structured interviews to assess the presence and correlates of ataques de nervios. Results: Nine percent of children in the community sample and 26% of children in the clinical sample had a reported history of an ataque de nervios. In contrast to the overall community and clinical samples, which had more boys in them, the ataque de nervios groups in both samples had more girls in them. Family history of ataques de nervios was associated with ataques de nervios in children in both samples. Across a wide range of depression, anxiety, and disruptive disorders, children who reported an ataque de nervios were more likely to meet research criteria for psychiatric disorder in both samples. Conclusions: Ataques de nervios are a frequently reported cultural syndrome among children in Puerto Rico. Adolescent girls are more likely to report this experience. Ataques de nervios have a significant relationship with psychiatric disorder and impairment in Puerto Rican children. abstract_id: PUBMED:20010029 Ataques de Nervios and their psychiatric correlates in Puerto Rican children from two different contexts. Among Latino adults and children, ataques de nervios has been associated with an array of psychiatric disorders. Using data from a probability sample of Puerto Rican children, aged 5 to 13 years (N = 2491), we assessed the lifetime prevalence and psychiatric correlates of ataques in youth residing in the South Bronx, New York and San Juan, Puerto Rico. Baseline site comparisons indicated that between 4% and 5% of children had a lifetime prevalence of ataques (either by child or parent report) and that ataques were associated with greater global impairment and a host of childhood disorders within the previous twelve months. Ataques were also correlated with greater exposure to violence, as well as more stressful life events for the South Bronx sample. After controlling for several covariates, ataques continued to be significantly associated with psychopathology. Ataques are, therefore, a significant correlate of global impairment and childhood psychopathology among Puerto Rican youth. abstract_id: PUBMED:8445374 The prevalence of ataques de nervios in the Puerto Rico disaster study. The role of culture in psychiatric epidemiology. This paper presents one of the few epidemiological studies of a popular category of distress, ataques de nervios (attacks of nerves), in the cross-cultural psychiatric literature. As part of a major study of the psychological consequences of the 1985 floods and mudslides which caused considerable damage and death in Puerto Rico, a question was added to the Diagnostic Interview Schedule/Disaster Supplement concerning ataques de nervios. This additional item provided the opportunity to carry out the first study of this important Puerto Rican popular category of distress using a representative, community-based sample. 
This paper addresses several key questions about ataques de nervios which come from previous psychiatric and anthropological literatures concerning the social correlates of who experiences an ataque de nervios and the relationship of ataques to social distress and psychiatric disorder. People who reported an ataque de nervios were more likely to be female, older, less educated, and formerly married. They were also more likely to meet criteria for anxiety and depressive disorders than those who had not experienced an ataque. The picture that emerges from our analyses is that those who suffer from a combination of social disadvantage, psychiatric disorder, and poor perceived health are more likely to experience an ataque de nervios. abstract_id: PUBMED:12202287 Dissociation, childhood trauma, and ataque de nervios among Puerto Rican psychiatric outpatients. Objective: This study examined the relationships of dissociation and childhood trauma with ataque de nervios. Method: Forty Puerto Rican psychiatric outpatients were evaluated for frequency of ataque de nervios, dissociative symptoms, exposure to trauma, and mood and anxiety psychopathology. Blind conditions were maintained across assessments. Data for 29 female patients were analyzed. Results: Among these 29 patients, clinician-rated dissociative symptoms increased with frequency of ataque de nervios. Dissociative Experiences Scale scores and diagnoses of panic disorder and dissociative disorders were also associated with ataque frequency, before corrections were made for multiple comparisons. The rate of childhood trauma was uniformly high among the patients and showed no relationship to dissociative symptoms and disorder or number of ataques. Conclusions: Frequent ataques de nervios may, in part, be a marker for psychiatric disorders characterized by dissociative symptoms. Childhood trauma per se did not account for ataque status in this group of female outpatients. abstract_id: PUBMED:12211325 Comparative phenomenology of ataques de nervios, panic attacks, and panic disorder. This article examines a clinical sample of 66 Dominican and Puerto Rican subjects who reported ataques de nervios and also psychiatric disorder, and disentangles the phenomenological experiences of ataque de nervios, panic attacks, and panic disorder. In-depth cultural interviews assessed the symptomatic phenomenology of ataque episodes from the local perspective as well as in terms of key panic features, such as recurrence, rapid peaking of symptoms, and lack of provocation. Independent diagnostic assessments of panic attacks and disorder were also used to establish the phenomenological overlap between ataque and panic. Our findings indicate that 36 percent of ataques de nervios fulfill criteria for panic attacks and between 17 percent and 33 percent for panic disorder, depending on the overlap method used. The main features distinguishing ataques that fulfill panic criteria from ataques that do not include whether the episodes were provoked by an upsetting event in the person's life and the rapidity of crescendo of the actual attack. A key finding is that ataques often share individual phenomenological features with panic episodes, but that these features usually do not "run together" during the ataque experience. This confirms previous findings that ataque is a more inclusive construct than panic disorder. The importance of these findings for the clinical diagnosis and treatment of persons with ataques is discussed. 
abstract_id: PUBMED:9693876 Subtypes of ataques de nervios: the influence of coexisting psychiatric diagnosis. The current study assesses the relationship between presenting symptomatology of the self-labeled Hispanic popular diagnosis of ataques de nervios and the specific co-morbid psychiatric diagnoses. Hispanic subjects seeking treatment at an anxiety disorders clinic (n = 156) were assessed with a specially designed self-report instrument for both traditional ataque de nervios and panic symptoms, and with structured or semistructured psychiatric interviews for Axis-I disorders. This report focuses on 102 subjects with ataque de nervios who also met criteria for panic disorder, other anxiety disorders, or an affective disorder. Distinct ataque symptom patterns correlated with co-existing panic disorder, affective disorders, or other anxiety disorders. Individuals with both ataque and panic disorder reported the most asphyxia, fear of dying, and increased fear during their ataques. People with ataques who also met criteria for affective disorder reported the most anger, screaming, becoming aggressive, and breaking things during ataques. Ataque positive subjects with other anxiety disorders were less salient for both panic-like and emotional-anger symptoms. The findings suggest that (a) ataque de nervios is a popular label referring to several distinct patterns of loss of emotional control, (b) the type of loss of emotional control is influenced by the associated psychiatric disorder, and (c) ataque symptom patterns may be a useful clinical marker for detecting psychiatric disorders. Further study is needed to examine the relationship between ataque de nervios and psychiatric disorders, as well as the relationship to cultural, demographic, environmental, and personality factors. abstract_id: PUBMED:12664233 Association of psychiatric disorders and different indicators of asthma in island Puerto Rican children. Background: We examined the relationship between three different indicators of childhood asthma (asthma diagnosis, ever had an asthma attack, and asthma hospitalization) and having any psychiatric disorder, comorbid disorders, or specific disorders. Three study hypotheses were examined: 1) there will be subject variability in responses to the asthma indicators; 2) there will be different observed associations between the three asthma indicators and psychiatric disorders; and 3) maternal mental health, family income, and maternal education will confound the associations between childhood asthma and psychiatric disorders. Method: Data were drawn from a community-based, random sample of 1,891 island Puerto Rican children aged 4-17 years. Information was collected through direct interview with children and adolescents and their primary caretakers. The Diagnostic Interview Schedule for Children (DISC) was used to determine DSM-IV diagnoses. Results: Thirty-two percent of the children had been diagnosed with asthma but only 22 % had ever experienced an asthma attack. Seventeen percent of the children had been hospitalized for asthma. Having been hospitalized for asthma was not associated with any of the psychiatric disorders, having a diagnosis was associated with some of the disorders, and having experienced an asthma attack was associated with almost all the disorders, after controlling for family income and maternal education and mental health. 
Conclusions: Determining and measuring asthma may be difficult because of confusion and differing perceptions of what constitutes asthma or an asthma attack. Future studies should consider the problems in capturing perceptions of asthma and severity in Puerto Rican children and should continue to explore the relationship between asthma and mental illness. abstract_id: PUBMED:3403750 Folk hero modeling therapy for Puerto Rican adolescents. Puerto Ricans are concentrated in the Northeastern United States in predominantly low socioeconomic status, urban Hispanic communities. Due to a constellation of stressors associated with their minority status, bilingualism and bicultural conflicts, Puerto Rican adolescents are at high risk of mental disorder. Research has shown that factors such as migration experiences, low socioeconomic status, and Hispanic values conflicting with Anglo culture (e.g., familism, spiritualistic and folk beliefs, orientation to time) are associated with higher rates of psychiatric symptomatology in the Hispanic population. Community mental health resources are under utilized, and traditional therapy modalities have had limited success in remedying the emotional and behavioural problems of Hispanics. This paper reviews several approaches to the delivery of culturally sensitive mental health services to Hispanic populations and describes the development of a new modality for Puerto Rican adolescents. The modality presents Puerto Rican folk heroes and heroines in a modelling therapy targeted towards enhancing adolescents' pride in their ethnic heritage, self-esteem, and adaptive coping with stress. The therapy was implemented on a small-group basis with 21 Puerto Rican adolescents, some of whom participated with their mothers. A clinical evaluation of the therapy was conducted by summarizing therapists' progress reports on each participant and by interviewing the participants about their impressions of the therapy experience. Progress reports and participants' self-reports indicated that the adolescents increased in self-disclosure and self-confidence; they gained pride in being Puerto Rican; they learned adaptive mechanisms for coping with stress; and they enjoyed learning about famous Puerto Ricans and their culture. abstract_id: PUBMED:1922846 Clinical features of young adult Hispanic psychiatric in-patients: the so-called "Puerto Rican syndrome". Among 118 young Hispanic (101 Puerto Rican) psychiatric inpatients, somatic complaints, violence, suicidal tendency, hallucinations, sudden onset of symptoms, and dissociative behavior each occurred in 46% or more of patients. There was no difference in clinical presentation between island Puerto Rican patients and Puerto Rican patients raised on the mainland. There was no relationship between these behaviors and any specific psychiatric disorder. Although there is a suggestion that more of these behaviors may occur among Puerto Rican Hispanics, further study with larger control groups is needed. abstract_id: PUBMED:15264963 Parental mental health, childhood psychiatric disorders, and asthma attacks in island Puerto Rican youth. Objective: Previous research documents an association of poor parental mental health with asthma in children. This study aims to determine whether the associations between parental mental health problems and childhood asthma attacks persist after controlling for childhood anxiety and depression and other confounding factors. 
Design/methods: A community household sample of youth ages 4 to 17 years and their primary caregivers from the US Commonwealth of Puerto Rico was studied to determine the associations between parental mental health and childhood asthma attacks. Regression models that predicted asthma attacks in youth controlled for parental mental health problems, childhood anxiety and depression, zone of residence, and parents' age, education, and perception of poverty. Results: After adjusting for children's depressive and anxiety disorders as well as other important confounders, associations between parental depression, suicide attempts, ataque de nervios, and history of mental health treatment and asthma attacks in offspring, by parental report, persisted. Additionally, the frequency of parental mental health problems was associated with children's asthma attacks. Conclusion: Parents with mental health problems were more likely to report histories of asthma attacks in their children compared with parents without mental health problems in Puerto Rico. These associations were not attributable to internalizing disorders in youth but persisted independent of childhood psychopathology and other confounding factors. Clinicians and researchers should recognize the relations between poor parental mental health and childhood asthma and explore the potential role of family psychosocial and behavioral factors related to the manifestation of the disease. Answer: Yes, ataques de nervios in Puerto Rican children are associated with psychiatric disorders. Studies have found that children who reported an ataque de nervios were more likely to meet research criteria for psychiatric disorder in both community and clinical samples. Specifically, ataques de nervios show a significant relationship with psychiatric disorder and impairment in Puerto Rican children (PUBMED:16239868). Additionally, ataques de nervios have been associated with greater global impairment and a host of childhood disorders within the previous twelve months (PUBMED:20010029). Furthermore, ataques de nervios often share individual phenomenological features with panic episodes, but these features usually do not "run together" during the ataque experience, indicating that ataque is a more inclusive construct than panic disorder (PUBMED:12211325). The presence of ataques de nervios has also been correlated with different patterns of loss of emotional control influenced by the associated psychiatric disorder (PUBMED:9693876). Therefore, ataques de nervios are not only a frequently reported cultural syndrome among children in Puerto Rico but also a significant correlate of global impairment and childhood psychopathology among this population.
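The community-versus-clinic contrast cited above (about 9% of 1,892 community children versus 26% of 761 children in clinical services reporting an ataque, PUBMED:16239868) can be checked informally with a pooled two-proportion z-test. The sketch below back-calculates the counts from the reported percentages, so they are approximate; Python is assumed.

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns z and a two-sided p-value
    based on the normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                          # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return z, 2 * (1 - phi)

# Approximate counts: 9% of 1,892 community vs 26% of 761 clinical children
z, p = two_prop_ztest(round(0.09 * 1892), 1892, round(0.26 * 761), 761)
print(f"z = {z:.2f}, two-sided p = {p:.3g}")
# A gap this large gives |z| > 11; the p-value underflows to 0 in floats
```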
Instruction: Are plant formins integral membrane proteins? Abstracts: abstract_id: PUBMED:11104517 Are plant formins integral membrane proteins? Background: The formin family of proteins has been implicated in signaling pathways of cellular morphogenesis in both animals and fungi; in the latter case, at least, they participate in communication between the actin cytoskeleton and the cell surface. Nevertheless, they appear to be cytoplasmic or nuclear proteins, and it is not clear whether they communicate with the plasma membrane, and if so, how. Because nothing is known about formin function in plants, I performed a systematic search for putative Arabidopsis thaliana formin homologs. Results: I found eight putative formin-coding genes in the publicly available part of the Arabidopsis genome sequence and analyzed their predicted protein sequences. Surprisingly, some of them lack parts of the conserved formin-homology 2 (FH2) domain and the majority of them seem to have signal sequences and putative transmembrane segments that are not found in yeast or animal formins. Conclusions: Plant formins define a distinct subfamily. The presence in most Arabidopsis formins of sequence motifs typical of transmembrane proteins suggests a mechanism of membrane attachment that may be specific to plant formins, and indicates an unexpected evolutionary flexibility of the conserved formin domain. abstract_id: PUBMED:23317636 Plant formins: membrane anchors for actin polymerization. In plants, the actin cytoskeleton plays a fundamental role in intracellular transport, cell growth, and morphology. Formins are central regulators of actin polymerization and actin-based processes in many eukaryotes. Plants have a diverse family of formins and this diversity arose early in land plant evolution, probably deriving from family expansion and domain acquisition. Recently, formins from different plant lineages have been studied and the focus of these studies is beginning to shift from biochemical characterization to in vivo function. In vivo studies have shown that distinct biochemical activities confer specific cellular functions. Despite these differences, many plant formins have in common a direct link to the plasma membrane, suggesting that formins in plants are important links between the plasma membrane and actin remodeling. abstract_id: PUBMED:34622613 Formins: the key regulators of plant cell morphology and development Formins are widely distributed in eukaryotes such as fungi, plants and animals. They play crucial roles in regulating the polymerization of actin, coordinating the synergistic interactions between actin and microtubules, and determining cell growth and morphology. Unlike formins from fungi and animals, plant formins have evolved into two plant-specific types. Generally, type II formins are believed to regulate the polarized growth of cells, and type I formins may regulate the cell expansion and division processes. Recent studies on the function of plant formins suggest it is inappropriate to classify the function of formins purely based on their structures. This review summarizes the domain organization of formins and their corresponding functions, as well as the underpinning mechanisms. Furthermore, the unsolved or unexplored issues along with future perspectives on plant formins are proposed and discussed.
The control of cell growth and polarity depends on a dynamic actin cytoskeleton that has the ability to reorganize in response to developmental and environmental stimuli. In animals and fungi, formins are just one of the four major classes of poly-L-proline-containing (PLP) proteins that form part of the signal-transduction cascade that leads to rearrangement of the actin cytoskeleton. Analysis of the Arabidopsis genome sequence indicates that, unlike animals and fungi, formins are the only class of conserved profilin-binding PLP proteins in plants. Moreover, plant formins show significant structural differences compared with their animal and fungal counterparts, raising the possibility that plant formins are subject to novel mechanisms of control or perform unique roles in plants. abstract_id: PUBMED:18977251 Plant formins: diverse isoforms and unique molecular mechanism. The completed genome from the model plant Arabidopsis thaliana reveals the presence of a diverse multigene family of formin-like sequences, comprising more than 20 isoforms. This review highlights recent findings from biochemical, cell biological and reverse-genetic analyses of this family of actin nucleation factors. Important advances in understanding cellular function suggest major roles for plant formins during cytokinesis and cell expansion. Biochemical studies on a subset of plant formins emphasize the need to examine molecular mechanisms outside of mammalian and yeast systems. Notably, a combination of solution-based assays for actin dynamics and time-lapse single-filament imaging with TIRFM provides evidence for the first non-processive formin (AtFH1) in eukaryotes. Despite these advances, it remains difficult to generate a consensus view of plant formin activities and cellular functions. One limitation to summarizing formin properties relates to the enormous variability in domain organization among the plant formins. Generating homology-based predictions that depend on conserved domains outside of the FH1 and FH2 will be virtually impossible for plant formins. A second major drawback is the lack of facile techniques for examining dynamics of individual actin filaments within live plant cells. This constraint makes it extremely difficult to bridge the gap between the biochemical characterization of a particular formin and its specific cellular function. There is promise, however, that recent technical advances in engineering appropriate fluorescent markers and new fluorescence imaging techniques will soon allow the direct visualization of cortical actin filament dynamics. The emergence of other model systems for studying the actin cytoskeleton in vivo, such as the moss Physcomitrella patens, may also enhance our knowledge of plant formins.
The comovirus Co-Pro and NTP-binding (NTB, putative helicase) proteins and the cognate nepovirus X2 and NTB proteins are among the best characterized plant virus integral membrane replication proteins and are functionally related to the picornavirus 2B, 2C, and 3A membrane proteins. The identification of membrane association domains and analysis of the membrane topology of these proteins is discussed. The evidence suggesting that these proteins have the ability to induce membrane proliferation, alter the structure and integrity of intracellular membranes, and modulate the induction of symptoms in infected plants is also reviewed. Finally, areas of research that need further investigation are highlighted. abstract_id: PUBMED:16313636 Arabidopsis group Ie formins localize to specific cell membrane domains, interact with actin-binding proteins and cause defects in cell expansion upon aberrant expression. The closely related proteins AtFH4 and AtFH8 represent the group Ie clade of Arabidopsis formin homologues. The subcellular localization of these proteins and their ability to affect the actin cytoskeleton were examined. AtFH4 protein activity was identified using fluorimetric techniques. Interactions between Arabidopsis profilin isoforms and AtFH4 were assayed in vitro and in vivo using pull-down assays and yeast-2-hybrid. The subcellular localization of group Ie formins was observed with indirect immunofluorescence (AtFH4) and an ethanol-inducible green fluorescent protein (GFP) fusion construct (AtFH8). AtFH4 protein affected actin dynamics in vitro, and yeast-2-hybrid assays suggested isoform-specific interactions with the actin-binding protein profilin in vivo. Indirect immunofluorescence showed that AtFH4 localized specifically to the cell membrane at borders between adjoining cells. Expression of an AtFH8 fusion protein resulted in GFP localization to cell membrane zones, similar to AtFH4. Furthermore, aberrant expression of AtFH8 resulted in the inhibition of root hair elongation. Taken together, these data suggest that the group Ie formins act with profilin to regulate actin polymerization at specific sites associated with the cell membrane. abstract_id: PUBMED:25546384 Formins: linking cytoskeleton and endomembranes in plant cells. The cytoskeleton plays a central part in spatial organization of the plant cytoplasm, including the endomebrane system. However, the mechanisms involved are so far only partially understood. Formins (FH2 proteins), a family of evolutionarily conserved proteins sharing the FH2 domain whose dimer can nucleate actin, mediate the co-ordination between actin and microtubule cytoskeletons in multiple eukaryotic lineages including plants. Moreover, some plant formins contain transmembrane domains and participate in anchoring cytoskeletal structures to the plasmalemma, and possibly to other membranes. Direct or indirect membrane association is well documented even for some fungal and metazoan formins lacking membrane insertion motifs, and FH2 proteins have been shown to associate with endomembranes and modulate their dynamics in both fungi and metazoans. Here we summarize the available evidence suggesting that formins participate in membrane trafficking and endomembrane, especially ER, organization also in plants. 
We propose that, despite some methodological pitfalls inherent to in vivo studies based on (over)expression of truncated and/or tagged proteins, formins are beginning to emerge as candidates for the so far somewhat elusive link between the plant cytoskeleton and the endomembrane system. abstract_id: PUBMED:19824476 At the plant side of formins--organizers of the actin cytoskeleton. Rearrangements of the actin cytoskeleton enable proper functioning of the cells under normal conditions, and also cellular adaptations to changes in the direct surroundings. Formins are actin-binding proteins, responsible for actin nucleation and further elongation of microfilaments. The distinguishing feature of formins is the presence of a conserved FH2 (formin homology 2) domain, as well as other domains typical of distinct formin classes. In animal cells, formins are involved in cytokinesis and determination and maintenance of the cell shape and polarity, but also in the formation of filopodia, endocytosis and many other processes. The presence of proteins from the formin family in plant cells, and their involvement in the tip growth and cytokinesis, has been determined only recently. As the functional organization of plant and animal cells is different, one can assume that the range of putative functions of plant formins might also be diverse. One such proposed function for formins in plants is the role of a linker protein within the WMC continuum (cell wall-plasma membrane-cytoskeleton). Unfortunately, at the moment the state of knowledge about plant formins in comparison with animal or fungal ones is much poorer. abstract_id: PUBMED:19631698 Formins and microtubules. Formins have recently been recognized as prominent regulators of the microtubule (MT) cytoskeleton where they modulate the dynamics of selected MTs in interphase and mitosis. The association of formins with the MT cytoskeleton and their action on MT dynamics are relatively unexplored areas, yet growing evidence supports a direct role in their regulation of MT stability independent of their activity on actin. Formins regulate MT stability alone or in combination with accessory MT binding proteins that have previously been implicated in the stabilization of MTs downstream of polarity cues. As actin and MT arrays are typically remodeled downstream of signaling pathways that orchestrate cell shape and division, formins are emerging as excellent candidates for coordinating the responses of the cytoskeleton in diverse regulated and homeostatic processes. Answer: Yes, plant formins are suggested to be integral membrane proteins. The presence of sequence motifs typical of transmembrane proteins in most Arabidopsis formins indicates a mechanism of membrane attachment that may be specific to plant formins (PUBMED:11104517). This is supported by the fact that many plant formins have a direct link to the plasma membrane, which suggests that they are important connectors between the plasma membrane and actin remodeling (PUBMED:23317636). Additionally, some plant formins contain transmembrane domains and participate in anchoring cytoskeletal structures to the plasmalemma, and possibly to other membranes (PUBMED:25546384). Furthermore, Arabidopsis group Ie formins have been shown to localize specifically to the cell membrane at borders between adjoining cells, which supports their role as integral membrane proteins (PUBMED:16313636).
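As a brief technical aside on the kind of sequence evidence behind this answer: candidate transmembrane segments are classically flagged by a Kyte-Doolittle sliding-window hydropathy scan. The sketch below is a minimal Python version using the standard 19-residue window and the 1.6 cutoff that Kyte and Doolittle suggested; the input sequence is an invented toy fragment, not a real formin, and a real analysis would also consider signal peptides and full topology prediction.

```python
# Kyte-Doolittle sliding-window hydropathy scan: windows whose mean
# hydropathy exceeds ~1.6 over 19 residues are classic candidates for
# transmembrane segments. The sequence below is an invented toy
# fragment, not a real formin.
KD = {  # Kyte-Doolittle hydropathy value per amino acid
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def candidate_tm_windows(seq, window=19, cutoff=1.6):
    """Return (start index, mean hydropathy) for windows above the cutoff."""
    scores = [KD[aa] for aa in seq]
    hits = []
    for i in range(len(seq) - window + 1):
        mean = sum(scores[i:i + window]) / window
        if mean >= cutoff:
            hits.append((i, mean))
    return hits

# Hypothetical fragment: a hydrophobic stretch flanked by polar residues.
toy = "MKSDE" + "LLIVAVLLFLAGIIVVLAI" + "RKDESQN"
for start, mean in candidate_tm_windows(toy):
    print(f"putative TM window at residue {start}: mean hydropathy {mean:.2f}")
```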
Instruction: Can residents accurately abstract their own charts? Abstracts: abstract_id: PUBMED:19240454 Can residents accurately abstract their own charts? Purpose: To assess the accuracy of residents' record review, using trained abstractors as a gold standard comparison. Method: In 2005, the authors asked 74 residents to review their own charts (n = 392) after they received brief instruction on both how to locate data on the medical record and how to use a data abstraction form. Trained abstractors then re-reviewed these charts to assess performance of preventive health care measures in medicine (smoking screening, smoking cessation advice, mammography, colon cancer screening, lipid screening, and pneumonia vaccination) and pediatrics (parent smoking screening, parent smoking cessation advice, car seat safety, car restraint use, eye alignment, and immunizations up to date). The authors then quantified agreement between the two record reviews and assessed the sensitivity and specificity of the residents versus the trained abstractors. Results: Overall resident-measured performance was similar (within 5%) to that of the trained abstractor for five of six measures in medicine and four of six in pediatrics. For the various measures, sensitivity of resident-measured performance ranged from 100% to 15% and specificity from 100% to 33% compared with the trained abstractors. Relative to the trained abstractor record review, residents did not overestimate their performance. Most residents' (81%) relative performance rankings did not change when the basis for the ranking was resident measured versus trained abstractor measured. Conclusions: Residents' self-abstraction can be an alternative to costly trained abstractors. Appropriate use of these data should be carefully considered, acknowledging the limitations. abstract_id: PUBMED:25301787 National operative case log growth charts in otolaryngology-head and neck surgery training. Objective: To report national standard case log growth curves for operative procedures in otolaryngology and to describe a method by which program directors can chart surgical case numbers over resident training to longitudinally assess sufficiency of cases and parity between residents. Study Design: Data visualization and analysis. Setting: American Council for Graduate Medical Education (ACGME) national case log data for otolaryngology residency. Subjects: National data set; no individual subjects. Methods: National statistical case log reports for otolaryngology were obtained from the ACGME for each postgraduate year (PGY) level in 2009, 2010, and 2011. Estimated means and standard deviations were calculated. The mean and increments of standard deviation were graphed against time to create case log growth charts, similar to pediatric growth charts. Results: Case log growth charts were made for each ACGME Otolaryngology Residency Review Committee key indicator procedure. Progress of an individual resident or of a cohort of residents may be graphed against this growth chart background over their training time. Conclusions: National operative case log growth charts allow residents and program directors to graphically assess progress in obtaining a sufficient variety and number of operative procedures over time throughout training. This can provide early identification when residents begin to fall below the growth curve during training. abstract_id: PUBMED:26497296 Is it time to confine Snellen charts to the annals of history? 
Purpose: A paper published by the author in 1988 in this journal provided some important findings about the lack of precision of visual acuity (VA) measures made with commonly used Snellen charts and the advantages of using letter charts designed using the principles proposed by Bailey and Lovie in 1976. That 1988 paper has been cited a number of times since, mostly supporting the findings. The purpose of this review is to examine the changes that have occurred in VA measurement in research and clinical practice since that earlier study. Recent Findings: While precise measures of VA using Bailey-Lovie or ETDRS charts are now commonly used in major, multi-centre research studies, it is disappointing to see that many research papers still report VA measured with Snellen charts and even use Snellen fractions, invalidly converted to logMAR notation, in parametric analyses of VA. Many studies have examined the test-retest variability (TRV) of VA measures in groups and individuals, but it is difficult to determine whether clinicians or researchers establish patients' individual TRVs to more accurately detect real changes in VA for each individual. Summary: This paper summarises the findings of the 1988 study: (1) Snellen charts and VA notations are not appropriate for accurate clinical and research measures of VA; (2) charts employing the Bailey-Lovie design principles should be used to provide precise measures of VA; and (3) test-retest variability should be used to determine the limit for detecting significant change in VA. This author suggests that it is time for Snellen charts, Snellen fractions and decimal notation to be confined to the teaching of the history of VA measurement. A request is also made to stop the use of the redundant term 'best corrected' VA (BCVA). Recommended procedures are given for precise measures of VA and accurately monitoring changes in VA in clinical practice and research. abstract_id: PUBMED:29368106 Field Testing of IAP2015 Charts. Objectives: To field test the IAP2015 (Indian Academy of Pediatrics) charts in different socioeconomic classes; to compare standard deviation scores (Z-scores) of children's height, weight and body mass index (BMI) for age as computed based on IAP2007 or WHO charts against the IAP2015 references. Methods: A cross-sectional observational study was conducted in the district of Pune on apparently healthy 5-18 y old children from urban affluent, urban underprivileged and rural areas. Anthropometric measurements were performed and parameters were converted into Z-scores. Results: Urban affluent children were tallest and heaviest whereas urban underprivileged children were shortest. Z-score comparison showed that all children were shortest on WHO references and taller as per IAP2007 references. BMI Z-score was higher on IAP2007 charts and lower on WHO references. The highest number of children within the reference range on IAP2015 charts was among the urban affluent. The most children were classified as stunted on WHO references and the fewest on IAP2007 references. WHO references overestimated wasting, and as per IAP2007 charts, fewer were wasted. Fewer children were classified as obese/overweight by WHO and IAP2007 references. Conclusions: IAP2015 charts are better representative of healthy children of the current generation and may be used to prevent inappropriate classification in the present scenario. abstract_id: PUBMED:35251898 A comparison of four fetal biometry growth charts within an Australian obstetric population.
Purpose: The aim of this study was to investigate the applicability of four existing fetal growth charts to a local tertiary hospital obstetric population. Method: Four existing fetal growth charts (the Raine study reference charts, INTERGROWTH-21st charts, World Health Organization (WHO) fetal growth study charts and Australasian Society for Ultrasound in Medicine (ASUM) endorsed Campbell Westerway charts) were compared using data from 11651 singleton pregnancy ultrasound scans at King Edward Memorial Hospital (KEMH). The 3rd, 10th, 50th, 90th and 97th percentile curves for abdominal circumference (AC) biometry for the KEMH data were calculated and the four primary correlation parameters from fitted 3rd order polynomials (a, b, c and d) were used to generate like-for-like comparisons for all charts. Results: The overall comparisons showed a significant variation with different growth charts, giving different percentiles for the same fetal AC measurement. INTERGROWTH-21st percentile curves tended to fall below those of other charts for AC measurements. Both the Raine Study charts and ASUM charts were the charts of closest overall fit to the local data. Conclusion: Our data show the Raine Study charts are the most appropriate for our population compared with the other three charts assessed, suggesting the 'one size fits all' model may not be appropriate. However, additional analysis of biometry measurements, primarily AC, is needed to address the deficiency of data at 14-18 weeks gestation which exists for the Raine Study data. A reasonable alternative may be to adopt the WHO charts with local calibration (including the 14-18 week gestation period). abstract_id: PUBMED:33914307 Polish growth charts for preterm infants - comparison with reference Fenton charts. Objectives: Proper infant classification, particularly a preterm infant, as small or large for gestational age, is crucial to undertake activities to improve postnatal outcomes. This study aimed to assess the usability of the Fenton preterm growth charts to evaluate the anthropometric parameters of Polish preterm neonates. Material And Methods: In this single-center, retrospective study data extracted from the medical documentation of preterm neonates born 2002-2013 were analyzed. Body weight, body length, and head circumference were evaluated and used to develop growth charts, which were compared with the reference Fenton growth charts. Results: This study included 3,205 preterm neonates, of whom 937 were born before 30 weeks of pregnancy. Overall, 11.04%, 3.3%, and 5.2% of neonates were below the 10th percentile on the Fenton charts for birth weight, body length, and head circumference, respectively. Only 26 (6.67%) of 390 analyzed anthropological parameters differed significantly between the study and the Fenton groups. Statistically significant differences between the study and the Fenton populations were found only in body length for both sexes, and in head circumference for female neonates. Conclusions: The growth charts developed in this study for a population of Polish preterm neonates corresponded to the Fenton charts in terms of birth weight but differed in terms of body length and head circumference. Our findings suggest the need to evaluate growth charts for Polish preterm newborns. abstract_id: PUBMED:38386583 ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language.
The use of natural language interfaces (NLIs) to create charts is becoming increasingly popular due to the intuitiveness of natural language interactions. One key challenge in this approach is to accurately capture user intents and transform them into proper chart specifications. This obstructs the wide use of NLI in chart generation, as users' natural language inputs are generally abstract (i.e., ambiguous or under-specified), without a clear specification of visual encodings. Recently, pre-trained large language models (LLMs) have exhibited superior performance in understanding and generating natural language, demonstrating great potential for downstream tasks. Inspired by this major trend, we propose ChartGPT, generating charts from abstract natural language inputs. However, LLMs struggle to address complex logic problems. To enable the model to accurately specify the complex parameters and perform operations in chart generation, we decompose the generation process into a step-by-step reasoning pipeline, so that the model only needs to reason about a single, specific sub-task during each run. Moreover, LLMs are pre-trained on general datasets, which might be biased for the task of chart generation. To provide adequate visualization knowledge, we create a dataset consisting of abstract utterances and charts and improve model performance through fine-tuning. We further design an interactive interface for ChartGPT that allows users to check and modify the intermediate outputs of each step. The effectiveness of the proposed system is evaluated through quantitative evaluations and a user study. abstract_id: PUBMED:32396901 A Comparison of UK Preterm Anthropometric Charts and INTERGROWTH-21st: Is It Time to Change Growth Charts? Background: Current practice in the UK is to plot premature infant anthropometric measurements on Neonatal and Infant Close Monitoring (NICM) reference charts. These charts have several known limitations. The INTERGROWTH-21st Project has recently produced international ante- and postnatal growth standards. Exact knowledge of growth centiles allows clinicians to accurately assess infant nutritional requirements. Objective: To compare target centile measurements between INTERGROWTH-21st and UK NICM growth charts for premature infants. Method: Anthropometric measurements (weight and head circumference) of a convenience sample of neonates born between 24 and 32 weeks of gestation were analysed retrospectively. Measurements were collected across three time points and plotted on both the NICM and INTERGROWTH-21st growth charts. The respective centiles were compared and analysed by paired-sample t test, Wilcoxon rank test analysis, and multilevel mixed-effect linear regression models. Results: Centiles for weight and head circumference measurements of 96 infants plotted on INTERGROWTH-21st charts were significantly greater than their corresponding UK charts at all three time points. For weight, the average difference between the two charts varied from 9.1 to 16.4 centiles. The difference between the two charts was greater for female than male infants by up to 6.9 centiles (95% CI 10.1-3.8). Conclusion: Existing UK NICM reference charts are significantly different to the growth standards of INTERGROWTH-21st.
The choice of which growth chart to adopt in the UK could have important consequences for premature infants' future adult health and therefore requires further prospective observational studies with larger data sets including length measurements and more comprehensive population characteristics. abstract_id: PUBMED:34660242 Comparison of Nutritional Status of Under-Five Indian Children (NFHS 4 Data) Using WHO 2006 Charts and 2019 Indian Synthetic Charts. Introduction: There is a growing body of evidence against using World Health Organization (WHO) charts for developing nations. Our objectives were: 1) To compare nutritional status of <5-year-old Indian children using WHO charts and synthetic Indian charts (SC) 2019. 2) To study nutritional status across wealth index categories. 3) To study nutritional status of predominantly breast-fed infants <6 months of age using both charts. Materials And Methods: Data from 4th National Family Health Survey (n ~ 236117, 0-59-month-old children) were used for assessing nutritional status using the WHO charts and SC. Z-scores were calculated for length/height, weight, and weight-for-height (WAZ) using both charts. Children were classified into degrees of malnutrition using appropriate cutoffs. Results: Stunting, wasting, and underweight were significantly higher using WHO charts. The prevalence of stunting (height for age) and wasting (WHZ) changed from high to medium and critical to poor when the reference changed from WHO to SC. All Z-scores showed an improving trend with increasing wealth index. On SC, almost all WHZ (wasting) from the richest to poorer were >-0.5 (clinically significant), whereas on WHO charts all wealth classes had WHZ <-0.5. For children under the age of 6 months, WHZ from richest to poorest was between -0.97 and -0.89 by WHO and 0.27 and 0.38 by SC. Conclusions: Use of Indian synthetic growth charts for growth monitoring of under-five children may be more appropriate; infants under 6 months and children from well-off families performed well on these charts. abstract_id: PUBMED:33631905 The relevance and irrelevance of charts in clinical research. Charts are a visual aid that is used in articles in order to highlight the results of an investigation. They illustrate the results with the aim of making them clearer. Charts, just like statistical tests, are selected based on the objective of the study, the types of variable, and the statistical analyses to be illustrated. Some of the most commonly used charts in clinical practice are frequency histograms, which illustrate qualitative variables or frequencies; error charts, which are used for normally distributed quantitative variables; box plots or violin plots are used for distribution-free quantitative variables, and survival curves are for variables that include the person-time variable. The aforementioned charts can be used to illustrate the comparisons between maneuvers and outcome depending on the type of variable that is being analyzed. When two groups are compared and the dependent variable is dichotomous, forest plots are used; for multivariate models, the chart depends on the type of analysis. For logistic regression and linear regression, tree diagrams and scatter plots are used, respectively. Survival plots are used for Cox proportional hazards. Although charts can be very useful, if they are misused, they can show differences where there are none, which leads to a misinterpretation of the studies.
In this article, we will use examples to complement the topics that were previously addressed in the articles of this series. Answer: Residents can accurately abstract their own charts to a certain extent. A study conducted in 2005 asked 74 residents to review their own charts (n = 392) after receiving brief instruction on how to locate data in the medical record and how to use a data abstraction form. Trained abstractors then re-reviewed these charts to assess the performance of preventive health care measures. The results showed that overall resident-measured performance was similar (within 5%) to that of the trained abstractor for five of six measures in medicine and four of six in pediatrics. The sensitivity of resident-measured performance ranged from 100% to 15% and specificity from 100% to 33% compared with the trained abstractors. Importantly, residents did not overestimate their performance relative to the trained abstractor record review, and most residents' (81%) relative performance rankings did not change when the basis for the ranking was resident-measured versus trained abstractor-measured. These findings suggest that residents' self-abstraction can be an alternative to costly trained abstractors, but the appropriate use of these data should be carefully considered, acknowledging the limitations (PUBMED:19240454).
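A short technical aside on the arithmetic behind this answer: the sensitivity and specificity figures come from cross-tabulating each resident's self-abstraction against the trained-abstractor gold standard in a 2x2 agreement table. The sketch below is a minimal Python illustration with invented counts, not the actual data from PUBMED:19240454.

```python
# Sensitivity and specificity of resident self-review, with the trained
# abstractor as the gold standard. All counts are hypothetical.
def agreement_stats(pairs):
    """pairs: (resident_says_done, abstractor_says_done) per chart."""
    tp = sum(1 for r, g in pairs if r and g)          # both record "done"
    tn = sum(1 for r, g in pairs if not r and not g)  # both record "not done"
    fp = sum(1 for r, g in pairs if r and not g)      # resident over-credits
    fn = sum(1 for r, g in pairs if not r and g)      # resident misses it
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "raw_agreement": (tp + tn) / len(pairs),
    }

# One preventive-care measure across 40 hypothetical charts.
reviews = ([(True, True)] * 22 + [(False, False)] * 12
           + [(True, False)] * 2 + [(False, True)] * 4)
print(agreement_stats(reviews))
```

With these toy counts the resident reaches 84.6% sensitivity, 85.7% specificity and 85% raw agreement; the same tabulation, repeated per measure, yields the per-measure ranges reported in the study.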
Instruction: Are early cannulation arteriovenous grafts (ecAVG) a viable alternative to tunnelled central venous catheters (TCVCs)? Abstracts: abstract_id: PUBMED:27032450 Are early cannulation arteriovenous grafts (ecAVG) a viable alternative to tunnelled central venous catheters (TCVCs)? An observational "virtual study" and budget impact analysis. Background: Early cannulation arteriovenous grafts (ecAVGs) are advocated as an alternative to tunnelled central venous catheters (TCVCs). A real-time observational "virtual study" and budget impact model was performed to evaluate a strategy of ecAVG as a replacement to TCVC as a bridge to definitive access creation. Methodology: Data on complications and access-related bed days was collected prospectively for all TCVCs inserted over a six-month period (n = 101). The feasibility and acceptability of an alternative strategy (ecAVGs) was also evaluated. A budget impact model comparing the two strategies was performed. Autologous access in the form of native fistula was the goal wherever possible. Results: We found 34.7% (n = 35) of TCVCs developed significant complications (including 17 culture-proven bacteraemia and one death from line sepsis). Patients spent an average of 11.9 days/patient/year in hospital as a result of access-related complications. The wait for TCVC insertion delayed discharge in 35 patients (median: 6 days). The ecAVGs were a practical and acceptable alternative to TCVCs in over 80% of patients. Over a 6-month period, total treatment costs per patient were GBP5882 in the TCVC strategy and GBP4954 in the ecAVG strategy, delivering potential savings of GBP927 per patient. The ecAVGs had higher procedure and re-intervention costs (GBP3014 vs. GBP1836); however, these were offset by significant reductions in septicaemia treatment costs (GBP1322 vs. GBP2176) and in-patient waiting time bed costs (GBP619 vs. GBP1870). Conclusions: Adopting ecAVGs as an alternative to TCVCs in patients requiring immediate access for haemodialysis may provide better individual patient care and deliver cost savings to the hospital. abstract_id: PUBMED:25885054 Immediate access arteriovenous grafts versus tunnelled central venous catheters: study protocol for a randomised controlled trial. Background: Autologous arteriovenous fistulae (AVF) are the optimal form of vascular access for haemodialysis. AVFs typically require 6 to 8 weeks to "mature" from the time of surgery before they can be cannulated. Patients with end-stage renal disease needing urgent vascular access therefore traditionally require insertion of a tunnelled central venous catheter (TCVC). TCVCs are associated with high infection rates and central venous stenosis. Early cannulation synthetic arteriovenous grafts (ecAVG) provide a novel alternative to TCVCs, permitting rapid access to the bloodstream and immediate needling for haemodialysis. Published rates of infection in small series are low. The aim of this study is to compare whether TCVC ± AVF or ecAVG ± AVF provide a better strategy for managing patients requiring immediate vascular access for haemodialysis. Methods/design: This is a prospective randomised controlled trial comparing the strategy of TCVC ± AVF to ecAVG ± AVF. Patients requiring urgent vascular access will receive a study information sheet and written consent will be obtained. Patients will be randomised to receive either: (i) TCVC (and native AVF if this is anatomically possible) or (ii) ecAVG (± AVF). 118 patients will be recruited.
The primary outcome is systemic bacteraemia at 6 months. Secondary outcomes include culture-proven bacteraemia rates at 1 year and 2 years; primary and secondary patency rates at 3, 6, 12 and 24 months; stenoses; re-intervention rates; re-admission rate; mortality and quality of life. Additionally, treatment delays, impact on service provision and cost-effectiveness will be evaluated. Discussion: This is the first randomised controlled trial comparing TCVC to ecAVG for patients requiring urgent vascular access for haemodialysis. The complications of TCVC are considered an unfortunate necessity in patients requiring urgent haemodialysis who do not have autologous vascular access. If this study demonstrates that ecAVGs provide a safe and practical alternative to TCVC, this could instigate a paradigm shift in nephrology thinking and access planning. Trial Registration: This study has been approved by the West of Scotland Research Ethics Committee 4 (reference no. 13/WS/0087, 28 August 2013) and is registered with the International Standard Randomised Controlled Trial Number Register (reference no. ISRCTN80588541, 27 May 2014). abstract_id: PUBMED:28236919 A randomized controlled trial and cost-effectiveness analysis of early cannulation arteriovenous grafts versus tunneled central venous catheters in patients requiring urgent vascular access for hemodialysis. Objective: Early cannulation arteriovenous grafts (ecAVGs) are proposed as an alternative to tunneled central venous catheters (TCVCs) in patients requiring immediate vascular access for hemodialysis (HD). We compared bacteremia rates in patients treated with ecAVG and TCVC. Methods: The study randomized 121 adult patients requiring urgent vascular access for HD in a 1:1 fashion to receive an ecAVG with or without (+/-) an arteriovenous fistula (AVF; n = 60) or TCVC+/-AVF (n = 61). Patients were excluded if they had active systemic sepsis, no anatomically suitable vessels, or an anticipated life expectancy <3 months. The primary end point was the culture-proven bacteremia rate at 6 months, with the trial powered to detect a reduction in bacteremia from 24% to 5% (α = .05, β = .8). Secondary end points included thrombosis, reintervention, and mortality. A cost-effectiveness analysis was also performed. Results: Culture-proven bacteremia developed in 10 patients (16.4%) in the TCVC arm ≤6 months compared with two (3.3%) in the ecAVG+/-AVF arm (risk ratio, 0.2; 95% confidence interval, 0.12-0.56; P = .02). Mortality was also higher in the TCVC+/-AVF cohort (16% [n = 10] vs 5% [n = 3]; risk ratio, 0.3; 95% CI, 0.08-0.45; P = .04). The difference in treatment cost between the two arms was not significant (£11,393 vs £9692; P = .24). Conclusions: Compared with TCVC+/-AVF, a strategy of ecAVG+/-AVF reduced the rate of culture-proven bacteremia and mortality in patients requiring urgent vascular access for HD. The strategy also proved to be cost-neutral. abstract_id: PUBMED:34031434 Patient characteristics predict patency of early-cannulation arteriovenous grafts. There is a new emphasis on tailoring appropriate vascular access for hemodialysis to patients and their life-plans, but there is little known about the optimal use of newer devices such as early-cannulation arteriovenous grafts (ecAVG), with studies utilising them in a wide variety of situations. The aim of this study was to determine if the outcome of ecAVG can be predicted by patient characteristics known pre-operatively.
This retrospective analysis of 278 consecutive ecAVGs with a minimum one-year follow-up correlated functional patency with demographic data, renal history, renal replacement and vascular access history. On univariate analysis, aetiology of renal disease, indication for an ecAVG, the number of previous tunnelled central venous catheters (TCVC) prior to insertion of an ecAVG, peripheral vascular disease, and BMI were significantly associated with functional patency. On multivariate analysis, the number of previous TCVC, the presence of peripheral vascular disease and indication were independently associated with outcome after allowing for age, sex and BMI. When selecting for vascular access, understanding the clinical circumstances such as indication and previous vascular access can identify patients with differing outcomes. Importantly, strategies that result in TCVC exposure have an independent and cumulative association with decreasing long-term patency for subsequent ecAVG. As such, TCVC exposure is best avoided or minimised, particularly when ecAVG can be considered. abstract_id: PUBMED:33349144 Conventional central venous catheters as tunnelled mid-clavicular midline catheters: Description of novel application and outcome analysis. Background: Long peripheral catheters are the ideal devices for intermediate venous access (1-4 weeks). However, due to non-availability and cost constraints, these lines are not widely adopted in developing countries. In this clinical report we describe a technique of using conventional Central Venous Catheters as Midclavicular midlines for intermediate venous access and present the outcome analysis of such catheters in the last 2 years from our institute. Methods: A single lumen conventional central venous catheter (5 Fr, 20 cm) was introduced through the deep veins of the upper arm, 1 to 1.5 cm distal to the axillary crease. The catheter was tunnelled for a distance of 5 to 7 cm to exit from Dawson's green zone to reduce the incidence of Midline Associated Blood Stream Infection. Patients were followed up by trained staff nurses and outcome parameters were recorded.
Eight patients (75% men; mean age 37 ± 10 years) with End-Stage Renal Disease (ESRD) underwent axillo-iliac AVG creation with Flixene™ grafts; all had exhausted peripheral veins, occluded thoracic central veins, and inadequate femoral veins. Inflow from the axillary artery and outflow in the iliocaval system were assessed prior to access creation. An axillary-to-common iliac AVG was constructed using a 6 mm EC graft and tunneled in the chest and abdominal wall. Eight grafts were implanted; all were patent after placement. Seven (88%) were successfully used for hemodialysis within 72 h and one (12%) within 96 h. During the mean follow-up of 6 months, 5 (62%) patients underwent thrombectomy, 1 (12%) of them had balloon angioplasty at the vein anastomosis, and 2 (25%) grafts were removed secondary to infection. The remaining grafts are still functioning. Complications such as high-output heart failure, steal syndrome and venous hypertension were not observed. Construction of axillo-iliac AVG with EC grafts, in the setting of exhausted veins, occluded intrathoracic central veins and hostile groins, is a viable arteriovenous access alternative while avoiding central venous catheters. abstract_id: PUBMED:33819582 Early-Cannulation Arteriovenous Grafts Are Safe and Effective in Avoiding Recurrent Tunneled Central Catheter Infection. Objectives: Tunneled central venous catheter infection (TCVCi) is a common complication that often necessitates removal of the TCVC and replacement by a further TCVC. Theoretically, insertion of an early-cannulation graft (ecAVG) early after TCVC infection is possible but not widely practiced with concerns over safety and infection in the ecAVG. With 8 years of ecAVG experience, the aim of this study was to compare the outcomes following TCVC infection, comparing replacement with TCVC (TCVCr) versus immediate ecAVG (ecAVGr). Design: Retrospective comparison of 2 cohorts, who underwent replacement of an infected TCVC either by an early cannulation graft (n = 18) or by a further central catheter (n = 39). Methods: Data were abstracted from a prospectively completed electronic patient record and collected on patient demographics, TCVC insertion, duration and infection, including culture-proven bacteraemia and subsequent access interventions. Results: Eighteen of 299 patients identified from 2012 to 2020 had an ecAVG implanted as treatment for a TCVCi. In a 1-year time period (January 1, 2015-December 31, 2015), out of 222 TCVCs inserted, 39 were replacements following a TCVCi. No patient with an ecAVGr developed an immediate infection, nor complication from the procedure. Subsequent vascular access infection was significantly more frequent for those with a TCVCr than with an ecAVGr (0.6 vs. 0.1/patient/1000 HD days, P < 0.000). The number of further TCVC required was significantly higher in the TCVCr group (7.1 vs. 0.4/patient/1000 HD days, P = 0.000). Conclusions: An ecAVG early following a TCVC infection is safe, reduces the incidence of subsequent infectious complications and reduces the number of TCVC required, with a better functional patency. abstract_id: PUBMED:25751550 Is there an association between central vein stenosis and line infection in patients with tunnelled central venous catheters (TCVCs)? Purpose: Central vein stenosis (CVS) and line infection are well-recognized complications of tunnelled central venous catheters (TCVCs) in patients on haemodialysis. The aim of this study was to evaluate any relationship between CVS and line infection.
Methods: Analysis of 500 consecutive patients undergoing TCVC insertion was undertaken. Data were collected on patient demographics, details of line insertion and duration, culture-proven bacteraemia and presence of symptomatic CVS. Logistic regression analysis was used to determine risk factors for CVS and bacteraemia. Results: Mean patient age was 59.0 years (range: 17-93). Mean number of catheter days was 961.1 ± 57.6 per TCVC; 39.4% of TCVCs were associated with culture-proven bacteraemia and 23.6% developed symptomatic CVS. Bacteraemia and CVS were inevitable complications of all TCVCs. The time to symptomatic CVS was longer in patients with bacteraemia than without (1230.91 ± 101.29 vs. 677.49 ± 61.59 days, p<0.001). Patients who had early infection within 90 days of TCVC insertion were less likely to develop CVS (5.9% vs. 22.8%, p<0.001). There was no difference in the bacteraemia rate per 1,000 catheter days between patients with and without CVS (2.62 ± 1.41 vs. 2.35 ± 0.51; p = 0.98). Number of line days (odds ratio (OR) 1.02, p = 0.003), age (OR 1.04, p = 0.04) and culture-proven line infection (OR 0.59, p = 0.014) were all independently associated with CVS. Conclusions: Our results suggest that early line infection may be protective against CVS. Alternatively, there may be two distinct predisposition states for CVS and line infection. Further studies are needed to confirm our association and investigate causation. abstract_id: PUBMED:29344711 Adult 'PICC' Device May be Used as a Tunnelled Central Venous Catheter in Children. Purpose: Central venous access in children, in particular small children and infants, is challenging. We have developed a technique employing adult peripherally inserted central venous catheters (PICCs) as tunnelled central venous catheters (TCVCs) in children. The principal advantage of this novel technique is that the removal technique is less complex than that of conventional cuffed TCVCs. The catheter can be removed simply by being pulled out and does not require general anaesthesia. The purpose of this study is to determine the success, safety and utility of this technique and to identify the rate of late complications. We describe the 6-year experience in our unit. Materials And Methods: Electronic and paper medical records were reviewed for consecutive paediatric patients who had a PICC device inserted as a TCVC over a 6-year period (September 2009 through July 2015). The following data were recorded: patient demographics, setting for PICC as TCVC insertion, use of ultrasound and fluoroscopy, PICC device type, early or late complications and date of and reason for removal. Results: Twenty-one PICCs were inserted as TCVCs in 19 children, all aged less than 10 years. Mean patient age at the time of placement was 3.7 years. Average patient weight was 15.7 kg. All insertions were successful with no significant immediate complications recorded. The most common indication for insertion in our patient sample was pseudo-obstruction secondary to gastrointestinal dysmotility disorder (24%), with cystic fibrosis infective exacerbation being the second most frequent diagnosis (14%). Suspected catheter-related infection led to early device removal in one case (4.8%). Inadvertent dislodgement occurred in one case (4.8%). Nineteen of the 21 devices (90.4%) lasted for the total intended duration of use. Conclusion: Using a PICC device as a TCVC in small children appears to be a safe technique, with an acceptable complication profile.
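An editorial aside on the normalization used in several of the abstracts above (events "per 1,000 catheter days" or "per 1,000 HD days"): the conversion is simple exposure-adjusted arithmetic, and two groups are typically compared with an incidence-rate ratio. The Python sketch below uses invented counts, not data from the cited studies; the confidence interval uses the standard Wald approximation on the log scale.

```python
import math

# Event rates per 1,000 catheter-days, the normalization used in the
# abstracts above. All counts below are hypothetical, not study data.
def rate_per_1000_days(events: int, days: float) -> float:
    return 1000.0 * events / days

def irr_with_ci(e1: int, d1: float, e2: int, d2: float, z: float = 1.96):
    """Incidence-rate ratio of group 1 vs group 2 with an approximate
    95% CI: SE(log IRR) = sqrt(1/e1 + 1/e2) (Wald method, log scale)."""
    irr = (e1 / d1) / (e2 / d2)
    se = math.sqrt(1.0 / e1 + 1.0 / e2)
    return irr, irr * math.exp(-z * se), irr * math.exp(z * se)

# Hypothetical cohorts: bacteraemia events over total exposure time.
print(f"TCVC:  {rate_per_1000_days(14, 5600):.2f} events/1000 days")
print(f"ecAVG: {rate_per_1000_days(3, 6100):.2f} events/1000 days")
irr, lo, hi = irr_with_ci(14, 5600, 3, 6100)
print(f"IRR (TCVC vs ecAVG): {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```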
abstract_id: PUBMED:22911269 Central venous catheters as access for acute and long-term dialysis. Central venous dialysis catheters are indispensable as a rapid large-lumen access to the blood compartment. If such a central venous catheter is necessary for longer than 2-3 weeks, it is better to implant a tunnelled cuffed catheter initially or to switch early from the non-tunnelled acute catheter to a tunnelled cuffed catheter. Tunnelled cuffed catheters can be used for many weeks or even years and the complication rate is less than that of non-tunnelled acute catheters. The proportion of dialysis patients with long-term dialysis using tunnelled cuffed catheters has increased rapidly in recent years and now stands at approximately 20% in Germany. These catheters are, however, prone to more infectious complications and more thromboses than native arteriovenous fistulas or prosthetic shunts. The mortality of patients with long-term dialysis catheters is also higher than that of patients with arteriovenous shunts. For these reasons, central venous catheters will always be regarded as the third-choice dialysis access when arteriovenous fistulas are not possible. Catheters are available in a wide variety of designs, but the individual advantages are still unclear. In order to avoid short-term and long-term complications, a variety of measures for implantation and use during dialysis treatment have been developed which make their use safer. Answer: Early cannulation arteriovenous grafts (ecAVGs) appear to be a viable alternative to tunnelled central venous catheters (TCVCs) for patients requiring immediate access for hemodialysis. Several studies and analyses have indicated that ecAVGs may offer advantages over TCVCs in terms of complications, cost, and patient outcomes. One observational "virtual study" and budget impact analysis found that ecAVGs were a practical and acceptable alternative to TCVCs in over 80% of patients. The study reported that ecAVGs had higher procedure and re-intervention costs but these were offset by significant reductions in septicaemia treatment costs and in-patient waiting time bed costs, resulting in potential savings per patient (PUBMED:27032450). A randomized controlled trial aimed to compare whether TCVC ± AVF or ecAVG ± AVF provide a better strategy for managing patients requiring immediate vascular access for hemodialysis. The primary outcome was systemic bacteraemia at 6 months, with secondary outcomes including bacteraemia rates at longer terms, patency rates, stenoses, re-intervention rates, re-admission rate, mortality, and quality of life (PUBMED:25885054). Another randomized controlled trial and cost-effectiveness analysis showed that ecAVGs reduced the rate of culture-proven bacteremia and mortality compared to TCVCs, and the strategy proved to be cost-neutral (PUBMED:28236919). A retrospective analysis of 278 consecutive ecAVGs indicated that patient characteristics such as the number of previous TCVCs, the presence of peripheral vascular disease, and the indication for an ecAVG were independently associated with outcomes. The study suggested that strategies resulting in TCVC exposure have an independent and cumulative association with decreasing long-term patency for subsequent ecAVGs, implying that TCVC exposure should be minimized (PUBMED:34031434).
Furthermore, a study comparing outcomes following TCVC infection found that an ecAVG early following a TCVC infection is safe, reduces the incidence of subsequent infectious complications, and reduces the number of TCVCs required, with better functional patency (PUBMED:33819582). In conclusion, the evidence suggests that ecAVGs are a viable and potentially advantageous alternative to TCVCs for patients requiring immediate vascular access for hemodialysis, with benefits including lower infection rates, cost savings, and better patient outcomes.
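The 6-month cost comparison cited in this answer can be re-assembled arithmetically from the itemized per-patient figures that PUBMED:27032450 reports (procedure/re-intervention, septicaemia treatment, and in-patient waiting-time bed costs). The sketch below only re-adds those published components; note that the ecAVG components sum to GBP4955 against the published total of GBP4954, presumably rounding in the source, while the itemized difference exactly reproduces the reported GBP927 saving.

```python
# Re-adding the itemized 6-month per-patient costs reported in
# PUBMED:27032450 (all figures GBP, copied from the abstract above).
costs = {
    "TCVC":  {"procedures_and_reintervention": 1836,
              "septicaemia_treatment": 2176,
              "inpatient_waiting_bed_days": 1870},   # total reported: 5882
    "ecAVG": {"procedures_and_reintervention": 3014,
              "septicaemia_treatment": 1322,
              "inpatient_waiting_bed_days": 619},    # total reported: 4954
}

totals = {strategy: sum(parts.values()) for strategy, parts in costs.items()}
for strategy, total in totals.items():
    print(f"{strategy}: GBP {total} per patient over 6 months")

# Difference favouring ecAVG; the abstract reports GBP927 per patient.
print(f"saving with ecAVG: GBP {totals['TCVC'] - totals['ecAVG']}")
```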
Instruction: Liver transplantation for familial amyloid polyneuropathy non-VAL30MET variants: are cardiac complications influenced by prophylactic pacing and immunosuppressive weaning? Abstracts: abstract_id: PUBMED:15964382 Liver transplantation for familial amyloid polyneuropathy non-VAL30MET variants: are cardiac complications influenced by prophylactic pacing and immunosuppressive weaning? Background: Cardiac complications represent a cause of morbidity and mortality after liver transplantation among patients with familial amyloid polyneuropathy (FAP), especially for the non-VAL30MET variant types. Methods: We retrospectively evaluated 11 recipients from a nonendemic area, including 90.9% affected by FAP variants. Preoperative cardiovascular symptoms were present in 81% of patients. An intraoperative pacemaker was placed prophylactically in 90.9% of all recipients. Since tacrolimus has been reported in the international literature to display cardiac toxicity, we evaluated the influence of intraoperative prophylactic pacing and rapid postoperative weaning from tacrolimus, mainly allowed by thymoglobulin, on the occurrence of posttransplantation cardiac complications. Results: One patient received a combined heart-liver transplant and another a living donor liver transplantation. We did not observe any significant intraoperative cardiac complications. Postoperatively, the pacemaker was removed from all patients but 1. Five patients received tacrolimus and steroids; a subsequent, second group of 6 patients (54.5%) was treated with thymoglobulin followed by tacrolimus. At discharge, the mean tacrolimus level was 10.6 ng/mL, whereas after 1 month it was 7.5 ng/mL. We observed a case of acute cellular rejection before discharge, which was successfully treated with intravenous steroids and OKT3. After a mean follow-up of 17.4 months (range, 1-31), 2 patients had died (18.1%): 1 due to sepsis and another to MI. Two recipients experienced cardiac complications (18.1%), namely, the patient who died due to a myocardial infarction and a second one with a tachyarrhythmia, which was treated successfully with beta-blockers and amiodarone. Conclusion: Prophylactic pacing and rapid weaning from immunosuppression are still associated with a significant rate of postoperative cardiac complications. abstract_id: PUBMED:21280184 Continuous development of arrhythmia is observed in Swedish transplant patients with familial amyloidotic polyneuropathy (amyloidogenic transthyretin Val30Met variant). In patients with familial amyloidotic polyneuropathy (FAP), heart complications are prognostic factors for mortality and morbidity after liver transplantation (LT). However, only a few studies have analyzed the development of arrhythmia in transplant patients with FAP. We investigated the development of arrhythmia requiring pacemaker insertion (PMI) in Swedish transplant patients with FAP, and we related the findings to gender, age at disease onset, and survival. One hundred four transplant patients with the amyloidogenic transthyretin Val30Met mutation were included in the study. Twenty-six (25%) received a pacemaker during the observation period (a median of 11 years after disease onset). This frequency was comparable to that noted in a previous study describing the natural course of FAP. No significant differences in PMI between early-onset cases (<50 years old) and late-onset cases (≥50 years old) or between genders were observed. PMI was not significantly related to patient survival.
Our study confirms our previously reported short-term observation: LT does not prevent the development of heart arrhythmia necessitating PMI. The development of arrhythmia is unrelated to gender or age at disease onset, and the yearly risk does not appear to decrease with time after LT. abstract_id: PUBMED:20209591 Liver transplantation and combined liver-heart transplantation in patients with familial amyloid polyneuropathy: a single-center experience. Liver transplantation (LT) is the only curative option for patients with familial amyloid polyneuropathy (FAP) at present. Twenty patients with FAP underwent LT between May 1998 and June 2007. Transthyretin mutations included predominantly the Val30Met mutation but also 10 other mutations. Seven patients received a pacemaker prior to LT, and because of impairment of mechanical cardiac function, 4 combined heart-liver transplants were performed, 1 simultaneously and 3 sequentially. The first patient, who underwent simultaneous transplantation, died. Seven patients died after LT, with 5 dying within the first year after transplantation. The causes of death were cardiac complications (4 patients), infections (2 patients), and malnutrition (1 patient). One-year survival was 75.0%, and 5-year survival was 64.2%. Gly47Glu and Leu12Pro mutations showed an aggressive clinical manifestation: 2 patients with the Gly47Glu mutation, the youngest patients of all the non-Val30Met patients, suffered from severe cardiac symptoms leading to death despite LT. Two siblings with the Leu12Pro mutation, who presented only with grand mal seizures, died after LT because of sepsis. In conclusion, the clinical course in patients with FAP is very variable. Cardiac symptoms occurred predominantly in patients with non-Val30Met mutations and prompted combined heart-liver transplantation in 4 patients. Although early LT in Val30Met is indicated in order to halt the typical symptoms of polyneuropathy, additional complications occurring predominantly with other mutations may prevail and lead to life-threatening complications or a fatal outcome. Combined heart-liver transplantation should be considered in patients with restrictive cardiomyopathy. abstract_id: PUBMED:32705768 Transthyretin amyloidosis with cardiomyopathy after domino liver transplantation: Results of a cross-sectional study. Domino liver transplantation (DLT) has been used widely in patients with hereditary amyloid transthyretin (ATTR) amyloidosis. New-onset polyneuropathy in recipients of DLT has been reported, but few cases of cardiac involvement have been reported. We aimed to perform a cross-sectional study for ATTR amyloidosis with cardiomyopathy (ATTR-CM) in DLT recipients. We evaluated 23 living DLT recipients a median of 9 years after DLT at 2 referral centers with a systematic cardiac evaluation, including bone scintigraphy. Median age was 72 years, 91% had hypertension, 35% had diabetes mellitus, 67% had chronic renal failure, and 8 patients (35%) developed new-onset polyneuropathy. Only 13% had a normal electrocardiogram and normal echocardiography, and most of them showed some conduction disturbance or increase in left ventricular wall thickness, but only 1 patient with a Glu89Lys mutation developed ATTR-CM diagnosed by bone scintigraphy and endomyocardial biopsy. None of the recipients of a DLT with Val30Met mutation showed cardiac involvement by bone scintigraphy. In conclusion, DLT from Val30Met donors seems to be safe regarding the development of ATTR-CM.
Evaluation of cardiomyopathy in DLT recipients is challenging due to concomitant comorbidities, and in this context bone scintigraphy can be helpful to evaluate ATTR-CM. abstract_id: PUBMED:23993291 Severe heart disease in an unusual case of familial amyloid polyneuropathy type I. Familial amyloid polyneuropathy type I (FAP type I) is a rare hereditary systemic amyloidosis caused by the Val30Met mutation in the transthyretin (TTR) gene. The clinical onset and spectrum are variable and depend on phenotypic heterogeneity. Cardiac complications (dysrhythmias and conduction disturbances, cardiomyopathy and dysautonomia) indicate a poor prognosis, even after liver transplantation. We report an atypical case of FAP type I, highlighting the severe cardiac involvement and its complications. Early diagnosis of amyloid heart disease is increasingly important in the context of several clinical trials of promising new and experimental drugs. abstract_id: PUBMED:14640045 Impact of liver transplantation on familial amyloidotic polyneuropathy (FAP) patients' symptoms and complications. Liver transplantation for transthyretin (TTR) amyloidosis is today an accepted treatment for the disease. However, the long-term outcome of the procedure remains to be elucidated, especially for non-Val30Met TTR mutations. Using serum amyloid P (SAP) component scintigraphy, amyloid deposits have been shown to decrease in approximately 50% of the patients, but this finding has not been confirmed in biopsy specimens. However, free radical activity has been reduced after transplantation. Conflicting results regarding neurological disturbances have emerged, but some improvement may take place, especially in patients transplanted at an early stage of the disease. For heart complications, recent reports indicate that increased cardiomyopathy after transplantation is a common finding, and that the development of arrhythmia is also common. Even though an improvement of the patients' nutritional status is observed after transplantation, gastrointestinal disturbances appear to be unchanged after the procedure. Kidney function deteriorates initially after transplantation, but thereafter it remains stable. Since TTR is produced locally in the eye, development of amyloid deposits in the eye has been reported in several cases after transplantation. Transplantation for FAP should be performed early after onset of the disease, since the majority of symptoms will remain unchanged after the procedure. The patients should be followed closely after transplantation to detect the development of heart arrhythmia. abstract_id: PUBMED:30295933 Hereditary transthyretin-related amyloidosis. Hereditary transthyretin (TTR)-related amyloidosis (ATTRm amyloidosis) is an endemic/non-endemic, autosomal-dominant, early- and late-onset, rare, progressive disorder, predominantly manifesting as length-dependent, small fiber dominant, axonal polyneuropathy and frequently associated with cardiac disorders and other multisystem diseases. ATTRm amyloidosis is due to variants in the TTR gene, with the substitution Val30Met as the most frequent mutation. TTR mutations lead to destabilization and dissociation of TTR tetramers into variant TTR monomers, and formation of amyloid fibrils, which are consecutively deposited extracellularly in various tissues, such as nerves, heart, brain, eyes, intestines, kidneys, or the skin. Neuropathy may not only include large nerve fibers but also small fibers, and not only sensory and motor fibers but also autonomic fibers.
Types of TTR variants, age at onset, penetrance, and clinical presentation vary between geographical areas. Suggestive of ATTRm amyloidosis are a sensorimotor polyneuropathy, positive family history, autonomic dysfunction, cardiomyopathy, carpal tunnel syndrome, unexplained weight loss, and resistance to immunotherapy. If only sensory A-delta or C fibers are affected, small fiber neuropathy ensues. Diagnostic tests for small fiber neuropathy include determination of intraepidermal nerve fiber density, laser-evoked potentials, heat- and cold-detection thresholds, and measurement of the electrochemical skin conductance. Therapy currently relies on liver transplantation and TTR-stabilizers (tafamidis, diflunisal). abstract_id: PUBMED:27838303 Cardiac Dysautonomia Predicts Long-Term Survival in Hereditary Transthyretin Amyloidosis After Liver Transplantation. Objectives: This study sought to compare techniques evaluating cardiac dysautonomia and predicting the risk of death of patients with hereditary transthyretin amyloidosis (mATTR) after liver transplantation (LT). Background: mATTR is a multisystemic disease involving mainly the heart and the peripheral nervous system. LT is the reference treatment, and pre-operative detection of high-risk patients is critical. Cardiovascular dysautonomia is commonly encountered in ATTR and may affect patient outcome, although it is not yet known which technique should be used in the field to evaluate it. Methods: In a series of 215 consecutive mATTR patients who underwent LT, cardiac dysautonomia was assessed by a dedicated clinical score, time-domain heart rate variability, 123-meta-iodobenzylguanidine heart/mediastinum (123-MIBG H/M) ratio on scintigraphy, and heart rate response to atropine (HRRA). Results: Patient median age was 43 years, 62% were male, and 69% carried the Val30Met mutation. Cardiac dysautonomia was documented by at least 1 technique for all patients but 6 (97%). In univariate analysis, clinical score, 123-MIBG H/M ratio and HRRA were associated with mortality, but heart rate variability was not. The 123-MIBG H/M ratio and HRRA had greater area under the curve (AUC) of receiver-operating characteristic curves than clinical score and heart rate variability (AUC: 0.787, 0.748, 0.656, and 0.523, respectively). Multivariate score models were then built using the following variables: New York Heart Association functional class, interventricular septum thickness, and either 123-MIBG H/M ratio (SMIBG) or HRRA (Satropine). AUC of SMIBG and Satropine were greater than AUC of univariate models, although nonsignificantly (AUC: 0.798 and 0.799, respectively). Predictive powers of SMIBG, Satropine, and a reference clinical model (AUC: 0.785) were similar. Conclusions: Evaluation of cardiac dysautonomia is a valuable addition for predicting survival of mATTR patients following LT. Among the different techniques that evaluate cardiac dysautonomia, 123-MIBG scintigraphy and heart rate response to atropine had better prognostic accuracy. Multivariate models did not significantly improve prediction of outcome.
Diflunisal, a nonsteroidal anti-inflammatory drug, can stabilize the TTR tetramer in vitro and may prevent misfolding monomers and dimers from forming amyloid deposits in the heart. It is one of two small molecules assessed in animal safety studies and human clinical trials of TTR polyneuropathy. The authors conducted a single-arm, open-label investigation with a mean follow-up of 0.9 ± 0.3 years to determine the safety and efficacy of diflunisal administration in a cohort of 13 patients with confirmed wild-type or mutant TTR cardiac amyloidosis. Diflunisal was well tolerated from a hematologic standpoint, although a 6% decline in estimated glomerular filtration rate was noted. Therapy was discontinued in one patient who rapidly developed volume overload. There was no significant mean change in cardiac structure (left ventricular mass: -53 g/m² change, P=.36), function (ejection fraction: -2% change, P=.61), or biomarkers (Troponin I: +0.03 ng/mL, P=.08; BNP: +93 pg/mL change, P=.52) during the course of therapy. These data suggest that at low dosages and with careful monitoring, diflunisal can be safely administered to compensated patients with cardiac TTR amyloidosis. Further study in a randomized placebo-controlled trial is warranted. abstract_id: PUBMED:21145840 The heart in liver transplantation. The heart and liver are organs that are closely related in both health and disease. Patients who undergo liver transplantation may suffer from heart disease that is: (a) related to the original cause of the liver disease such as hemochromatosis, (b) related to the liver disease itself, or (c) related to other associated conditions. Furthermore, liver transplantation is one of the most cardiovascularly stressful events that a patient with cirrhosis may undergo. After liver transplantation, the progression of pre-existing or the development of new-onset cardiac disease may occur. This article reviews the relationship between the heart and liver transplantation in the pre-transplant, intra-operative, and post-transplant periods. Answer: Liver transplantation for familial amyloid polyneuropathy (FAP) non-Val30Met variants is associated with significant cardiac complications, despite the use of prophylactic pacing and immunosuppressive weaning strategies. A retrospective evaluation of 11 recipients, most of whom were affected by FAP variants, revealed that cardiac complications were still a significant issue post-transplantation. Prophylactic pacing was used intraoperatively in 90.9% of the recipients, and rapid postoperative weaning from tacrolimus was implemented, mainly facilitated by thymoglobulin. Despite these measures, two patients experienced cardiac complications, including one who died due to a myocardial infarction and another who developed tachyarrhythmia but was successfully treated with beta-blockers and amiodarone (PUBMED:15964382). The development of arrhythmia requiring pacemaker insertion (PMI) in Swedish transplant patients with FAP was also investigated, and it was found that liver transplantation (LT) does not prevent the development of heart arrhythmia necessitating PMI. The development of arrhythmia was unrelated to gender or age at disease onset, and the yearly risk did not appear to decrease with time after LT (PUBMED:21280184). In a single-center experience, cardiac symptoms occurred predominantly in patients with non-Val30Met mutations and prompted combined heart-liver transplantation in 4 patients.
The study concluded that the clinical course in patients with FAP is very variable, and cardiac symptoms may lead to life-threatening complications or a fatal outcome despite LT (PUBMED:20209591). In summary, while prophylactic pacing and immunosuppressive weaning are used to mitigate cardiac complications in FAP non-Val30Met variant patients undergoing liver transplantation, these complications remain a significant concern postoperatively. The development of arrhythmias post-transplantation is a notable issue, and the risk persists regardless of the preventive measures taken (PUBMED:15964382; PUBMED:21280184; PUBMED:20209591).
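Before moving on, a brief aside on the receiver-operating-characteristic comparison reported in PUBMED:27838303 (AUCs of 0.787, 0.748, 0.656, and 0.523 across dysautonomia measures): the sketch below shows how such AUCs are typically computed. It is a minimal illustration in Python using scikit-learn; the outcome data, effect sizes, and variable names are synthetic placeholders, not values from the study.

    # Minimal sketch: ranking predictors of post-LT mortality by ROC AUC,
    # in the spirit of PUBMED:27838303. All data here are synthetic.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 215                              # cohort size quoted in the abstract
    died = rng.binomial(1, 0.3, n)       # hypothetical mortality outcome

    # Hypothetical predictors: decedents get lower MIBG H/M ratios and a
    # more blunted heart-rate response to atropine, on average.
    mibg_hm = rng.normal(1.6, 0.3, n) - 0.2 * died
    hrra = rng.normal(20.0, 8.0, n) - 5.0 * died

    # Negate the predictors so that higher scores mean higher predicted risk.
    for name, score in [("123-MIBG H/M", -mibg_hm), ("HRRA", -hrra)]:
        print(f"{name}: AUC = {roc_auc_score(died, score):.3f}")

In the study itself the AUCs came from observed survival after transplantation; this sketch only demonstrates the mechanics of the comparison.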
Instruction: Utilizing Telemedicine in the Trauma Intensive Care Unit: Does It Impact Teamwork? Abstracts: abstract_id: PUBMED:25885369 Utilizing Telemedicine in the Trauma Intensive Care Unit: Does It Impact Teamwork? Background: The aim of this study was to examine the impact of a telemedical robot on trauma intensive care unit (TICU) clinician teamwork (i.e., team attitudes, behaviors, and cognitions) during patient rounds. Materials And Methods: Thirty-two healthcare providers who conduct rounds volunteered to take surveys assessing teamwork attitudes and cognitions at three time periods: (1) the onset of the study, (2) the end of the 30-day control period, and (3) the end of the 30-day experimental period, which immediately followed the control period. Rounds were recorded throughout the 30-day control period and 30-day experimental period to observe provider behaviors. For the initial 30 days, there was no access to telemedicine. For the final 30 days, the rounding healthcare providers had access to the RP-7 robot (Intouch Health Inc., Santa Barbara, CA), a telemedical tool that can facilitate patient rounds conducted away from bedside. Results: Using a one-tailed, one-way repeated-measures analysis of variance (ANOVA) to compare trust at Times 1, 2, and 3, there was no significant effect on trust: F(2, 14)=1.20, p=0.16. When a one-tailed, one-way repeated-measures ANOVA to compare transactive memory systems (TMS) at Times 1, 2, and 3 was conducted, there was no significant effect on TMS: F(2, 15)=1.33, p=0.15. We conducted a one-tailed, one-way repeated-measures ANOVA to compare team psychological safety at Times 1, 2, and 3, and there was no significant effect on team psychological safety: F(2,15)=1.53, p=0.12. There was a significant difference in communication between rounds with and without telemedicine [t(25)=-1.76, p<0.05], such that there was more task-based communication during telerounds. Telemedicine increased task-based communication and did not negatively impact team trust, psychological safety, or TMS during rounds. Conclusions: Telemedicine may offer advantages for some teamwork competencies without sacrificing the efficacy of others and may be adopted by intact rounding teams without hindering teamwork. abstract_id: PUBMED:33216935 Trauma, Teams, and Telemedicine: Evaluating Telemedicine and Teamwork in a Mass Casualty Simulation. Introduction: Mass casualty events (MASCAL) are on the rise globally. Although natural disasters are often unavoidable, the preparation to respond to unique patient demands in MASCAL can be improved. Utilizing telemedicine can allow for a better response to such disasters by providing access to a virtual team member with necessary specialized expertise. The purpose of this study was to examine the positive and/or negative impacts of telemedicine on teamwork in teams responding to MASCAL events. Methods: We introduced a telemedical device (DiMobile Care) to Forward Surgical Teams during a MASCAL simulated training event. We assessed teamwork-related attitudes, behaviors, and cognitions during the MASCAL scenario through pre-post surveys and observations of use. Analyses compare users and nonusers of telemedicine and pre-post training differences in teamwork. Results: We received 50 complete responses to our surveys. 
Overall, clinicians have positive reactions toward the potential benefits of telemedicine; further, participants report a significant decrease in psychological safety after training, with users rating psychological safety as significantly higher than non-telemedicine users. Neither training nor telemedicine use produced significant changes in cognitive and behavioral-based teamwork. Nonetheless, participants reported perceiving that telemedicine improved leadership and adaptive care plans. Conclusions: Telemedicine shows promise in connecting Forward Surgical Teams with nuanced surgical expertise without harming quality of care metrics (i.e., teamwork). However, we advise future iterations of DiMobile Care and other telemedical devices to consider contextual features of information flow to ensure favorable use by teams in time-intensive, high-stakes environments, such as MASCAL. abstract_id: PUBMED:27365321 Telemedicine coverage for post-operative ICU patients. Introduction There is an increased demand for intensive care unit (ICU) beds. We sought to determine if we could create a safe surge capacity model to increase ICU capacity by treating ICU patients in the post-anaesthesia care unit (PACU) utilizing a collaborative model between an ICU service and a telemedicine service during peak ICU bed demand. Methods We evaluated patients managed by the surgical critical care service in the surgical intensive care unit (SICU) compared to patients managed in the virtual intensive care unit (VICU) located within the PACU. A retrospective review of all patients seen by the surgical critical care service from January 1st 2008 to July 31st 2011 was conducted at an urban, academic, tertiary centre and level 1 trauma centre. Results Compared to the SICU group (n = 6652), patients in the VICU group (n = 1037) were slightly older (median age 60 (IQR 47-69) versus 58 (IQR 44-70) years, p = 0.002) and had lower acute physiology and chronic health evaluation (APACHE) II scores (median 10 (IQR 7-14) versus 15 (IQR 11-21), p < 0.001). The average amount of time patients spent in the VICU was 13.7 ± 9.6 hours. In the VICU group, 750 (72%) of patients were able to be transferred directly to the floor; 287 (28%) required subsequent admission to the surgical intensive care unit. All patients in the VICU group were alive upon transfer out of the PACU while mortality in the surgical intensive care unit cohort was 5.5%. Discussion A collaborative care model between a surgical critical care service and a telemedicine ICU service may safely provide surge capacity during peak periods of ICU bed demand. The specific patient populations for which this approach is most appropriate merit further investigation. abstract_id: PUBMED:26281718 Tele-cooperation for innovative care using the example of the University Hospital Aachen. Telematics in intensive care medicine, emergency medicine, and telemedical intersectoral rehabilitation planning in geriatric trauma. The demographic challenge of the ageing society is associated with increasing comorbidity. On the other hand, there will be an ageing workforce in medicine, resulting in an imbalance between the demand and supply of medical care in the near future. In rural areas in particular, this imbalance is already present today.
Based on three best practice projects carried out by our telemedical center in Aachen, including emergency medicine, intensive care medicine, and the rehabilitation planning of geriatric trauma care, some experience and the potential of the intersectoral provision of care, supported by telemedicine, are demonstrated. Telemedicine is the provision of medical services over a geographical distance by using telecommunication and data transfer. It has been proven to ensure a constant quality of health care. Telemedical support enables shared expertise independent of time and space, and allows efficient allocation of resources. A review of international experience supports this notion. abstract_id: PUBMED:31142147 A Novel Telemedicine System for Care of Statewide Hand Trauma. Background: Telemedicine is an evolving tool to increase patients' access to subspecialty care. Since 2014, Arkansas has been utilizing telemedicine in the evaluation of patients with hand injuries. The purpose of this study is to assess the effect of this novel telemedicine system for the management of hand trauma on patient transfer. Methods: We reviewed data from the first year of the telemedicine program (2014) and compared it to data from the year prior (2013). Data collection from both years included number of hand consults and need for transfer. From the 2014 data, we also recorded the use of telemedicine, type of transfer, distance of transfer, and time to disposition. Results: During 2013 (pre-telemedicine), there were 263 hand traumas identified. In all, 191 (73%) injuries required transfer to a higher level of care, while 72 (23%) were managed locally. In the first year of the telemedicine program (2014), a total of 331 hand injuries were identified. A total of 298 (90%) resulted in telemedicine consultation with 65% (195) utilizing video encounters. After telemedicine consultation, local management was recommended for 164 injuries (55%) while transfer was recommended for 134 (45%). Using telemedicine, there was a significant decrease in the percentage of transfer for hand injuries (P < .001). Conclusions: The telemedicine program was well utilized and provided patients throughout the state with continuous access to fellowship-trained hand surgeons, including in regions where hand subspecialty care is not available. The program resulted in a significant decrease in the number of hospital transfers for the management of acute hand trauma. abstract_id: PUBMED:32622648 Physician and nurse practitioner roles in emergency, trauma, critical, and intensive care. Background: The delivery of emergency, trauma, critical, and intensive care services requires coordination among all members of the care team. Perceived teamwork and role clarity may vary among physicians (MDs) and nurse practitioners (NPs). Purpose: To examine differences in perceived roles and responsibilities of NPs and MDs practicing in emergency, trauma, critical, and intensive care. Methods: Secondary analysis of the National Survey of Emergency, Intensive, and Critical Care Nurse Practitioners and Physicians, a cross-sectional national survey of clinicians. Mail survey of randomly selected stratified cross-sectional samples of MDs and NPs drawn from national lists of clinicians in eligible specialties working in emergency, trauma, intensive, and critical care units in the United States. 814 clinicians (351 NPs and 463 MDs) were recruited nationally by postal mail survey.
Our initial sample included n = 2,063 clinicians, n = 1,031 NPs and n = 1,032 MDs in eligible specialties. Of these, 63.5% of NPs and 70.1% of MDs completed and returned the survey excluding those who were ineligible due to lack of current practice in a relevant specialty. Findings: NPs in ICU/CCU are more likely to be female and report working fewer hours than do MDs and provide direct care to more patients. 55% of NPs and 82% of MDs agree that their individual role in their unit is clear (p < .001); 34% of MDs and 42% of NPs agree that their unit is an example of excellent teamwork among professionals (p = 0.021); 41% of MD and 37% of NP clinicians (p = 0.061) agree that their teams are "prepared to provide outstanding care in a crisis or disaster." Perceived role clarity was significantly associated with increased perceptions of excellent teamwork and disaster preparedness. Discussion: At the time of this survey, a majority of NPs and MDs working in emergency, critical and intensive care did not agree that their teams were prepared for a crisis or disaster. Leaders of health organizations should encourage teamwork and professional role clarity to assist units to perform effectively in emergency and disaster preparedness. abstract_id: PUBMED:11549196 Telemedicine and trauma care. Advances in technology have made it possible for telemedicine to be used in multiple areas of medicine, including trauma care. Teleradiology and teleconsultation are becoming standard operating procedure for many rural facilities. Future uses of telemedicine include teleproctoring and telepresence surgery. The medicolegal and financial impact of telemedicine remains to be determined. The potential influence of telemedicine in the care of future trauma patients will likely be important and may alter patterns of referral, consultation, and treatment. abstract_id: PUBMED:34126795 A Smartphone Application for Teamwork and Communication in Trauma: Pilot Evaluation "in the Wild". Objective: To evaluate the potential for a smartphone application to improve trauma care through shared and timely access to patient and contextual information. Background: Disruptions along the trauma pathway that arise from communication, coordination, and handoff problems can delay progress through initial care, imaging diagnosis, and surgery to intensive care unit (ICU) disposition. Implementing carefully designed and evaluated information distribution and communication technologies may afford opportunities to improve clinical performance. Methods: This was a pilot evaluation "in the wild" using a before/after design, with 3-month, pre-/post-intervention data collection. Use statistics, usability assessment, and direct observation of trauma care were used to evaluate the app. Ease of use and utility were assessed using the technology acceptance model (TAM) and system usability scale (SUS). Direct observation deployed measures of flow disruptions (defined as "deviations from the natural progression of a procedure"), teamwork scores (T-NOTECHS), and treatment times (total time in emergency department [ED]). Results: The app was used in 367 (87%) traumas during the trial period. Usability was generally acceptable, with higher scores found by operating room (OR), ICU, and neuro and orthopedic users. Despite positive trends, no significant effects on flow disruptions, teamwork scores, or treatment times were observed.
Conclusions: Pilot trials of a clinician-centered smartphone app to improve teamwork and communication demonstrate potential value for the safety and efficiency of trauma care delivery as well as benefits and challenges of "in-the-wild" evaluation. abstract_id: PUBMED:15115563 The use of telemedicine to provide pediatric critical care consultations to pediatric trauma patients admitted to a remote trauma intensive care unit: a preliminary report. Objective: Injured pediatric patients in remote communities are often cared for at trauma centers that may be underserved with respect to pediatric specialty services. The objective of this study is to describe a pilot telemedicine project that allows a remote trauma center's adult intensive care unit to obtain nontrauma, nonsurgical-related pediatric critical care consultations for acutely injured children. Design: Nonconcurrent cohort design. Setting: A remote, level II trauma center's shock-trauma intensive care unit and a tertiary care children's hospital pediatric intensive care unit. Patients: Analyses were conducted on cohorts of pediatric trauma patients (<16 yrs) consecutively admitted to the remote adult intensive care unit, including historical control patients and patients who received and did not receive telemedicine consultations. Interventions: Telemedicine consultations were obtained at the discretion of the remote intensive care unit provider for nontrauma, nonsurgical medical issues. Measurements And Results: The Injury Severity Score and Trauma and Injury Severity Score were used to assess severity of injury and predicted mortality rates, respectively, for the patient cohorts. Parental and provider satisfaction with the telemedicine consultations was also described. Thirty-nine consultations were conducted on 17 patients from the 97 pediatric patients admitted during the 2-yr study. Patients who received consultations were younger (5.5 yrs vs. 13.3 yrs, p <.01) and were more severely injured (mean Injury Severity Score = 18.3 vs. 14.7, p =.07). Severity-adjusted mortality rates were consistent with Trauma and Injury Severity Score expectations. Satisfaction surveys suggested a high level of provider and parental satisfaction. Conclusions: Our report of a trauma intensive care unit based pediatric critical care telemedicine program demonstrates that telemedicine consultations to a remote intensive care unit are feasible and suggests a high level of satisfaction among providers and parents. abstract_id: PUBMED:26893947 Convergence of Health Level Seven Version 2 Messages to Semantic Web Technologies for Software-Intensive Systems in Telemedicine Trauma Care. Objectives: To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. Methods: This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. 
Results: Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated and the RDF-based semantics recovered by the receiving side of the prototype from the shared XML schema. Conclusions: This study demonstrated the semantic enrichment of HL7v2 messages for software-intensive telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures compliance of the HL7v2 standard with Semantic Web technologies. Answer: The utilization of telemedicine in the Trauma Intensive Care Unit (TICU) has been studied to determine its impact on teamwork, which encompasses team attitudes, behaviors, and cognitions during patient rounds. One study examined the effect of a telemedical robot on clinician teamwork in the TICU and found that while telemedicine increased task-based communication, it did not negatively impact team trust, psychological safety, or transactive memory systems (TMS) during rounds (PUBMED:25885369). In the context of mass casualty events (MASCAL), another study introduced a telemedical device to Forward Surgical Teams during a simulated training event. The study found that clinicians had positive reactions toward the potential benefits of telemedicine, and while it did not produce significant changes in cognitive and behavioral-based teamwork, participants perceived that telemedicine improved leadership and adaptive care plans (PUBMED:33216935). Additionally, a collaborative care model between a surgical critical care service and a telemedicine ICU service was shown to safely provide surge capacity during peak periods of ICU bed demand, suggesting that telemedicine can be effectively integrated into ICU workflows without compromising patient safety (PUBMED:27365321). Furthermore, telemedicine has been used to provide pediatric critical care consultations to pediatric trauma patients in remote trauma ICUs, demonstrating feasibility and high levels of provider and parental satisfaction (PUBMED:15115563). In summary, the studies suggest that telemedicine can be adopted by trauma care teams without hindering teamwork and may offer advantages in communication and leadership during patient care, including in high-stakes environments like MASCAL events and remote ICUs. The evidence also indicates that telemedicine can be a valuable tool for providing specialized consultations and managing surge capacity in ICUs, potentially improving the quality of trauma care delivered across different settings.
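As a closing aside on PUBMED:26893947, the schema-validation step it describes — checking an XML extract against an XML Schema that constrains a common reference model — can be sketched in a few lines of Python with lxml. The schema file name and the toy PHTLS-style payload below are hypothetical stand-ins; the study itself used prototype applications and the Mirth middleware rather than this code.

    # Minimal sketch of XSD validation for an XML payload carried in an
    # HL7v2 message, assuming a schema file named reference_model.xsd exists.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("reference_model.xsd"))

    payload = etree.fromstring(
        b"<assessment><airway status='patent'/><breathing rate='18'/></assessment>"
    )

    if schema.validate(payload):
        print("payload conforms to the common reference model")
    else:
        print(schema.error_log)  # lists the constraint violations

The same validation can run before the payload is wrapped into an HL7v2 segment and again after it is extracted on the receiving side, which is the equivalence the study's Results describe.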
Instruction: Percutaneous nephrostomy for ureteric obstruction due to advanced pelvic malignancy: have we got the balance right? Abstracts: abstract_id: PUBMED:23666587 Percutaneous nephrostomy for ureteric obstruction due to advanced pelvic malignancy: have we got the balance right? Purpose: The optimal management of patients with ureteric obstruction in advanced pelvic malignancy is unclear. Effective judgment is required to decide which patients would benefit most from decompression of the urinary tract. The objective of our study was to assess survival and complication rates post-percutaneous nephrostomy (PCN) in patients with ureteric obstruction due to advanced pelvic malignancy. Methods: A detailed retrospective case review of all patients who underwent PCN for ureteric obstruction due to pelvic malignancy in one calendar year was conducted to assess indication, survival time, length of stay post-procedure and complications. Results: Thirty-six nephrostomies were performed on 22 patients with prostate cancer being the commonest primary (55 %). Renal failure was the commonest mode of presentation (56 %). Eight patients (36 %) presented without a prior diagnosis of cancer. All PCNs except one were initially technically successful, and 56 % of renal units were able to be antegradely stented and rendered free of nephrostomy. Median survival post-nephrostomy was 78 days (range 4-1,137), with the subset of bladder cancer patients having the poorest survival. Dislodgement of the nephrostomy tube was the most common troublesome complication which led to the greatest morbidity, sometimes requiring repeat nephrostomy insertion. Patients stayed for a median of 23 (range 3-89) days in hospital, which amounted to 29 % of their remaining lifetime spent in hospital. Conclusions: Although effective in improving renal function, PCN is a procedure not without associated morbidity and does not always prolong survival. Therefore, the decision to decompress an obstructed kidney with advanced pelvic malignancy should not be taken lightly. We recommend that such cases be discussed in a multidisciplinary setting, and a decision is taken only after a full informed discussion involving patients and their relatives. abstract_id: PUBMED:3176936 Ultrasound-guided percutaneous nephrostomy in the treatment of advanced gynecologic malignancy. Adjuvant to cancer therapy, 35 percutaneous ultrasound-guided nephrostomies have been applied in 21 patients with ureteral occlusions or lesions secondary to advanced gynecologic malignancies. Application of the catheters was uncomplicated. Dislodgement of the catheters was the major problem with this technique, occurring in 9 cases. There were no serious complications. In 14 patients, normal ureteral function was restored after the cancer therapy, 2 had to be nephrectomized, and in 5 patients the nephrostomies had to remain permanently. It is concluded that nephrostomy is a useful tool in the management of advanced gynecologic malignancies in patients who have a prospect of cure or a period of meaningful tumor regression. abstract_id: PUBMED:32319003 Percutaneous nephrostomy in obstructing pelvic malignancy does not facilitate further oncological treatment. Objective: The optimal management of patients with ureteric obstruction in advanced malignancy is unclear. How quality of life is affected by a nephrostomy and how many of these patients undergo further oncological treatment remains uncertain. 
The objective of this retrospective multicentre study was to look at the outcomes of patients who had percutaneous nephrostomy insertion for malignancy. Methods: We identified patients who had a nephrostomy inserted for ureteric obstruction due to malignancy at our institution from January 2015 to December 2018. We obtained data retrospectively from our electronic patient record system. Patients who had nephrostomy insertion for other causes such as ureteric calculi or injury were excluded from the study. Results: 105 patients underwent nephrostomy insertion during this time interval. 51.42% of patients (n = 54) had urological malignancies (bladder and/or prostate cancer). The median LOS was 14 days (range 1-104 days) post-procedure and 39.04% (n = 41) had at least one 30-day readmission to hospital. The average starting creatinine level was 348 µmol/L (range 49-1133) and the average creatinine at discharge was 170 µmol/L (range 44-651). Although the average change in the creatinine (190 µmol/L) is statistically significant (p < 0.001), it did not seem to prolong the patients' lives. Only 26 (24.76%) patients were alive (all-cause mortality) at the end of the 4-year period, with an average life expectancy of 139 days following nephrostomy. Only 30.47% of patients (n = 32) underwent further oncological treatment. Conclusion: In our series, most patients who had nephrostomy insertion for ureteric obstruction due to malignancy had no further oncological treatment following insertion. Percutaneous nephrostomy is a procedure not without associated morbidity and does not always prolong survival. Due to the poor prognosis in cases of advanced malignancy, we advocate multi-disciplinary decision-making prior to nephrostomy insertion. abstract_id: PUBMED:12028623 Percutaneous nephrostomy versus indwelling ureteral stents in patients with bilateral nongenitourinary malignant extrinsic obstruction. We present five cases of bilateral extrinsic ureteral obstruction in advanced nongenitourinary abdominal malignancy. They show the superiority of percutaneous nephrostomy to indwelling ureteral stents in providing diversion. We suggest that percutaneous nephrostomy drainage should be the management of choice in extrinsic obstruction by advanced abdominal malignancies. abstract_id: PUBMED:15877830 Indications for percutaneous nephrostomy in patients with obstructive uropathy due to malignant urogenital neoplasias. Introduction: Urogenital neoplasias frequently progress with obstructive uropathy due to local spreading or pelvic metastases. The urinary obstruction must be immediately relieved in order to avoid deterioration in these patients. Percutaneous nephrostomy is a safe and effective method for relieving the obstruction; however, the indications for such procedures have been questioned in patients with poor prognosis. Materials And Methods: A retrospective study was performed with 43 patients (29 female and 14 male) with urogenital neoplasias who were undergoing percutaneous nephrostomy during a 54-month period. The median age was 52 years. The primary tumoral site was the uterine cervix in 53.5% of patients, the bladder in 23.3%, the prostate in 11.6% and other sites in 11.6%. Results: Postoperative complications occurred in 42.3% of the patients. There was no procedure-related mortality. Thirty-nine per cent of the patients died during the hospitalization period due to advanced neoplasia.
The mortality rate was higher in patients with prostate cancer (p = 0.006), in patients over 52 years of age (p = 0.03) and in those who required hemodialysis before the procedure (p = 0.02). Thirty-two per cent of the patients survived long enough to undergo some form of treatment focused on the primary tumor. The survival rate was 40% at 6 months and 24.2% at 12 months. The percentage of the lifetime spent in hospitalization was 17.7%. The survival rate was higher in patients with neoplasia of the uterine cervix (p = 0.007) and in patients aged 52 years or less (p = 0.008). Conclusion: Morbidity was high in this patient group; however, the majority of patients could be discharged from hospital and followed at home. Patients under 52 years of age and patients with neoplasia of the uterine cervix benefited most from the percutaneous nephrostomy when compared with patients with hormone therapy-refractory prostate cancer or bladder cancer, or those over 52 years of age. abstract_id: PUBMED:8428414 The role of percutaneous nephrostomy in malignant urinary tract obstruction. Percutaneous nephrostomy is a well-established technique for rapid relief of ureteric obstruction and improvement of renal function. However, its role in the management of renal failure resulting from advanced abdominopelvic malignancy is controversial and there are no clear guidelines to predict which patients benefit from such intervention both in terms of survival time and quality of life. To establish a protocol for selection of patients with abdominopelvic malignancy most likely to benefit from nephrostomy for renal obstruction, the medical records of 50 consecutive patients undergoing this procedure at the Royal Marsden Hospital were reviewed. The patients were divided into four groups: Group I, renal obstruction caused by a nonmalignant complication as a result of previous surgery or radiotherapy (n = 8); Group II, renal obstruction due to untreated primary malignancy (n = 16); Group III, renal obstruction from relapsed disease with a viable treatment option (n = 8); and Group IV, relapsed disease with no conventional treatment option (n = 18). There was significant benefit from percutaneous nephrostomy in Groups I-III. The overall median survival time of Group IV patients was extremely poor: 38 days (range 6-143 days) with no long-term survivors. The results suggest that strict selection criteria should be applied to patients with a history of abdominopelvic malignancy before proceeding to percutaneous nephrostomy. No worthwhile benefit is obtained if nephrostomy is used as a palliative measure in the absence of definitive treatment. abstract_id: PUBMED:31660792 Easy Method for Percutaneous Nephrostomy Placement with Transrectal Ultrasonography Probe. Aim: Percutaneous nephrostomy (PN), an indispensable interventional procedure in situations such as obstructive uropathy and pyonephrosis, requires considerable experience. As a result, many urologists and radiologists avoid this procedure. However, this can leave both doctor and patient in a difficult situation when it is necessary. In this study, we aim to present a nephrostomy insertion technique using a transrectal ultrasonography probe, as practised in our clinic, together with our results; we believe this technique has made the PN procedure easier. Patients and Methods: Data from patients with PN inserted by using this method at our clinic from January 2013 to December 2018 were retrospectively scanned.
The nephrostomy indications, complications, and success rates for patients were assessed. All procedures were performed under local anesthesia without the help of a radiologist. Results: A total of 84 patients, 55 male and 29 female, had 146 nephrostomy catheters inserted. Nephrostomy was performed in 52 patients due to obstruction linked to malignancy and in 32 patients due to reasons other than malignancy. The procedure was successfully concluded in all patients independent of degree of hydronephrosis. Conclusion: With our technique, PN was found to be performed easily with low complication rates. abstract_id: PUBMED:33312321 Cutaneous metastatic seeding as a sequela of nephrostomy catheter placement. Urothelial carcinoma and nephrolithiasis are common causes of obstructive uropathy which can be relieved by percutaneous nephrostomy catheter placement. A rare but known complication of this procedure is iatrogenic seeding of tumor cells along the nephrostomy tract. We describe the case of a 68-year-old female with cutaneous metastasis of high-grade urothelial carcinoma with seeding of tumor cells along the percutaneous nephrostomy catheter tract 8 months after the removal of the catheter. Given its severity, interventional radiologists should be mindful of the number of percutaneous access attempts, exchanges, and catheter manipulations in patients with urothelial carcinoma due to the risk of metastatic seeding along the percutaneous tract or to nearby tissues. abstract_id: PUBMED:8499993 Re-evaluation of indications for percutaneous nephrostomy and interventional uroradiological procedures in pelvic malignancy. Twenty-two patients with advanced cancer involving the pelvis were treated by percutaneous uroradiological techniques. Percutaneous nephrostomy was performed for renal failure or urosepsis or before chemotherapy. In 8 patients, ureteric stents were also placed by the antegrade route, across malignant ureteric strictures, following nephrostomy. In another 8 patients, the ureteric obstruction could not be crossed and permanent nephrostomies were required. Fifteen patients were able to achieve a useful life, but in the other 7 patients there was no improvement in their quality of life and they all died 1 month after intervention. Percutaneous nephrostomy also contributed to the death of 1 patient. Not all obstructed kidneys require drainage, and in patients with disseminated or advanced disease involving the pelvis, the indications for intervention need to be individually assessed. An improvement in laboratory criteria of renal function following intervention does not necessarily result in improvement in quality of life. This retrospective study highlights the difficulty in selection of patients with advanced disease for intervention and previously suggested guidelines for intervention are reviewed. abstract_id: PUBMED:12874953 Percutaneous nephrostomy--a retrospective study focused on palliative indications. We present an overview of the fast development of less invasive techniques in intrarenal surgery, all based on percutaneous nephrostomy. Life-long urinary diversion with nephrostomy is often necessary in patients with malignant disease; such patients have more postnephrostomy complications than patients with kidney stones, and their survival is short.
In a follow-up of 246 patients with 275 nephrostomies performed consecutively over two years, mean survival after urinary diversion was only 7.9 months in 38 prostate cancer patients with ureteral obstruction and only 5.3 months in 20 patients with advanced bladder cancer. We emphasize the necessity of informing the patient and his/her family of the expected outcome of the procedure and the importance of using carefully chosen and realistic indications. Answer: The balance of percutaneous nephrostomy (PCN) for ureteric obstruction due to advanced pelvic malignancy is a complex issue, as the procedure can improve renal function but does not always prolong survival and is associated with significant morbidity. Studies have shown that while PCN can be technically successful and improve renal function, the median survival post-nephrostomy is often limited, with a significant portion of patients' remaining lifetime spent in hospital (PUBMED:23666587). Moreover, complications such as catheter dislodgement can lead to morbidity and sometimes require repeat procedures (PUBMED:23666587; PUBMED:3176936). In cases of advanced malignancy, many patients do not undergo further oncological treatment following PCN insertion, indicating that the procedure may not facilitate additional cancer therapy (PUBMED:32319003). The decision to perform PCN should be made with caution, and it is recommended that cases be discussed in a multidisciplinary setting with a full informed discussion involving patients and their relatives (PUBMED:23666587). Comparatively, PCN has been suggested as superior to indwelling ureteral stents for providing diversion in patients with bilateral nongenitourinary malignant extrinsic obstruction (PUBMED:12028623). However, the indications for PCN in patients with poor prognosis due to malignant urogenital neoplasias have been questioned, with high morbidity observed and a significant number of patients dying during hospitalization (PUBMED:15877830). The role of PCN in the management of renal failure resulting from advanced abdominopelvic malignancy is controversial, and strict selection criteria should be applied to ensure that patients are likely to benefit from the intervention in terms of survival time and quality of life (PUBMED:8428414). Additionally, rare but serious complications such as cutaneous metastatic seeding can occur (PUBMED:33312321). In conclusion, while PCN can be an effective method for relieving obstruction and improving renal function in patients with advanced pelvic malignancy, it does not always translate into prolonged survival or improved quality of life.
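A short technical aside on the survival figures quoted throughout this record (median survival of 78 days in PUBMED:23666587; 40% six-month and 24.2% twelve-month survival in PUBMED:15877830): estimates like these are conventionally obtained from a Kaplan-Meier fit. The sketch below, using the lifelines library on synthetic durations, shows the mechanics only; none of the numbers are patient data from the abstracts.

    # Minimal Kaplan-Meier sketch with synthetic follow-up data.
    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(1)
    days = rng.exponential(scale=160, size=43)  # hypothetical survival times (days)
    observed = rng.binomial(1, 0.9, size=43)    # 1 = death observed, 0 = censored

    kmf = KaplanMeierFitter()
    kmf.fit(days, event_observed=observed)

    print("median survival (days):", kmf.median_survival_time_)
    print(kmf.survival_function_at_times([180, 365]))  # ~6- and 12-month survival

Censoring matters here: patients still alive at last follow-up must enter the estimate as censored observations rather than being dropped, which is why simple percentages can misstate survival.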
Instruction: Does the risk of developing dementia increase with the number of episodes in patients with depressive disorder and in patients with bipolar disorder? Abstracts: abstract_id: PUBMED:15548477 Does the risk of developing dementia increase with the number of episodes in patients with depressive disorder and in patients with bipolar disorder? Objective: Several findings suggest that some patients with depressive or bipolar disorder may be at increased risk of developing dementia. The present study aimed to investigate whether the risk of developing dementia increases with the number of affective episodes in patients with depressive disorder and in patients with bipolar disorder. Methods: This was a case register study including all hospital admissions with primary affective disorder in Denmark during 1970-99. The effect of the number of prior episodes leading to admission on the rate of readmission with a diagnosis of dementia following the first discharge after 1985 was estimated. A total of 18,726 patients with depressive disorder and 4,248 patients with bipolar disorder were included in the study. Results: The rate of a diagnosis of dementia on readmission was significantly related to the number of prior affective episodes leading to admission. On average, the rate of dementia tended to increase by 13% with every episode leading to admission for patients with depressive disorder and by 6% with every episode leading to admission for patients with bipolar disorder, when adjusted for differences in age and sex. Conclusion: On average, the risk of dementia seems to increase with the number of episodes in depressive and bipolar affective disorders. abstract_id: PUBMED:31742842 A specific group of patients with diagnostic conversion from depression to bipolar disorder and finally to dementia as a mental GSK-3 disease: A hypothesis. Objectives: To focus on a specific group of patients whose diagnoses were changed from unipolar depression to bipolar disorder, and finally to dementia. Methods: Qualitative review of the relevant articles. Results: A substantial portion of patients with unipolar depression manifest manic and/or hypomanic episodes, resulting in a change of diagnoses to bipolar disorder. Further, a substantial portion of bipolar patients develop dementia. Previous research suggests that genetic variants in the glycogen synthase kinase 3β (GSK-3β) gene are associated with early onset of unipolar depression, a subset of which may be bipolar depression, with three SNPs (rs334555, rs119258668, and rs11927974) identified. Notably, another promoter SNP (rs334558) of the GSK-3β gene is reportedly associated with unipolar depression, bipolar disorders, and dementia. Additionally, lithium, which is reported to inhibit GSK-3, is generally accepted to be effective for bipolar disorder and recently reported to be effective for dementia. Conclusions: There is a specific group of patients whose diagnoses longitudinally change from depression to bipolar disorder and finally to dementia, and GSK-3 may be a common etiological factor in these diseases and in diagnostic conversions. abstract_id: PUBMED:27858964 Evidence for clinical progression of unipolar and bipolar disorders. Objective: It is a widely held belief that affective disorders are progressive in nature; however, some recent reviews have questioned this belief.
The objective of the present systematic literature review was to present evidence for associations between number of affective episodes and (i) the risk of recurrence of episodes, (ii) probability of recovery from episodes, (iii) severity of episodes, (iv) the threshold for developing episodes, and (v) progression of cognitive deficits in unipolar and bipolar disorders. Method: A systematic review comprising an extensive literature search conducted in Medline, Embase, and PsychInfo up to September 2016 and including cross-references from identified papers and reviews. Results: Most of the five areas have been only superficially investigated and are hampered by methodological challenges. Nevertheless, studies with the longest follow-up periods, using survival analysis methods and taking account of individual heterogeneity, all support a clinical progressive course of illness. Overall, an increasing number of affective episodes seems to be associated with (i) increasing risk of recurrence, (ii) increasing duration of episodes, (iii) increasing symptomatic severity of episodes, (iv) decreasing threshold for developing episodes, and (v) increasing risk of developing dementia. Conclusion: Although the course of illness is heterogeneous, there is evidence for clinical progression of unipolar and bipolar disorders. abstract_id: PUBMED:35264279 Prevalence and risk factors for suicide in patients with sepsis: nationwide cohort study in South Korea. Background: Although a recent study reported that survivors of critical illness have an increased risk of suicide, the suicide rate and factors associated with suicide in patients with sepsis have not yet been investigated. Aims: We aimed to examine the prevalence and risk factors of suicide among patients with sepsis in South Korea. Method: All adult patients who were admitted to all hospitals in South Korea with a main diagnosis of sepsis, from 1 January 2010 to 31 December 2018, were included in the study. The primary outcome was suicide within 1 year after sepsis diagnosis. Results: A total of 251 837 adult patients with sepsis were included, of whom 132 691 patients (52.7%) died within 1 year after the diagnosis of sepsis, and death by suicide was the cause in 3903 patients (1.5%). Older age, male gender, living in a rural area, higher Charlson Comorbidity Index and Elixhauser Comorbidity Index scores, invasive treatment (continuous renal replacement therapy and mechanical ventilator support) and admission to a hospital with low annual case volumes were associated with a higher risk of suicide. In addition, concurrent substance misuse, post-traumatic stress disorder, bipolar disorder, dementia and a previous suicide attempt or self-harm were associated with a higher risk of suicide. Conclusions: During the 1-year follow-up period, 1.5% of patients died by suicide after the diagnosis of sepsis in South Korea. Knowledge of the factors associated with suicide might allow for earlier intervention to potentially reduce the number of suicide attempts in patients with sepsis. abstract_id: PUBMED:22310032 Plasma β-amyloid peptides levels: a pilot study in bipolar depressed patients. Background: Patients with mood disorders present a great risk for dementia and generally for cognitive decline. Low levels of β-amyloid peptide 1-42 (Aβ42) and high Aβ40/Aβ42 ratio have been associated with this risk and have been reported also in geriatric patients suffering from depression.
The aim of the present study was to compare the plasma levels of Aβ40 and Aβ42 in patients with bipolar depression and healthy subjects, and to correlate them with the characteristics of clinical course. Methods: Levels of Aβ40 and Aβ42 were measured by using specific ELISA kits in 16 patients with bipolar depression and in 16 control subjects with a negative history for somatic, psychiatric, neurological and substance abuse disorders. Results: Patients presented significantly lower plasma Aβ42 levels and higher Aβ40/Aβ42 ratio, as compared with control subjects. Moreover, a significant negative correlation was found between Aβ42 plasma levels and the duration of the illness, while a positive correlation was detected between the Aβ40/Aβ42 ratio and the number of affective episodes. Limitations: The major limitations of the study are the small sample size, the scanty characterization of the illness episodes and the fact that all the patients were under psychopharmacological treatment. Conclusion: Although further research is necessary to establish firm conclusions, the present data would suggest that changes in plasma levels of different Aβ peptides might represent a useful tool to identify the risk for cognitive decline in bipolar patients. abstract_id: PUBMED:35753680 Risk of Mental Illnesses in Patients With Hypopituitarism: A Nationwide Population-Based Cohort Study. Objective: Associations between mental illnesses and hypopituitarism have been reported. However, pituitary disorders are rare, and epidemiological studies have rarely addressed the association between pituitary disorders and mental illnesses. Until now, no cohort study has been conducted to investigate this association. Methods: We performed a nationwide, retrospective cohort study using the Taiwanese National Health Insurance Program dataset to analyze this relationship. In total, 1,194 patients diagnosed with hypopituitarism between 2000 and 2013 were identified. For the control group, 4,776 individuals without hypopituitarism and psychotic diseases were matched (1:4) according to age, sex, and index date. A Cox proportional hazards model was used to determine the adjusted hazard ratio (aHR). Results: Patients with hypopituitarism had a significantly higher risk of incident depression and anxiety disorders than those without hypopituitarism. The aHRs of depressive and anxiety disorders were 2.98 and 1.67, respectively, for the hypopituitarism cohort. Furthermore, the risk of both hypopituitarism-associated depressive and anxiety disorders was significantly elevated in female subjects and subjects aged ≥18 years. A statistically significant increase was not observed in the risk of bipolar disorders, dementia, or schizophrenia in the hypopituitarism group compared with the control group. Conclusion: Although psychiatric morbidities were uncommon for the hypopituitarism cohort, the risk of developing depressive and anxiety disorders was significantly higher in those with hypopituitarism than in those without hypopituitarism. abstract_id: PUBMED:9476742 Clinical consequences of sensitisation in affective disorder: a case register study. Clinically derived measures of the initial course of episodes might reflect a process of sensitisation in affective disorder. However, the clinical consequences of such measures have not been investigated.
The predictive effect of measures of the initial course of episodes was investigated in relation to the subsequent risk of alcoholism, dementia, death and suicide attempts/suicide in a case register study including all hospital admissions with primary affective disorder in Denmark from 1971 to 1993. A total of 8737 patients with more than one episode were included in the analyses. A short period between initial episodes of the illness, reflecting a great intensity of illness, predicted increased risk of subsequent development of dementia, and for unipolar patients, decreased risk of subsequent alcoholism. Surprisingly, a progressive course, with decreasing intervals between initial episodes of the illness, had no predictive effect. Similarly, no predictive effects on the risk of death or suicidal acts could be demonstrated with any measure of the initial course of episodes. abstract_id: PUBMED:19697533 The analysis of bipolar spectrum features in drug resistant unipolar depression patients as compared to depressive patients who responded to standard antidepressant treatment. A pilot study. Aim: This is a pilot study of the multicenter TRES-DEP (Treatment Depression) Project. Its aim was the detection and analysis of bipolar spectrum features in drug-resistant unipolar depression patients in comparison to depressive patients who responded to standard antidepressant treatment and remitted. Sociodemographic and clinical parameters of the patients were also described. Method: 50 drug-resistant (group 1) and 50 non-drug-resistant (group 2) patients aged 18-65 years, fulfilling ICD-10 criteria for a depressive episode or recurrent depressive disorder, were included in the study. The main exclusion criteria were: a score >18 on the 17-item Hamilton Depression Rating Scale, treatment with mood stabilizers, the diagnosis of substance misuse, dementia, severe neurological or other somatic disease. The presence of bipolar spectrum was detected by the Mood Disorder Questionnaire (MDQ) and Hypomania Checklist Scale (HCL). Results: There were significantly more patients fulfilling MDQ or HCL bipolar spectrum criteria in group 1 than in group 2 (44% vs. 12%, p<0.001 and 62% vs. 34%, p<0.005, respectively). Significantly more drug-resistant patients, compared with non-refractory patients, considered their last remission as partial (88% vs. 52%, p=0.001). Non-refractory patients had a history of fewer depressive episodes (5.1±3.8 vs. 8.5±5.0; p=0.001) and reported a longer time from the last hospitalization (41.9±71.1 vs. 14.8±26.5 months, p<0.005). Significantly more patients from group 1 reported a family history of mental disorders, especially of alcoholism. Conclusion: The results of the study suggest that misdiagnosed and inadequately treated bipolarity may be one of the main reasons for non-response in the treatment of depression. abstract_id: PUBMED:37200139 Antidepressant Strategies for Treatment of Acute Depressive Episodes Among Veterans. Objective: The 2016 VA/DoD Clinical Practice Guideline for Management of Major Depressive Disorder offers consensus-based recommendations when response to the initial antidepressant medication is suboptimal; however, little is known about "real-world" pharmacological strategies used by providers treating depression in the Veterans Affairs Health Care System (VAHCS). Methods: We extracted pharmacy and administrative records of patients diagnosed with a depressive disorder and treated at the Minneapolis VAHCS between January 1, 2010 and May 11, 2021.
Patients with bipolar disorder, psychosis-spectrum, or dementia diagnoses were excluded. An algorithm was developed to identify antidepressant strategies: monotherapy (MONO); optimization (OPM); switching (SWT); combination (COM); and augmentation (AUG). Additional data extracted included demographics, service utilization, other psychiatric diagnoses, and clinical risk for hospitalization and mortality. Results: The sample consisted of 1298 patients, 11.3% of whom were female. The mean age of the sample was 51 years. Half of the patients received MONO, with 40% of those patients receiving inadequate doses. OPM was the most common next-step strategy. SWT and COM/AUG were used for 15.9% and 2.6% of patients, respectively. Overall, patients who received COM/AUG were younger. OPM, SWT, and COM/AUG occurred more frequently in psychiatric services settings and required a greater number of outpatient visits. The association between antidepressant strategies and risk of mortality became nonsignificant after accounting for age. Conclusions: Most of the veterans with acute depression were treated with a single antidepressant, while COM and AUG were rarely used. The age of the patient, and not necessarily greater medical risks, appeared to be a major factor in decisions about antidepressant strategies. Future studies should evaluate whether implementation of underutilized COM and AUG strategies early in the course of depression treatment is feasible. abstract_id: PUBMED:17099599 Links between life events, traumatism and dementia; an open study including 565 patients with dementia. Summary: Ageing is due to a progressive loss of the person's adaptation capability, whereas during this period environmental aggression increases. In the elderly, life events represent a psychological traumatism that overwhelms the old person and related family, disrupting and weakening homeostatic balance. A number of authors have suggested a possible link between life traumatisms and the dementia processes. The aim of this study is to reveal the presence of life traumatisms preceding the appearance of the dementia syndrome. Method: This is a retrospective and comparative work based on the PIXEL study on complaints and demands from the principal informal caregivers of Alzheimer patients. It includes 565 patients presenting the criterion of dementia as defined by the DSM-IV, and questionnaires filled out by the principal caregivers. One item of the questionnaire referred to life events which could have played a part in the development of the disorder. In a second stage, the reported events were classified into 4 distinct categories: loss, repeated or prolonged stress, psychotraumatism and depression-inducing events. The statistics were produced using SAS and Stat 10 software. Student's test, ANOVA and the χ²-test were used. Results: 372 caregivers answered the first item (65%); 76 of them believed there was no event while 296 related the disorder to one or several life events (79% of responders, 52% of the sample). These results confirm Persson and Clement's study which evidenced a higher frequency of stressful life events for subjects afflicted with dementia as compared with older people without any psychic disorder.
The reported events and their respective frequencies were: death of a spouse (15.39%), death of a parent (15%), family difficulties (10.08%), anaesthesia (8.49%), death of a child (4.42%), somatic disturbance (4%), depression (3.89%), retirement (3.89%), financial problems (2.65%), loneliness (2.65%), moving house (1.76%), a fall (1%), alcohol (0.8%), traumatism (0.53%), caring for a spouse (0.35%), and placement in home care, a storm, and a change of caregiver (0.17% each). Regrouping the data: 82.71% of the answers can be connected to prolonged stress, 62.85% to real or symbolic loss, and 39% to a psychotraumatic event; finally, most of these events can potentially induce depression. Discussion: Some of the events refer to difficulties concerning people close to the subject (death of a close relation, hospitalisation of a husband or wife) in a period when loss of autonomy or handicap means greater dependence on surrounding people. General anaesthesia was mentioned by 48 caregivers. In fact, this usually implies surgery, the presence of organic pathology or the need for hospitalisation, which we know has a destabilizing effect in the elderly. It is therefore not surprising that 23 answers mentioned somatic disorders (4% of the sample). According to Leger, loss-type events are most often implicated in the elderly. Such losses would induce a weakening of cognitive stimulation, which could decompensate an infraclinical dementia or accelerate an emerging dementia process. A history of depressive illness is considered a risk factor for the development of Alzheimer's disease. Depression was spontaneously mentioned by 22 caregivers. To this we must add the many reported life events that are well known to induce depression in older people. Most of the events considered in this study are liable to provoke manic mood swings. Depression resulting from life events can be considered either as a condition occurring alongside dementia, as an aggravating factor of an infraclinical process, or, finally, as an additional factor of vulnerability. In older people, many events may constitute a trauma because of the proximity of death and because of their sudden onset (fall, hospitalisation, somatic illness). Several studies have pointed out that a particularly traumatic event could increase the risk of dementia. Life events associated with chronic or repeated stress are characterized by their permanence or their repetition. According to a general psychosomatic-biological model, psychic distress will engender a cascade of degradations, as an acute or chronic response to an early trauma. According to this hypothesis, prolonged exposure to excess glucocorticoids during maladaptive stress would have deleterious effects on the hippocampus. Indeed, the hippocampus plays a part in a number of functions affected by dementia, such as memory, learning and emotional adjustment. This study takes into account stress factors ("stressors") but not factors influencing their impact on the subject, such as individual predisposition (genetic, psychopathologic, coping abilities) and social support. The strength of the impact of these events on older people, and what they really experience, remains unknown. This study reinforces a number of others showing an unusually high frequency of life events in dementia processes.
According to an integrative model, repeated or prolonged stress could be a deciding factor in the degenerative process, or a factor of decompensation in older people presenting a genetic, biological or psychological vulnerability to dementia. The impact of such life events would vary according to the subject's premorbid personality, coping abilities and the support he or she can rely on. Conclusion: Some life events may be involved in the dementia process, as shown by the results of this study, but this relationship does not imply direct causality. It is difficult to determine whether these results are simply a consequence of the greater attention paid to the patient after the appearance of the first symptoms, leading to closer observation. Stress could trigger the degenerative process. This argues for the necessity of an early diagnosis that takes traumatic life events, whether early or late, into account. Answer: Yes, the risk of developing dementia does increase with the number of episodes in patients with depressive disorder and in patients with bipolar disorder. A case register study in Denmark, which included all hospital admissions with primary affective disorder from 1970-99, found that the rate of a diagnosis of dementia on readmission was significantly related to the number of prior affective episodes leading to admission. Specifically, the rate of dementia tended to increase by 13% with every episode leading to admission for patients with depressive disorder and by 6% with every episode leading to admission for patients with bipolar disorder, when adjusted for differences in age and sex (PUBMED:15548477). Additionally, a systematic literature review up to September 2016 also supported the notion of a clinical progressive course of illness in affective disorders. The review found evidence that an increasing number of affective episodes is associated with an increasing risk of recurrence, increasing duration of episodes, increasing symptomatic severity of episodes, decreasing threshold for developing episodes, and an increasing risk of developing dementia (PUBMED:27858964). These findings suggest that there is a progressive nature to affective disorders, and that the accumulation of episodes may contribute to the risk of developing dementia in patients with depressive and bipolar disorders.
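As a rough numerical illustration of the per-episode figures quoted in the answer above, the sketch below compounds the adjusted 13% (depressive disorder) and 6% (bipolar disorder) per-episode increases multiplicatively. Whether the published estimates compound in exactly this way is an assumption, since the abstract reports only the per-episode rate increase.

    # Sketch: cumulative rate ratio after n admission-leading episodes,
    # assuming the per-episode increases act as multiplicative rate ratios.
    def cumulative_rate_ratio(per_episode_increase, n_episodes):
        return (1.0 + per_episode_increase) ** n_episodes

    for n in (1, 5, 10):
        depressive = cumulative_rate_ratio(0.13, n)  # +13% per episode
        bipolar = cumulative_rate_ratio(0.06, n)     # +6% per episode
        print(f"{n} episodes: depressive x{depressive:.2f}, bipolar x{bipolar:.2f}")

Under these assumptions, ten episodes would correspond to roughly a 3.4-fold dementia rate for depressive disorder and a 1.8-fold rate for bipolar disorder, relative to a single episode.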
Instruction: Is Obesity Associated With Adverse Workers' Compensation Claims Outcomes? Abstracts: abstract_id: PUBMED:27608149 Is Obesity Associated With Adverse Workers' Compensation Claims Outcomes? Background: A pilot study indicated that obesity was associated with an increased time lost from work and higher costs among workers' compensation claimants sustaining severe, but not minor injuries. Objective: The aim of this study was to further test the hypotheses by increasing sample size and controlling for additional confounding factors. Method: Cost and lost time outcomes were assessed for 2301 lost time workers' compensation claims filed in 2011 and 2012, followed to the end of the first quarter of the third postinjury year. Result: Adjusting for gender, age, marital status, attorney involvement, and spinal procedures, the odds ratios of incurring a claim expense of at least $100,000 after a severe injury for an overweight or obese versus normal weight claimant were 2.11 [95% confidence interval (95% CI): 1.04 to 4.29] and 2.23 (95% CI: 1.12-4.46), respectively. Conclusion: Obesity was associated with increased costs among workers' compensation claimants sustaining severe, but not minor injuries. abstract_id: PUBMED:26147547 Is Obesity Associated With Adverse Workers' Compensation Claims Outcomes? A Pilot Study. Background: Obesity is associated with an increased risk for many chronic illnesses and disability. Objective: To assess the relationship between obesity and delayed return to work and workers' compensation claim costs for both minor and significant injuries. Method: There were 1107 workers who filed a workers' compensation claim with the Louisiana Workers' Compensation Corporation in 2011 and who were followed up to the end of the first quarter of 2014. Result: Adjusting for sex, age, marital status, and attorney involvement, the odds ratio of an obese versus normal weight claimant returning to work after a severe injury was 3.58 (95% confidence interval, 1.26 to 10.14), and of incurring claim expenses of $100,000 or more, 3.19 (95% confidence interval, 1.18 to 8.62). Conclusions: Obesity was associated with increased time lost from work and costs among workers' compensation claimants sustaining severe, but not minor injuries. abstract_id: PUBMED:26340280 Relationship Between Body Mass Index and Workers' Compensation Claims and Costs: Results From the North Carolina League of Municipalities Database. Objective: To determine whether a relationship exists between elevated levels of body mass index (BMI) and workers' compensation measures. Methods: This was a retrospective analysis of 3951 workers' compensation claimants between 1981 and 2009 representing municipal workplaces. A BMI scale composed of seven levels, including two overweight tiers, was used. Results: Higher BMI levels were related to more lost workdays, indemnity costs, and total costs. Medical care costs were virtually the same in overweight and obese claimants and moderately higher than in claimants with recommended BMIs. Males were more likely to incur workers' compensation claims than females across all BMI tiers; yet obese females incurred twice as many lost workdays and indemnity costs, and nearly 50% higher medical costs and total costs, than obese men. Conclusions: Elevated levels of BMI negatively influence several workers' compensation outcome measures. abstract_id: PUBMED:17452538 Obesity and workers' compensation: results from the Duke Health and Safety Surveillance System.
Background: Obese individuals have increased morbidity and use of health services. Less is known about the effect of obesity on workers' compensation. The objective of this study was to determine the relationship between body mass index (BMI) (calculated as weight in kilograms divided by height in meters squared) and the number and types of workers' compensation claims, associated costs, and lost workdays. Methods: Retrospective cohort study. Participants included 11,728 health care and university employees (34,858 full-time equivalents [FTEs]) with at least 1 health risk appraisal between January 1, 1997, and December 31, 2004. The main outcome measures were stratified rates of workers' compensation claims, associated costs, and lost workdays, calculated by BMI, sex, age, race/ethnicity, smoking status, employment duration, and occupational group. The body part affected, nature of the illness or injury, and cause of the illness or injury were also investigated. Multivariate Poisson regression models examined the effects of BMI, controlling for demographic and work-related variables. Results: There was a clear linear relationship between BMI and rate of claims. Employees in obesity class III (BMI ≥40) had 11.65 claims per 100 FTEs, while recommended-weight employees had 5.80; the effect on lost workdays (183.63 vs 14.19 lost workdays per 100 FTEs), medical claims costs ($51,091 vs $7,503 per 100 FTEs), and indemnity claims costs ($59,178 vs $5,396 per 100 FTEs) was even stronger. The claims most strongly affected by BMI were related to the following: lower extremity, wrist or hand, and back (body part affected); pain or inflammation, sprain or strain, and contusion or bruise (nature of the illness or injury); and falls or slips, lifting, and exertion (cause of the illness or injury). The combination of obesity and high-risk occupation was particularly detrimental. Conclusions: Maintaining healthy weight not only is important to workers but should also be a high priority for their employers, given the strong effect of BMI on workers' injuries. Complementing general interventions to make all workplaces safer, work-based programs targeting healthy eating and physical activity should be developed and evaluated. abstract_id: PUBMED:17482108 Obesity and litigation predict workers' compensation costs associated with interbody cage lumbar fusion. Background Context: Results of lumbar fusion surgery have been mixed and procedures are costly. Interbody cage lumbar fusion (ICLF) has been advanced to improve arthrodesis and clinical outcomes; however, little attention has been given to ICLF costs or potential predictors of these expenses. Purpose: To depict medical and compensation costs associated with ICLF in a Utah cohort of patients receiving workers' compensation, as well as to investigate predictors of costs. Study Design/Setting: A retrospective-cohort research design was used, involving completion of presurgical and postsurgical medical record reviews and accrual of medical and compensation costs. Presurgical variables included in a regression model were presurgical spinal pathophysiology rating, obesity, and litigation status. Patient Sample: Forty-three consecutive patients who were compensated by the Workers' Compensation Fund of Utah and underwent ICLF. Outcome Measures: Total accrued compensation and medical costs. Methods: A retrospective review of presurgical variables and total accrued compensation and medical costs was conducted.
Results: Multiple regression analysis indicated that nonpathophysiological factors predicted compensation costs (lawyer involvement [beta=0.40]; obesity [beta=0.34]). Specifically, compensation for those with versus without lawyers was $41,657 versus $24,837, and for those who were obese versus nonobese was $46,152 versus $28,168. Arthrodesis was correlated with medical costs (r=-0.47, p=.002), with incurred costs for patients achieving solid fusion versus pseudarthrosis equaling $38,881 versus $71,655, respectively. Conclusions: Considerable costs were associated with ICLF, particularly for those who were obese, involved in litigation, or failed to achieve solid fusion. With regard to compensation costs, the findings support the importance of assessing nonpathophysiological factors in spinal fusion patients. abstract_id: PUBMED:1833494 Disabling low back Oregon workers' compensation claims. Part III: Diagnostic and treatment procedures and associated costs. Claimants in Oregon with disabling low back injuries attending chiropractors were found to have more treatments over a longer duration and at greater cost than claimants attending medical physicians with similar clinical presentations. These findings are attributed to: a) a higher proportion of chiropractic claimants than medical physician claimants with low back risk factors which may have adversely affected the course of recovery (chronic or recurrent low back conditions, obesity, extremity symptomatology, frequency of exacerbations); b) differences in age and gender of DC and MD claimants; c) the greater physician-patient contact hours characteristic of chiropractic practice; d) differences in therapeutic modalities employed; and e) the physician reimbursement permitted under Oregon workers' compensation law. The findings of this study emphasize the need for prospective studies of treatment outcome. abstract_id: PUBMED:20858984 Weight gain and work comp: A growing problem in the workers' compensation rehabilitation system. Objective: Obesity has become a major public health concern in the United States, and has ultimately affected occupational health, including Workers' Compensation. Obesity has been determined to contribute to work-related injury. This analysis examined weight change post-injury during the Workers' Compensation rehabilitation process, and specifically how it relates to perceived mental health need, perceived dietary habits, food stamp usage, and the amount of time since the work injury. Methods: Archival data analysis was conducted examining Workers' Compensation claimants over a four-year period. Participants: Data were collected from 1,864 valid Workers' Compensation claims and analyzed using descriptive statistics and a backwards-elimination multiple regression analysis to examine predictive relationships between the variables of interest. Results: Approximately 2/3 of the sample reported weight gain post-injury, with the remaining 1/3 reporting weight loss or no change. Both perceived benefit from mental health services and perceived healthy dietary habits were significantly predictive of weight gain. Conclusions: Clinical psychological distress, such as depression and anxiety, may be a contributory factor to weight gain, and poor perceptions of food quality and caloric estimation may also relate to this problem. Suggestions for future research, including intervention studies, are also included.
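The Duke surveillance study above reports claim rates per 100 FTEs stratified by BMI class, together with multivariate Poisson regression. Below is a minimal sketch of that analysis pattern; the data frame and its column names are hypothetical toy values invented for illustration, not the study's records.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical employee-level data: claim counts with FTE exposure.
    df = pd.DataFrame({
        "claims": [0, 1, 0, 2, 1, 3, 0, 4],
        "fte": [1.0, 1.0, 0.5, 1.0, 0.8, 1.0, 1.0, 0.9],
        "bmi_class": ["normal", "normal", "overweight", "overweight",
                      "obese", "obese", "normal", "obese"],
    })

    # Stratified rate per 100 FTEs, as in the descriptive tables.
    grouped = df.groupby("bmi_class")
    print(100 * grouped["claims"].sum() / grouped["fte"].sum())

    # Poisson regression of claim counts with a log(FTE) offset, mirroring
    # the multivariate models (BMI class as the only covariate here).
    fit = smf.glm("claims ~ C(bmi_class, Treatment('normal'))", data=df,
                  family=sm.families.Poisson(), offset=np.log(df["fte"])).fit()
    print(np.exp(fit.params))  # rate ratios relative to normal weight

A real analysis would add the demographic and work-related covariates the authors adjusted for; the offset term is what converts raw counts into rates per unit of FTE exposure.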
abstract_id: PUBMED:31731886 Return to Work After Autologous Chondrocyte Implantation of the Knee in Patients with Workers Compensation Insurance. Objective: This retrospective analysis aimed to evaluate the influence of patient and defect characteristics on return to work after autologous chondrocyte implantation (ACI) in all patients with workers compensation insurance treated in our clinic. Methods: Retrospective analysis of medical records of patients with workers compensation treated with ACI in our clinic over 10 years (August 2004 to November 2014). Data were collected on demographics, lesion size and location, number of defects, duration of symptoms, and outcomes. Results: A total of 28 patients with 30 ACI procedures were included in this analysis. Patients had a mean age of 40.0 years, were mostly male, and tended to be overweight (mean body mass index = 31.5 kg/m2). Most patients were employed in occupations with high physical demand (85.7%) compared with lower physical demand jobs (14.3%). The mean number of lesions per knee was 1.73. Overall, the mean total surface area of the defect was large (12.6 cm2), with a large mean individual lesion size of 7.39 cm2. All 28 patients returned to work in some capacity, with a total of 22 (79%) returning to full work and 6 (21%) returning to modified work. There were no significant differences between patients who returned to full versus modified work in mean age, body mass index, high versus low physical demand occupation, or lesion characteristics (size, chronicity, or location). Conclusions: Our results demonstrate that in a challenging workers compensation cohort, in patients with multiple large defects, treatment with ACI allows a high percentage of patients to return to their normal vigorous work activities within 9 months to 1 year following cartilage restoration. This study is a level IV retrospective case series. abstract_id: PUBMED:9759812 Total knee arthroplasty in patients receiving Workers' Compensation. The poor outcomes in patients who have a low-back injury that was sustained while they were on the job have been well described in many studies. The purpose of the current study was to determine the influence of Workers' Compensation on the outcome of total knee arthroplasty in forty-two patients who had been managed between January 1980 and December 1993. There were thirty-two men and ten women, and the mean age at the time of the operation was forty-eight years (range, twenty-nine to sixty-eight years). These patients were directly matched with a group of forty-two patients who were not receiving compensation. The two groups were matched with regard to nine parameters: age, gender, obesity index, preoperative deformity in the coronal plane, preoperative level of symptoms, preoperative radiographic severity according to the criteria of Ahlbäck, method of fixation, number of previous procedures, and duration of follow-up. After a mean duration of follow-up of eighty months (range, forty-eight to 178 months), the patients who were receiving compensation had a mean Knee Society score of 64 points (range, 25 to 100 points). Twelve (29 per cent) of the patients in this group had an excellent or good clinical result, and thirty (71 per cent) had a fair or poor result or had had a revision. The patients who were not receiving compensation had a mean Knee Society score of 93 points (range, 57 to 100 points) after a similar duration of follow-up.
Thirty-seven patients (88 per cent) in this group had an excellent or good clinical result, and five (12 per cent) had a fair or poor result or had had a revision; the difference between the two groups with regard to fair or poor results and revisions was significant (p < 0.01). With the numbers available, no significant differences could be detected between the two groups with regard to objective measurements of range of motion and stability or with regard to radiographic alignment, the presence of radiolucent lines, or the shedding of beads. On the basis of our findings, we believe that surgeons should be aware that Workers' Compensation is one of several variables that may have an untoward influence on the perceived outcome of total knee arthroplasty. abstract_id: PUBMED:16966958 Underreporting of work-related injury or illness to workers' compensation: individual and industry factors. Objective: We quantified the underreporting of work-related injury or illness to workers' compensation (WC). Methods: Using data from 2612 wage-earning respondents who participated in the 2002 Washington State Behavioral Risk Factor Surveillance System, we assessed work-related injury or illness in the previous year and identified the factors associated with WC claim filing by logistic regression. Results: The self-reported rate of work-related injury or illness of respondents was 13%. Among those who had a work-related injury or illness, 52% filed a WC claim. After adjustment for age, gender, and race, those who filed WC claims were more likely to be overweight and married. WC claim filing varies considerably across industry and occupation groups holding all other measured factors constant. Conclusions: Individual and industry/occupation factors are related to underreporting of work-related injury or illness to the WC system. Answer: Yes, obesity is associated with adverse workers' compensation claims outcomes. Several studies have found that obesity can lead to increased costs, more lost workdays, and higher rates of workers' compensation claims. For instance, a study with 2301 lost time workers' compensation claims found that after adjusting for various factors, overweight or obese claimants had significantly higher odds of incurring claim expenses of at least $100,000 following a severe injury compared to normal weight claimants (PUBMED:27608149). Similarly, a pilot study with 1107 workers indicated that obesity was associated with increased time lost from work and higher costs among claimants sustaining severe injuries (PUBMED:26147547). Further research using the North Carolina League of Municipalities Database, which included 3951 workers' compensation claimants, showed that higher BMI levels were related to more lost workdays, indemnity costs, and total costs (PUBMED:26340280). Another study involving 11,728 health care and university employees found a clear linear relationship between BMI and the rate of workers' compensation claims, with those in the highest obesity class having significantly more claims and associated costs (PUBMED:17452538). Moreover, a study focusing on interbody cage lumbar fusion in a Utah cohort of workers' compensation patients found that obesity was a predictor of higher compensation costs (PUBMED:17482108). 
Even in the context of specific treatments like autologous chondrocyte implantation of the knee, patients with workers' compensation insurance who were mostly overweight returned to work, but the study did not directly address the impact of obesity on outcomes (PUBMED:31731886). Overall, the evidence suggests that obesity is a significant factor associated with worse outcomes in workers' compensation claims, including higher costs and longer time away from work.
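For readers who want to see how odds ratios like those cited in this answer are derived, here is a minimal unadjusted sketch from a hypothetical 2x2 table. The published estimates were adjusted via regression models, so the counts and the resulting OR below are purely illustrative.

    import math

    # Hypothetical 2x2 table: rows = exposure, columns = outcome.
    # Outcome: claim expense of at least $100,000 after a severe injury.
    a, b = 24, 76   # obese claimants: with outcome, without outcome
    c, d = 10, 90   # normal-weight claimants: with outcome, without outcome

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Wald standard error
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")

An OR above 1 with a confidence interval excluding 1, as in the studies above, indicates that the exposed group has significantly higher odds of the adverse outcome.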
Instruction: Assessment of pubovisceral muscle defects and levator hiatal dimensions in women with faecal incontinence after vaginal delivery: is there a correlation with severity of symptoms? Abstracts: abstract_id: PUBMED:25110122 Assessment of pubovisceral muscle defects and levator hiatal dimensions in women with faecal incontinence after vaginal delivery: is there a correlation with severity of symptoms? Aim: We assessed pubovisceral muscle (PVM) defects, levator hiatal dimensions and anal sphincter defects using three-dimensional (3D) endovaginal and anorectal ultrasonography in women with previous vaginal delivery and faecal incontinence to determine the relationship between anatomic/functional findings and severity of faecal incontinence symptoms. Method: This was a prospective, observational study including 52 women with faecal incontinence symptoms who had undergone vaginal delivery. Asymptomatic nulliparous women (n = 17) served as controls to provide reference values for pelvic floor measurements. All participants underwent 3D endovaginal and anorectal ultrasonography. We used an ultrasound score to identify and quantify the extent of PVM defects and sphincter damage and to measure levator hiatal dimensions. Incontinence was assessed using the Cleveland Clinic Florida Incontinence Scoring System. Results: Defects of the PVM were identified with 3D endovaginal ultrasonography in 27% of women with faecal incontinence who had undergone vaginal delivery. The incontinence score and the ultrasound score were significantly higher in women with a PVM defect. A significant, positive correlation was found between the incontinence score and the ultrasound score. The levator hiatal dimensions were significantly greater, and the positions of the anorectal junction and bladder neck were lower, in women who had undergone vaginal delivery than in nulliparous women. Conclusion: As determined by the 3D ultrasound score, severity of incontinence is related to the extent of damage of the PVM, as well as of the anal sphincters. Additionally, vaginal delivery results in enlargement of the levator hiatus and a lower position of the anorectal junction and bladder neck compared with nulliparous women. abstract_id: PUBMED:30478651 Pubovisceral muscle and anal sphincter defects in women with fecal or urinary incontinence after vaginal delivery. Background: Vaginal delivery is the most frequent cause of direct anal sphincter trauma as well as pelvic floor muscle defects in women with corresponding signs and symptoms. The aim of the present study was to identify anatomical and functional abnormalities of the anal canal and pelvic floor in women who had had a vaginal delivery and determine the relationship between such abnormalities and the symptoms and severity of fecal incontinence (FI). Methods: Consecutive female patients with symptoms of fecal and/or urinary incontinence were recruited through the colorectal and gynecological outpatient clinics at two large university hospitals and were eligible if they had had at least one vaginal delivery. All women were assessed for symptoms of FI by means of the Cleveland Clinic Florida Incontinence Scale (CCFIS) and for urinary incontinence symptoms, including the presence of complaints of any involuntary leakage of urine, leakage on exertion, sneezing, or coughing, and/or leaking or losing urine associated with an urge to urinate. All women underwent anorectal and endovaginal three-dimensional ultrasonography and anal manometry.
The extent of the anal sphincter and PVM defects identified by ultrasound was scored from 1 to 6 based on the longitudinal involvement of the external and internal anal sphincters, the radial angle of the anterior external anal sphincter defect, and the longitudinal involvement of the PVM. Results: There were 130 women, and 89 (68%) had at least one defect of the anal sphincter, the pubovisceral muscle, or both: 42 (32%) had a pubovisceral muscle defect with or without sphincter defects, and 47 (36%) had an intact pubovisceral muscle but a sphincter defect; 41 (32%) had intact anal sphincters and pubovisceral muscles. The mean levator hiatus area at rest in women with anal sphincter and/or pubovisceral muscle defects was 18 cm2 (± 4 SD), which was significantly greater than in women with no defects (16 ± 3 SD; p = 0.01). Women with PVM defects had significantly higher ultrasound scores (median 4, range 1-10, vs median 2, range 2-5, in women with intact muscles), indicating more extensive defects (p = 0.001). Bivariate analysis revealed a positive association (p < 0.05) between increasing FI symptom severity (CCFIS score) and PVM defects (ρ = 0.6913). Within the group of women with defects, mean maximum anal squeeze pressure was significantly lower in women with a PVM defect (mean 73 ± 34 SD mmHg vs mean 93 ± 38 SD; p = 0.04). Women with PVM defects had significantly higher median CCFIS scores (median 7, range 0-16) compared with women with intact PVM (median 4, range 0-10) (p < 0.001). There was a significant positive correlation between the CCFIS and ultrasound scores (ρ = 0.625; p < 0.001). Bivariate analysis revealed negative correlations between the CCFIS score and the lengths of the anterior EAS (ρ = -0.5621, p < 0.001) and IAS (ρ = -0.40, p < 0.001), and a positive correlation with the area of the levator hiatus (ρ = 0.5211, p = 0.001). However, no significant correlations were observed between CCFIS scores and the gap measurement (ρ = 0.101; p = 0.253) or the resting (ρ = -0.08, p = 0.54) or squeeze pressure (ρ = -0.12; p = 0.34) values on anal manometry. The variables associated with worsening FI symptom severity (CCFIS score) that remained significant in multiple linear regression were shorter lengths of the anterior EAS and/or anterior IAS and an increased area of the levator hiatus. Conclusions: The study data demonstrate that half of the women had combined defects of the PVM and sphincter. There were correlations between anatomical abnormalities, including anal sphincter and/or pubovisceral muscle defects, and decreased anal pressures and increased severity of FI. abstract_id: PUBMED:28035460 Usefulness of anorectal and endovaginal 3D ultrasound in the evaluation of sphincter and pubovisceral muscle defects using a new scoring system in women with fecal incontinence after vaginal delivery. Purpose: This study aims to evaluate pubovisceral muscle and anal sphincter defects in women with previous vaginal delivery and fecal incontinence and to correlate the findings with the severity of symptoms using combined anorectal and endovaginal 3D ultrasonography with a new ultrasound scoring system. Methods: Consecutive female patients with previous vaginal delivery and fecal incontinence symptoms were screened. Fecal incontinence was assessed with the Cleveland Clinic Florida fecal incontinence scale, and the extent of defects was assessed by an ultrasound score based on results of anorectal and endovaginal 3D ultrasound.
Results: Of 84 women with previous vaginal delivery and fecal incontinence, 21 (25%) had intact pubovisceral muscles and anal sphincters; 63 (75%) had a pubovisceral muscle or anal sphincter defect, or both. Twenty-eight (33%) had a pubovisceral muscle defect [23% with an external anal sphincter (EAS) defect or combined EAS/internal anal sphincter defects; 11% with intact anal sphincters]. Thirty-five (42%) had intact pubovisceral muscles and an anal sphincter defect. Compared with women with intact pubovisceral muscles/anal sphincter defects, patients with pubovisceral muscle defects had significantly higher incontinence scores and significantly higher ultrasound scores, indicating more extensive defects. Incontinence symptoms correlated positively with the ultrasound score, measurements of sphincter defects, and area of the levator hiatus. Conclusions: Evaluation of both the pubovisceral muscles and the anal sphincters is important to identify defects and determine treatment for women with fecal incontinence after vaginal delivery. The severity of fecal incontinence symptoms is significantly related to the extent of defects of the pubovisceral muscles and anal sphincters. abstract_id: PUBMED:23220507 Correlating signs and symptoms with pubovisceral muscle avulsions on magnetic resonance imaging. Objective: We sought to correlate signs and symptoms of pelvic organ prolapse (POP) with pubovisceral muscle avulsions on magnetic resonance imaging (MRI). Study Design: In this retrospective cohort study of 189 women with recurrent POP or unexplained symptoms of pelvic floor dysfunction, we reviewed T2-weighted pelvic floor MRI and categorized defects as minor or major avulsion, or as no defect present. Outcomes were correlated to quality-of-life questionnaire scores and data on obstetric and surgical history, together with POP-Quantification (POP-Q) measurements. Multivariable ordinal logistic regression analysis with manual backward elimination was applied to calculate odds ratios (ORs). Results: Major pubovisceral avulsions were diagnosed in 83 (44%) women, minor avulsions in 49 (26%) women, while no defects were seen in 57 (30%) women. Women with a history of episiotomy or anterior vaginal wall reconstructive surgery had a higher OR for more severe pubovisceral muscle avulsions (adjusted OR, 3.77 and 3.29, respectively), as did women with symptoms of POP (OR, 1.01, per unit increase) or higher stage POP of the central vaginal compartment based on POP-Q measurement "C" (OR, 1.18). Women with symptoms of obstructive defecation were more likely to have no defect of the pubovisceral muscle on MRI (OR, 0.97, per unit increase). Conclusion: The variables episiotomy, previous anterior vaginal wall reconstructive surgery, POP-Q measurement "C," and symptoms scored with the Urogenital Distress Inventory "genital prolapse" and Defecatory Distress Inventory "obstructive defecation" subscales are correlated with pubovisceral muscle avulsions on pelvic floor MRI. abstract_id: PUBMED:25544728 Anatomy of the levator ani muscle and implications for obstetrics and gynaecology Pelvic floor disorders include urogenital and anorectal prolapse, urinary and faecal incontinence. These diseases affect 25% of patients. Most of the time, treatment is primarily surgical, with a high post-operative risk of recurrence, especially for pelvic organ prolapse.
Vaginal delivery is the major risk factor for pelvic floor disorders, through levator ani muscle injury or nerve damage. After vaginal delivery, 20% of patients experience levator ani trauma. These injuries are more common in cases of instrumental delivery by forceps, prolonged second stage of labour, increased neonatal head circumference and associated anal sphincter injuries. Moreover, 25% of patients have temporary perineal neuropathy. Recently, pelvic three-dimensional reconstructions from MRI data have allowed a better understanding of detailed levator ani muscle morphology and have allowed a clear new nomenclature describing this muscle complex to be developed. Radiologic and anatomic studies have allowed exploration of levator ani innervation, leading to speculation on the mechanisms of muscle and nerve damage during delivery. We therefore reviewed levator ani muscle anatomy and innervation to better understand the pelvic floor dysfunction observed after vaginal delivery. abstract_id: PUBMED:30207014 Levator ani muscle morphology and function in women with obstetric anal sphincter injury. Objectives: To estimate the prevalence of, and explore the risk factors for, levator ani muscle (LAM) injury in women with clinically diagnosed obstetric anal sphincter injury (OASI). The secondary aim was to assess the association between LAM injury and pelvic floor muscle contraction, anal incontinence (AI) and urinary incontinence (UI) in women with OASI. Methods: This was a cross-sectional study of 250 women with OASI, recruited between 2013 and 2015 from a tertiary referral center at Croydon University Hospital, UK. AI symptoms were assessed using the modified St Mark's incontinence score and UI was assessed using the International Consultation on Incontinence modular Questionnaire for Urinary Incontinence - Short Form. All participants underwent three/four-dimensional transperineal ultrasound at rest and on maximum pelvic floor muscle contraction. Major LAM injury was defined as a unilateral or bilateral defect in all three central slices on tomographic ultrasound imaging. Muscle contraction was assessed using the modified Oxford scale (MOS) and measured on ultrasound as the proportional change in the anteroposterior (AP) levator hiatal diameter between rest and contraction. Multivariable logistic regression analysis was used to study risk factors for LAM injury. Differences in contraction and AI and UI symptoms between women with intact and those with injured LAM were studied using multivariable ANCOVA and the Mann-Whitney U-test. Results: Of the 248 women with OASI for whom ultrasound volumes of adequate quality were available, 29.4% were found to have major LAM injury. The prevalence of LAM injury was 23.6% after normal vaginal delivery and 40.2% after operative vaginal delivery (adjusted odds ratio, 4.1 (95% CI, 1.4-11.9); P = 0.01). LAM injury was associated with weaker pelvic floor muscle contraction, with an adjusted mean difference for proportional change in AP diameter of 5.0 (95% CI, 3.0-6.9) and MOS of 0.6 (95% CI, 0.3-0.9) (P < 0.001 for both). AI and UI symptom scores were similar between women with intact and those with injured LAM. Conclusions: Operative vaginal delivery was a risk factor for LAM injury in women with OASI. LAM injury was associated with weaker pelvic floor muscle contraction. Special attention is recommended for women with OASI and LAM injury, as they are at high risk for future pelvic floor disorders.
The benefits of implementing an intensive, focused and structured pelvic floor rehabilitation program need to be evaluated in these women. abstract_id: PUBMED:21086012 Rectal intussusception is associated with abnormal levator ani muscle structure and morphometry. Background: Anorectal symptoms are common in urogynaecological patients, and so are anatomical abnormalities of the anorectum associated with such symptoms. One such abnormality is rectal intussusception (RI). The aim of this retrospective study was to determine the prevalence of RI in a tertiary urogynaecological population and to describe the associated symptoms, signs and ultrasound findings, in particular those relating to pelvic floor function and anatomy. It was hypothesized that RI is associated with abnormal levator ani muscle anatomy and function. Methods: The electronic records and volume imaging datasets of 967 women who presented to a tertiary urogynaecological clinic between May 2005 and March 2009 were analysed. Rectal intussusception was diagnosed on translabial ultrasound and findings were analysed against symptoms and hiatal area in the levator ani muscle. Results: Rectal intussusception was found in 38 women (3.9%) and was more prevalent with increasing age (p = 0.014) and vaginal parity (p = 0.037). It was associated with symptoms of prolapse (p = 0.005), incomplete bowel emptying (p < 0.001), vaginal digitation (p < 0.001), and faecal incontinence (p = 0.022). RI was more common in patients with a clinical diagnosis of enterocele (p = 0.03) and rectocele (p = 0.002). On imaging, RI was associated with an increased hiatal area on Valsalva (p < 0.001) and levator ani avulsion (p = 0.003). Conclusion: Rectal intussusception is not an uncommon finding in urogynaecological patients. While often asymptomatic, it is associated with symptoms of vaginal prolapse, incomplete bowel emptying, vaginal digitation and faecal incontinence, and with abnormal levator ani structure and morphometry. abstract_id: PUBMED:24548759 The relationship between postpartum levator ani muscle avulsion and signs and symptoms of pelvic floor dysfunction. Objective: To establish the relationship between postpartum levator ani muscle (LAM) avulsion and signs and/or symptoms of pelvic floor dysfunction (PFD). Design: Observational longitudinal cohort study. Setting: District General University Hospital, UK. Population Or Sample: Primigravidae at 36 weeks' gestation and 3 months postpartum. Methods: Pelvic floor muscle strength (PFMS) and pelvic organ prolapse were assessed clinically using validated methods. Transperineal ultrasound was performed to identify LAM avulsion and measure hiatus dimensions. Validated questionnaires evaluated sexual function, urinary and faecal incontinence. Main Outcome Measures: PFD signs and symptoms related to LAM avulsion. Results: Two hundred and sixty-nine primigravidae without LAM avulsion participated and 71% (n = 191) returned postpartum. LAM avulsion was found in 21% of vaginal deliveries (n = 30, 95% CI 15.1-28.4%). Women with minor and major avulsion had worse PFMS (P < 0.038) and more anterior compartment prolapse (maximum stage 2; P < 0.024). Antenatal hiatus antero-posterior diameter on ultrasound was significantly smaller in women sustaining avulsion (P = 0.011). Postnatal measurements were significantly increased following avulsion. Women with major avulsion were less sexually active at both antenatal and postnatal periods (P < 0.030).
These women had more postnatal urinary incontinence and symptoms such as reduced vaginal sensation and a 'too loose vagina'. No postnatal differences were found for faecal incontinence, prolapse symptoms or quality of life. Correlations between these variables and avulsion severity were only slight to fair. Conclusions: Twenty-one percent of women sustain LAM avulsion during their first vaginal delivery, with a significant impact on signs and symptoms of PFD. As avulsion has been described as the missing link in the development of prolapse, longer-term follow-up is vital. abstract_id: PUBMED:36244896 Impact of obstetric lesions of the levator ani on anal continence Introduction: Vaginal delivery induces avulsion-type muscle lesions and minor stretching of the levator ani muscle (LAM). Sharing the same risk factors as obstetric anal sphincter injury (OASIS), the two can be concomitant. OASIS is the leading risk factor for anal incontinence (AI) in women. The role of levator lesions in the genesis of AI is controversial. Objective: This is a systematic review without meta-analysis aimed at clarifying the impact of LA lesions on anal continence. Method: Following PRISMA recommendations, a bibliographic search (2000-2022) was carried out on PubMed with the keywords: "levator ani" "anal incontinence" "obstetric trauma" and on Science Direct with the keywords "levator ani", "avulsion", "anal incontinence". One hundred and eighty articles were identified, of which 16 were selected. The level of evidence was determined using the Newcastle-Ottawa Scale. Results: According to the different authors, the prevalence of avulsion varied from 15% to 32.2% among women who gave birth vaginally, and women with LA lesions presented more OASIS (29.4% to 37.5% of patients with OASIS also had an LA lesion). For AI, LA avulsions are not an independent risk factor but are, in the presence of OASIS, an aggravating factor (OR, 23.3, 95% CI, 2.0-267.6). Double lesions carry a greater risk of long-term AI (P<0.001). Conclusion: LA lesions are not an independent factor in AI, although their prevalence is higher in the population of anally incontinent women; they are more frequent in women with OASIS, and in this population they are an aggravating factor for AI and a risk factor for long-term AI. abstract_id: PUBMED:34297244 Levator plate descent angle in pelvic floor disorders. Background: The levator plate descent angle (LPDA) quantifies the levator plate position with reference to the pubic bone and perineal body at rest. Unfortunately, research on this notable new parameter is lacking, but it is clear that levator ani deficiency (LAD) will undermine the fundamental role of the levator ani muscle (LAM) in organ support. The aim of this study was to establish the relationship between the LPDA and LAD in patients with pelvic floor disorders. Methods: This retrospective study was conducted at Seoul Songdo Hospital, Korea between August 2019 and August 2020 on women with symptoms of pelvic floor disorder such as urinary incontinence, constipation, and fecal incontinence. In all cases, three-dimensional pelvic floor ultrasound was performed for LAD scoring, minimal levator hiatus, and LPDA evaluation. We evaluated LAD using a scoring system that graded levator injury according to the insertion point of each subdivision, scored unilaterally. For the entire LAM, a cumulative LAD score ranging between 0 and 18 was possible.
Scores were categorized as mild (0-6 points), moderate (7-12 points), and severe (13-18 points) deficiency. Results: A total of 93 patients were included in the study (mean age 65.89 ± 11.12 [range, 34-86] years). Thirteen participants had mild LAD scores (14.0%), 42 had moderate LAD scores (45.2%), and 38 had severe LAD scores (40.9%). There was a significant difference in mean age (59.23 ± 12.55 vs 64.43 ± 10.03 vs 69.79 ± 10.55 years, p = 0.005) and mean parity (1.85 ± 0.90 vs 2.48 ± 1.15 vs 2.76 ± 1.10, p = 0.038) between the groups. There was also a significant difference in the mean Wexner incontinence score (7.14 ± 3.63 vs 7.24 ± 5.76 vs 11.41 ± 5.54, p = 0.028) and in the mean fecal incontinence quality of life (FIQOL) score (12.91 ± 3.11 vs 14.10 ± 3.87 vs 10.41 ± 3.65, p = 0.014). The mean value of the LPDA in the group with mild LAD scores was 14.65° (SD ± 3.54) and in the group with moderate LAD scores was 9.66° (SD ± 3.36). In the group with severe LAD scores, the mean LPDA was 1.83° (SD ± 4.71). The mean minimal levator hiatus (MLH) area in the mild LAD score group was 14.16 cm2 (SD ± 2.72), that in the moderate LAD score group was 15.82 cm2 (SD ± 2.30), and that in the severe LAD score group was 17.99 cm2 (SD ± 2.81). There were significant differences between the three groups both in decreasing LPDA (p < 0.001) and in increasing MLH (p < 0.001). There was a negative correlation between the LAD score and the LPDA (Pearson correlation coefficient -0.528, a moderate correlation). There was a positive correlation between the LAD score and the MLH (Pearson correlation coefficient 0.303, a weak correlation). Conclusions: The LAD score and LPDA have a moderate negative correlation. In patients with severe pelvic floor symptoms and extensive LAM injury, high LAD scores and low LPDA results were confirmed. In the treatment of patients with pelvic floor disorders, the LPDA seems to be a very useful parameter in determining the severity of structural defects. Answer: Yes, there is a correlation between pubovisceral muscle (PVM) defects, levator hiatal dimensions, and the severity of faecal incontinence symptoms in women after vaginal delivery. Studies have shown that defects of the PVM were identified in a significant proportion of women with faecal incontinence who had undergone vaginal delivery. The severity of incontinence, as assessed by the Cleveland Clinic Florida Incontinence Scoring System, was related to the extent of damage to the PVM as well as the anal sphincters. Additionally, vaginal delivery was found to result in enlargement of the levator hiatus and a lower position of the anorectal junction and bladder neck compared with nulliparous women (PUBMED:25110122).
These findings suggest that assessment of PVM defects and levator hiatal dimensions using three-dimensional ultrasonography can be valuable in understanding the relationship between anatomic changes and the severity of faecal incontinence symptoms in women post-vaginal delivery.
Instruction: Does a groin node dissection in vulvar cancer affect groin recurrence and overall survival? Abstracts: abstract_id: PUBMED:27865454 Does a groin node dissection in vulvar cancer affect groin recurrence and overall survival?: Results from a population-based cohort study. Background: To determine, in a population-based cohort of vulvar cancer patients, if groin node dissection (GND) decreases the risk of groin recurrence and increases overall survival. Methods: This population-based retrospective cohort study includes all cases of invasive squamous cell carcinoma identified in a provincial cancer registry from 1998 to 2007. Data collection was completed for all clinical and pathologic factors by chart abstraction. Cumulative incidence functions for recurrence were estimated, accounting for death before recurrence as a competing risk. Multivariable Cox regression models examined the associations between GND and groin recurrence, and overall survival. Results: Clinical and pathologic data were collected for 1109 patients, of which 1038 patients were eligible for GND. 647 patients (62%) had a GND, while 391 patients (38%) did not. Median follow-up was 2.8years. Cumulative incidence plots demonstrate that the risk of death without recurrence was consistently higher than groin recurrence in each year after diagnosis. On multivariate analysis, GND was not significantly associated with decreased groin recurrence (HR 0.91, 95% CI 0.58-1.44, p=0.70). The hazard of death was 15% lower for women who received GND (HR 0.85, 95% CI 0.63-1.16, p=0.32), but this difference was not statistically significant. Conclusions: There was no significant difference in groin recurrence or overall survival in those with or without GND in this population-based cohort, raising questions whether a subgroup of patients may not benefit from GND. Patients had a higher probability of dying before groin recurrence could occur. Future trial design should consider death as a competing risk. abstract_id: PUBMED:23932748 Groin recurrence in patients with early vulvar cancer following superficial inguinal node dissection. Objective: To investigate the causes of groin recurrence in patients with vulval cancer who previously had negative nodes following superficial inguinal node dissection (SIND). Material And Methods: Forty-one patients with squamous cell carcinoma of the vulva (stage I or II) were operated upon. The primary treatment was wide local excision with 2 cm safety margin and superficial inguinal lymphadenectomy. Six patients had ipsilateral and one patient had bilateral groin recurrence. Those patients were subjected to deep inguinal node dissection (one patient required bilateral node dissection). Results: The mean age at time of diagnosis was 59 years (range 51-68). The median follow-up period for all patients was 63 months (range 24-71) and that of the recurrent cases was 20 months (range 12-38). The mean depth of invasion of the recurrent cases was 5.5mm (range 5-5.9 mm) and the mean diameter of the primary tumor in recurrent cases was 3.8 cm (range 3-4.5 cm). All recurrent cases had a high grade of the primary tumor. The median interval to recurrence was 21 months (range 12-57). The groin recurrence rate after negative SIND was 17% (7/41 patients).The mean number of nodes resected per groin was eight (range 1-17). The nodes ranged in size from 0.2 to 4.0 cm. 
Conclusion: Vulvar carcinoma with the following criteria (tumor size greater than 3 cm, depth of invasion greater than 5 mm, and high tumor grade) is at high risk of recurrence. abstract_id: PUBMED:1427407 Morbidity and recurrence with modifications of radical vulvectomy and groin dissection. Vulvar carcinoma has been managed in recent years with modifications of radical vulvectomy and groin dissection. Separate groin incisions, superficial inguinal lymphadenectomy, unilateral groin dissection, and wide excision have been utilized to reduce the morbidity of treatment. In this study, the surgical management of 82 patients with vulvar squamous cell carcinoma was reviewed in order to assess morbidity and risk of recurrence. A modification of radical vulvectomy and groin dissection was employed in 67 patients, while 15 patients underwent classical en-bloc vulvar and groin dissection. Wound complications of the vulva occurred in 1 of 12 patients undergoing hemivulvectomy, in 8 of 55 undergoing radical vulvectomy, and in 7 of 15 who had en-bloc vulvar resection and groin dissection (P = 0.01). Among the 46 patients undergoing bilateral groin dissection through separate incisions, groin breakdown, lymphocyst, and lymphedema occurred in 10 (22%), 7 (15%), and 7 (15%), versus 0, 1 (7%), and 2 (13%) of the 15 who had unilateral groin dissection. Modification of vulvar resection did not increase the risk of local recurrence. Groin recurrence developed in 2 of 15 patients who underwent en-bloc groin dissection and in 1 of 46 who underwent bilateral groin dissection through separate incisions. Two of 15 who had a unilateral groin dissection recurred in the contralateral groin. The risk of recurrence as well as morbidity following modifications of radical vulvectomy with groin dissection should be considered when planning treatment. abstract_id: PUBMED:16398235 Report of two cases of controlateral groin recurrence after ipsilateral groin node dissection for vulval cancer. Among the more conservative management strategies intended to individualize the surgical treatment of vulval cancer, ipsilateral groin dissection is proposed for T1-T2 lateral lesions. Since patients found negative for metastatic nodes in the ipsilateral groin who develop unexpected recurrences in the contralateral, non-dissected groin have a poor outcome, it is useful to report such cases to better evaluate the safety of these less aggressive surgical procedures. abstract_id: PUBMED:3988139 Radical vulvectomy and bilateral groin lymphadenectomy utilizing separate groin incisions: report of a case with recurrence in the intervening skin bridge. Postoperative wound breakdown is very common following the en bloc dissection of the vulva and inguinal/femoral lymph nodes for carcinoma of the vulva. To decrease the incidence of wound morbidity, techniques have been described for performing the inguinal/femoral lymphadenectomy through separate groin incisions. This approach leaves a bridge of tissue between the vulvar excision and the lymph node dissection. A case of stage I squamous cell carcinoma of the vulva that was treated with a radical vulvectomy and bilateral inguinal/femoral lymphadenectomy utilizing separate groin incisions is presented. This patient later developed a recurrence in the tissue bridge between the vulvar and groin excisions. The mechanism for this recurrence is discussed. abstract_id: PUBMED:21061790 Isolated groin recurrence in vulval squamous cell cancer (VSCC). The importance of node count.
Objective: To determine whether there is a node count which can define an adequate inguinofemoral lymphadenectomy (IFL) in primary VSCC. Methods: A retrospective and prospective review of patients with node-negative VSCC who had a full staging IFL. Detection of isolated groin recurrences (IGR) would allow groins with a higher risk of groin recurrence to be identified. Results: The median node count of 228 IFLs in 139 patients was eight (0-24). There were six IGRs (4.3%). An increased rate of IGR was present in patients with increased age, tumour diameter and depth of invasion, lymphovascular space invasion, unilateral IFL, and moderate/poor tumour grade. In the 138 groins with node counts of eight or greater there were no IGRs, compared to six in the patients with either undissected groins or groin node counts less than eight (p = 0.030). The interval to IGR was significantly shorter than for other sites of recurrence. Both disease-specific and overall survival were significantly reduced in patients with IGR. Conclusions: An inadequate IFL is a nodal count of less than eight per groin; both these groins and undissected groins are at increased risk of IGR and should have close surveillance. abstract_id: PUBMED:26432039 Groin surgery and risk of recurrence in lymph node positive patients with vulvar squamous cell carcinoma. Objectives: Treatment of groin metastasis in vulvar squamous cell carcinoma (VSCC) patients consists of surgery, often combined with (chemo)radiotherapy, and is associated with significant morbidity. Our aim was to compare the risk of groin recurrence and morbidity in patients with lymph node positive VSCC after standard full inguinofemoral lymphadenectomy (IFL) versus less radical debulking of clinically involved lymph nodes or removal of sentinel nodes only, followed by radiotherapy. Methods: A retrospective cohort study of 68 patients with primary VSCC and proven lymph node metastasis to the groin(s) was conducted. Patients were divided into three subgroups by type of initial groin surgery (84 groins): sentinel node (SN), IFL, and debulking of clinically involved nodes. Most patients (82%) received adjuvant radiotherapy. Overall survival was analyzed using time-dependent Cox regression. Analysis of morbidity and groin recurrence-free time was performed per groin with the generalized estimating equation model and the Kaplan-Meier method. Results: There was no significant difference in the risk of developing a groin recurrence (SN 25%, debulking 16%, IFL 13%, p=0.495). Despite the fact that more patients received radiotherapy after debulking (90% vs 67%), the complication rate was significantly lower (p=0.003) compared to IFL, especially regarding lymphocysts and lymphedema (p=0.032 and p=0.002, respectively). Conclusions: The risk of groin recurrence was similar in all treatment groups. Debulking of clinically involved lymph nodes was related to a significantly lower risk of complications compared to IFL. These findings support debulking followed by radiotherapy as the preferred treatment for patients with clinically involved lymph nodes. abstract_id: PUBMED:16184731 Rapid groin recurrence of a vulvar carcinoma with invasion of the femoral vessels and the importance of the initial groin dissection in the staging surgery. Background: Carcinoma of the vulva accounts for 3% to 5% of all gynaecological malignancies. The disease spreads from the vulva through embolization to the locoregional lymphatic station, the inguinofemoral nodes.
The single most important prognostic factor is the presence or absence of nodal disease. Case Report: A 56-year-old woman was admitted with a large, centrally necrotic left groin recurrence of a vulvar carcinoma. One year previously, she had undergone a radical vulvectomy without groin dissection for a pT2 vulvar carcinoma. In our institution, the tumour mass was excised. The femoral vein was completely thrombosed and invaded by the neoplastic recurrence. Conclusion: Groin recurrence of vulvar carcinoma is a difficult therapeutic challenge and has a poor prognosis. In order to minimize complications, the inguinofemoral dissection should be performed except in stage 1A (FIGO) with stromal invasion not greater than 1 mm. abstract_id: PUBMED:27870709 Groin Recurrences in Node Negative Vulvar Cancer Patients After Sole Sentinel Lymph Node Dissection. Objective: This study aimed to evaluate the recurrence rates after sole sentinel dissection in vulvar cancer and describe characteristics of groin recurrences. Methods: All vulvar cancer cases between 2008 and 2014 were reviewed. Inclusion criteria were restricted to lymph node-negative patients, sole sentinel lymph node dissection (SLND), and tumor diameter less than 4 cm. In all patients, Tc-99m nanocolloid was used for preoperative SLN imaging. Planar lymphoscintigraphy and single-photon emission computed tomography with computed tomography were performed regularly. Ultrastaging was routinely conducted on all negative lymph nodes. Results: Of 140 vulvar cancer cases, 30 node-negative patients underwent sole SLND and met inclusion criteria. Keratinizing squamous cell carcinoma was determined in final histology in 20/30 (66.7%) patients, and the mainly diagnosed tumor stage was pT1b (21/30, 70%). Three perioperative complications occurred. On average, 4.6 (1-9) SLNs were dissected per patient and 2.5 (1-6) per groin. During a median follow-up of 43.5 (4-75) months, 5/30 (16.7%; 95% confidence interval, 7.3%-33.6%) local recurrences occurred. In addition, 2/30 (6.6%; 95% confidence interval, 1.9%-21.3%) groin recurrences were identified within a period of 12 months after the primary surgery. Both patients had large (>2 cm) midline tumors. Despite surgical as well as adjuvant treatment of the recurrent disease, both patients with groin recurrences died. Conclusions: Sentinel lymph node dissection is a safe and feasible alternative in early vulvar cancer. However, false-negative sentinel nodes carry a high risk of mostly fatal groin recurrences. Midline tumors larger than 2 cm in particular must be treated with caution, because they account for most of the groin recurrences after sole SLND. abstract_id: PUBMED:19832900 Groin recurrence in carcinoma of the vulva: management and outcome. The aim of the study was to investigate the management and outcome of inguinal recurrence in vulvar carcinoma patients. A retrospective chart review was conducted on 140 patients with squamous cell carcinoma of the vulva treated between 1994 and 2006. Twenty-one patients were found to have groin recurrence. The median interval between primary treatment of vulvar cancer and groin recurrence was 7 months. Three patients refused any treatment, 3 received chemotherapy, 2 inguino-pelvic radiotherapy and 13 had resection of the groin recurrence. After surgery seven patients received irradiation of the groin and pelvis, and three patients received chemotherapy.
One patient died following surgery; 19 patients died of disease, with a median survival of 9 months after diagnosis of inguinal recurrence. Only one patient is alive without evidence of disease at 60 months following surgery. In univariate analysis, stage and grade at diagnosis, age and performance status at recurrence, and the extent of residual tumour after resection of groin recurrence were predictors for survival. Groin recurrences from vulvar carcinoma carry a poor prognosis. Multi-modal treatment may result in palliation of the disease, and a very limited number of patients have long-term survival. Answer: The effect of groin node dissection (GND) on groin recurrence and overall survival in vulvar cancer patients has been investigated in several studies, with varying results. One population-based cohort study found no significant difference in groin recurrence or overall survival in patients with or without GND, suggesting that a subgroup of patients may not benefit from GND (PUBMED:27865454). Another study reported a groin recurrence rate of 17% after negative superficial inguinal node dissection (SIND) in patients with early vulvar cancer, indicating that certain tumor characteristics (size greater than 3 cm, depth of invasion greater than 5 mm, and high-grade tumors) are associated with a higher risk of recurrence (PUBMED:23932748). Modifications of radical vulvectomy and groin dissection, such as separate groin incisions and superficial inguinal lymphadenectomy, have been utilized to reduce morbidity without increasing the risk of local recurrence. However, groin recurrence developed in a small number of patients, suggesting that the risk of recurrence as well as morbidity should be considered when planning treatment (PUBMED:1427407). A retrospective and prospective review suggested that an inadequate inguinofemoral lymphadenectomy (IFL) is a nodal count of less than eight per groin, and both these groins and undissected groins are at increased risk of isolated groin recurrence (IGR) (PUBMED:21061790). A study comparing the risk of groin recurrence and morbidity in patients with lymph node positive VSCC after standard full IFL versus less radical debulking of clinically involved lymph nodes or removal of sentinel nodes only followed by radiotherapy found no significant difference in the risk of developing a groin recurrence among the treatment groups. Debulking of clinically involved lymph nodes was related to a significantly lower risk of complications compared to IFL (PUBMED:26432039). In conclusion, the impact of GND on groin recurrence and overall survival in vulvar cancer patients is not definitive. While some studies suggest no significant benefit, others highlight the importance of adequate lymph node dissection and the potential for higher risk of recurrence with certain tumor characteristics. The decision to perform GND should be individualized based on the patient's clinical situation and tumor factors.
Instruction: Does epidural analgesia cause dystocia? Abstracts: abstract_id: PUBMED:21348212 Epidural analgesia in obstetrics: is there an effect on labor and delivery? Background And Objective: Epidural analgesia is routinely used in obstetrics but has been blamed for possible effects on labor that lead to greater use of instruments or conversion to cesarean delivery. We aimed to assess this possibility in a cohort of obstetric patients receiving or not receiving epidural analgesia. Patients And Methods: Prospectively enrolled full-term obstetric patients were distributed into 2 groups according to whether they received epidural analgesia or not. We compared maternal and fetal characteristics, obstetric variables, and type of delivery between groups to record the likely causes of difficult labor and delivery and detect a possible influence of epidural analgesia. Results: Of a total of 602 patients, 462 received epidural analgesia and 140 did not. Epidural analgesia was related to a higher rate of use of instruments but not cesareans (P < .01) and a more frequent need for oxytocin (30.7% of the epidural analgesia group vs 0% of the group receiving no epidural analgesia, P < .001). The women receiving analgesia also had a longer mean (SD) duration of the dilatation phase of labor (6.4 [4.2] hours in the epidural group vs 4.7 [3.5] hours in the no-epidural group, P < .01) and of the expulsion phase (1.0 [0.6] hours vs 0.7 [0.6] hours, respectively; P < .01). We observed no effects on the incidence of tearing, rate of episiotomy, or other variables. Predictors of instrumentation or conversion to cesarean delivery were longer duration of the first phase (odds ratio [OR] 1.2; 95% confidence interval [CI], 1.1-1.3), longer duration of the second phase (OR 2.3; 95% CI, 1.3-3.9), and maternal obesity (OR, 1.1; 95% CI, 0.9-1.2). Previous deliveries and initiation of epidural analgesia after the fetus has reached Hodge's first plane decreased risk 2.7-fold and 3.03-fold, respectively. Conclusions: Although epidural analgesia has traditionally been associated with a higher incidence of difficult labor and delivery, this association was not unequivocally evident in this cohort of patients. The apparent increase seems to be attributable to such obstetric factors as longer duration of stages of labor, higher body mass index, and first delivery. abstract_id: PUBMED:9526940 Does epidural analgesia cause dystocia? Study Objective: To analyze the effects of epidural analgesia for labor when dystocia occurs. Design: Retrospective cohort study. Setting: Academic health center. Patients: 641 low-risk, nulliparous women in spontaneous labor. Interventions: 406 (63%) women received epidural analgesia and 253 (37%) did not. Sixty women (9.4%) required an abdominal delivery for dystocia. Measurements And Main Results: Women receiving epidural analgesia were more likely to be white, receive care from an attending physician, need labor augmentation, and deliver a heavier infant. Multivariate analysis identified five variables predictive of dystocia and abdominal delivery: Pitocin augmentation odds ratio (O.R.) = 3.9 (2.0 to 7.6), duration of labor more than 20 hours O.R. = 2.4 (1.3 to 4.4), high epidural dose O.R. = 2.2 (1.2 to 4.1), birthweight over 4,000 grams O.R. = 2.0 (1.0 to 4.2), and early placement of epidural O.R. = 1.9 (1.0 to 3.5).
Repeating the regression after excluding the 20 women who developed abnormal labor prior to epidural placement (18 of 20 women had protracted dilatation) demonstrated that Pitocin augmentation O.R. = 4.0 (1.8 to 4.), high epidural dose O.R. = 3.0 (1.9 to 6.2), duration of labor greater than 20 hours O.R. = 2.7 (1.3 to 5.7), and birthweight over 4,000 grams O.R. = 2.1 (0.9 to 4.8) were associated with dystocia. Conclusion: Epidural analgesia appears to be a marker of abnormal labor rather than a cause of dystocia. High-concentration anesthetics and epinephrine should be avoided, as they may influence labor. Randomized, controlled trials of this technique will be difficult to do; our work should reassure patients and their clinicians that epidural analgesia does not adversely affect labor. abstract_id: PUBMED:14581873 Epidural analgesia during labour: maternal, fetal and neonatal aspects. Aim: The most effective technique for eliminating labour and childbirth pain is continuous lumbar epidural analgesia. Concern regarding its possible negative effects on the duration of labour and on dystocia in general represents one of the greatest hindrances to its wider use. The purpose of the present study is to monitor the effects of continuous lumbar epidural analgesia on delivery times, on the state of the fetus, on the incidence of dystocic deliveries and on neonatal outcome. Methods: Comparative prospective study. The data relating to the deliveries of 148 nulliparas and 51 pluriparas at term, submitted to epidural analgesia at the Obstetric Clinic of the University of Parma in 1999-2000, were compared with data from 144 nulliparas in labour and 60 pluriparas without epidural analgesia, selected in random fashion out of 4251 women who delivered children in the same period. The anesthetic procedure employed consisted of an injection of 20 mg/10 ml ropivacaine and 50 μg fentanyl in the epidural space at intervals of 1-2 h. Results: The duration of the dilatation period was not influenced by administration of epidural analgesia, while the expulsion period was longer with epidural analgesia. There were no significant differences between delivery modalities in the 2 groups either as regards vaginal operative delivery or the number of cesarean sections for dystocia. The cardiotocographic profile was similar in the 2 groups. The neonatal outcome (Apgar index at 1' and 5' and transfer to the intensive care department) did not show significant differences, confirming the absence of noteworthy side-effects even from the neonatal standpoint. Conclusions: The use of low concentrations of ropivacaine (0.2%) associated with fentanyl in the epidural space proved to be a safe and effective technique for controlling labour and delivery pain. abstract_id: PUBMED:28089318 Analysis and evaluation of the effectiveness of epidural analgesia and its relationship with eutocic or dystocic delivery. Objective: Numerous studies have demonstrated differences in verbal rating scale scores between obstructed and induced labour, such that obstructed labour and foetal macrosomia have been related to a greater sensation of pain during labour, particularly in the first stage. Epidural analgesia has even been linked to the need for instrumental delivery or caesarean section due to foetal obstruction. The goal of the study is to analyze and evaluate the effectiveness of epidural analgesia in normal versus obstructed labour.
Patients And Methods: One hundred and eighty pregnant women were included in an observational, analytical, longitudinal and prospective study performed in the Obstetrics Department of the Hospital Universitario Dexeus. All nulliparous or multiparous women beyond 36 weeks of pregnancy who had reached 3 cm of cervical dilatation in spontaneous or induced labor were included. All the patients were given epidural analgesia according to protocol. Results: Basic descriptive methods were used for the univariate statistical analysis of the sample, and the Mann-Whitney U test was used for the comparison of means between the two groups. The correlations between variables were studied by means of the Spearman coefficient of correlation. Differences with P<.05 were regarded as statistically significant. Conclusion: In our population there were no statistically significant differences in the effectiveness of epidural analgesia in normal versus obstructed labour. Patients who received epidural analgesia and had obstructed labors reported the same verbal rating scale scores as patients who did not have obstructed labors (P>.05). abstract_id: PUBMED:9661559 Epidural labor analgesia and the incidence of cesarean delivery for dystocia. Unlabelled: We performed this retrospective study to examine the changes in cesarean delivery rates associated with the establishment of a labor epidural service. In April 1993, St. Louis Regional Medical Center established an on-demand labor epidural service. We obtained demographic data for all patients and reviewed the operative records of all patients undergoing cesarean section who delivered 12 mo before and 16 mo after the start of the labor epidural service. We compared labor epidural rates and total and nulliparous dystocia cesarean delivery rates before and after the epidural service started and among patients who did and did not receive labor epidural analgesia when it was available. Included were 3195 patients who delivered before and 3733 patients who delivered after epidural analgesia became available. Labor epidural rates were 1.2% vs 29.4% for the Before group versus the After group (P < 0.001). Total (9.1% vs 9.7%) and nulliparous dystocia (5.7% vs 6.4%) cesarean delivery rates did not significantly change with the availability of epidural analgesia. However, the total (11.6% vs 8.8%; P = 0.009) and dystocia (8.0% vs 1.0%; P = 0.001) cesarean delivery rates were higher among patients who received epidural analgesia when it was available. We conclude that epidural labor analgesia is associated with, but does not cause, cesarean delivery for dystocia. Implications: Increased epidural analgesia use did not change the overall dystocia cesarean delivery rate, although dystocia was more common among women who chose epidural analgesia. Consequently, limiting epidural availability will not affect cesarean delivery rates. The evidence does not support advising patients that epidural labor analgesia increases the risk of cesarean delivery.
Methods: Maternal tympanic temperature was measured during spontaneous labor in 715 women at term who were randomized to either epidural analgesia with bupivacaine and fentanyl or to patient-controlled intravenous analgesia with meperidine. Intent-to-treat analysis of women with fever (temperature ≥ 38.0 °C) versus those without was performed using the Student t test and Fisher exact test to determine statistical significance (P < 0.05). Results: Epidural analgesia was associated with maternal fever (odds ratio = 4.0; 95% confidence interval = 2.0-7.7), as was nulliparity (odds ratio = 4.1; 95% confidence interval = 1.8-9.1) and labor longer than 12 h (odds ratio = 5.4; 95% confidence interval = 2.9-9.9). These factors were all independent variables for maternal fever when analyzed using logistic regression. Conclusions: Epidural analgesia is associated with maternal fever. However, nulliparity and dysfunctional labor are also significant cofactors in the fever attributed to epidural analgesia. abstract_id: PUBMED:9392696 Epidural analgesia compared with combined spinal-epidural analgesia during labor in nulliparous women. Background: Among nulliparous women, there appears to be an association between the use of epidural analgesia during labor and an increased risk of dystocia. We tested the hypothesis that combined spinal-epidural analgesia, which permits ambulation during labor, is associated with a lower incidence of dystocia than continuous lumbar epidural analgesia. Methods: Between July 1995 and September 1996, we randomly assigned 761 nulliparous women in spontaneous labor at term who requested epidural analgesia to receive either continuous lumbar epidural analgesia or a combination of spinal and epidural analgesia. Among the women who received combined spinal-epidural analgesia, some were discouraged from walking and others were encouraged to walk. Maternal and neonatal outcomes, the incidence of dystocia necessitating cesarean section, and measures of patients' satisfaction were compared in the two groups. Results: There were no significant differences in the overall rate of cesarean section, the incidence of dystocia, the frequency of maternal or fetal complications, the patients' or nursing staff's assessment of the adequacy of analgesia, or the degree of overall satisfaction between the two groups. Significantly more women receiving combined spinal-epidural analgesia had pruritus (P<0.001) and requested additional epidural bolus doses of local anesthetic (P=0.01). For all the women, dystocia necessitating cesarean section was significantly more likely when analgesia was administered with the fetal vertex at a negative station (odds ratio, 2.5; P<0.001) or at less than 4 cm of cervical dilatation (odds ratio, 2.2; P<0.001). Conclusions: As compared with continuous lumbar epidural analgesia, the combination of spinal and epidural analgesia is not associated with an overall decrease in the incidence of cesarean delivery. abstract_id: PUBMED:8015757 Epidural analgesia in labor and cesarean delivery for dystocia. Published studies assessing the effect of epidural analgesia in nulliparous labor on the frequency of cesarean delivery for dystocia are reviewed. There are at least four retrospective studies and two prospective studies that suggest that epidural analgesia may increase the risk of cesarean delivery for dystocia in first labors.
The potential for epidural analgesia to increase the frequency of cesarean delivery for dystocia is likely influenced by multiple variables, including parity, cervical dilatation at epidural placement, technique of epidural placement, management of the epidural during labor, and the obstetrical management of labor after placement of epidural analgesia. Two studies suggest that delaying placement of the epidural until 5 cm of cervical dilatation or greater may reduce the risk of cesarean birth. Epidural analgesia is safe and may be a superior labor analgesic when compared with narcotics. However, patients should be informed that epidural analgesia may increase the risk of cesarean birth in first labors. abstract_id: PUBMED:8190421 Effect of epidural analgesia for labor on the cesarean delivery rate. Objective: To use meta-analysis to evaluate the effect of epidural analgesia on the cesarean delivery rate. Data Sources: The MEDLINE database was searched for articles published in English between January 1981 and April 1992. We also interviewed experts and conducted a bibliographic follow-up and manual review of recent journals published from April to July 1992. Methods Of Study Selection: We excluded articles with irrelevant titles, as well as case studies, book chapters, and articles that did not provide primary and relevant data. Two hundred thirty articles were read, including articles that reported on women of standard obstetric risk and on cesarean delivery rates for an epidural group and for a concurrent no-epidural group. These criteria yielded six studies for a primary analysis and two others for a secondary analysis. Data Extraction And Synthesis: The sample size of the epidural and no-epidural groups and the number of cesareans within each group were extracted. Tests of homogeneity were conducted. The pooled cesarean delivery risk difference as a result of epidural analgesia was estimated. The cesarean rate for women undergoing epidural analgesia was ten percentage points greater than for no-epidural women (P < .05). More than a nine percentage point increase was shown for cesarean deliveries for dystocia (P < .05), when pooling either all studies or only randomized studies. Conclusions: The results of this meta-analysis strongly support an increase in cesarean delivery associated with epidural analgesia. Further research should evaluate the balance between the analgesia associated with the use of epidurals and the postpartum morbidity and costs associated with cesarean deliveries.
An increase in the incidence of instrumental delivery seems to be an acceptable risk in view of the positive effects of epidural analgesia, of which future mothers should be informed. Answer: The relationship between epidural analgesia and dystocia is complex and has been the subject of various studies. Some studies suggest that epidural analgesia may be associated with an increased risk of dystocia, while others indicate that it may be more of a marker for abnormal labor rather than a cause. One study found that epidural analgesia was associated with a higher rate of instrumental deliveries but not cesarean sections, and a more frequent need for oxytocin, suggesting that while epidural analgesia is related to some difficulties in labor, it may not directly cause dystocia (PUBMED:21348212). Another study concluded that epidural analgesia appears to be a marker of abnormal labor rather than a cause of dystocia, and that high-concentration anesthetics and epinephrine should be avoided as they may influence labor (PUBMED:9526940). A comparative prospective study showed that the duration of the dilatation period was not influenced by administration of epidural analgesia, and there were no significant differences in the incidence of dystocic deliveries or neonatal outcomes between groups with and without epidural analgesia (PUBMED:14581873). Similarly, another study found no statistically significant differences in the effectiveness of epidural analgesia in normal versus obstructed labor (PUBMED:28089318). However, some studies have suggested that epidural analgesia may increase the risk of cesarean delivery for dystocia in first labors, and that delaying the placement of the epidural until 5 cm of cervical dilatation or greater may reduce this risk (PUBMED:8015757). A meta-analysis also supported an increase in cesarean delivery associated with epidural analgesia, particularly for dystocia (PUBMED:8190421). In conclusion, while there is evidence that epidural analgesia is associated with certain labor difficulties, such as an increased need for instrumental deliveries and oxytocin, the evidence is not unequivocal in establishing a causal relationship between epidural analgesia and dystocia. It may be more accurate to consider epidural analgesia as a marker for abnormal labor, with other factors such as high anesthetic doses and early placement contributing to the risk of dystocia (PUBMED:9526940, PUBMED:21348212, PUBMED:14581873, PUBMED:28089318, PUBMED:8015757, PUBMED:8190421).
Instruction: Are the standard parameters of cervical spine alignment and range of motion related to age, sex, and cervical disc degeneration? Abstracts: abstract_id: PUBMED:26091436 Are the standard parameters of cervical spine alignment and range of motion related to age, sex, and cervical disc degeneration? Object: The aims of this study were 1) to establish the standard parameters of alignment and total and segmental range of motion (ROM) of the cervical spine in the asymptomatic population, and 2) to identify factors that influence cervical ROM and alignment. Methods: The authors measured 636 standard cervical lateral, flexion, and extension plain radiographs of 212 asymptomatic volunteers. The relationship between cervical alignment and total ROM was assessed with simple linear regression. Multivariate linear regression was used to determine the effect of the influential factors on cervical alignment and total and segmental ROM. Results: The mean value for C2-7 cervical alignment was 21.40° ± 12.15°, and the mean value for total ROM was 63.59° ± 15.37°. Sex was a significant factor in cervical alignment, total ROM, and segmental ROM for C2-3 and C5-6 (p < 0.05). Age had a significant negative association with both the total ROM and all of the segmental ROM measurements (p < 0.05). Cervical disc degeneration at the level of interest had a significant negative association with C4-5, C5-6, and C6-7 ROM (p < 0.05). Conclusions: Cervical alignment in female subjects was 2.47° lower than that in male subjects. Total ROM was 3.86° greater in female than in male subjects and decreased 6.46° for each decade of aging. Segmental ROM decreased 1.28° for each decade of aging and 2.26° for each category increase in disc degeneration at the level of interest. abstract_id: PUBMED:29315118 Characteristics of Cervical Spine Motion in Different Types of Cervical Alignment: Kinematic MRI Study. Study Design: Retrospective study. Objective: To evaluate how each type of sagittal cervical alignment affects the motion of the upper and subaxial cervical spine using kinematic magnetic resonance imaging. Summary Of Background Data: Sagittal malalignment of the cervical spine from degeneration in the subaxial cervical spine reflects a disruption in the kinematic properties of the cervical spine and affects the motion of adjacent segments. Changes in the sagittal parameters and kinematics of the upper cervical spine and upper thoracic spine due to the kyphosis are unknown. Methods: Kinematic magnetic resonance imaging of the cervical spine in neutral position from 311 patients, including 90 lordotic, 90 straight, 90 global kyphotic, and 41 segmental kyphotic, was analyzed. The lordotic angle at the upper and lower cervical spine, and the T1 slope, were measured in the neutral position and again in flexion and extension for dynamic analysis. Results: The number of levels with significant disk degeneration was higher in the global kyphosis group. In the global kyphosis group, neutral sagittal parameters showed some characteristics of compensation for the malalignment. Compared with the lordotic group, patients with global kyphosis demonstrated a significantly higher lordotic angle of the upper cervical spine and a more horizontal T1 slope. The dynamic evaluation showed greater range of motion of the entire cervical spine and subaxial cervical spine in younger patients.
However, we still found greater range of motion of the occipito-atlanto-axial complex in global kyphosis, even when controlling for age and number of levels with significant disk degeneration. Conclusion: Sagittal malalignment of the cervical spine affects all parts of the cervical motion complex. The global kyphotic alignment of the subaxial cervical spine affects the kinematic properties of the occipito-atlanto-axial complex and upper thoracic spine to compensate for the alteration of cervical alignment. These differences are not seen in straight and segmental kyphosis. Level Of Evidence: Level 3. abstract_id: PUBMED:29222937 Preoperative T1 Slope as a Predictor of Change in Cervical Alignment and Range of Motion After Cervical Disc Arthroplasty. BACKGROUND This study analyzed the effect of preoperative T1 slope on cervical alignment and range of motion (ROM) after cervical disc arthroplasty (CDA) in patients with cervical degenerative disease. MATERIAL AND METHODS This retrospective study included 32 patients with single-level symptomatic cervical disc disease who underwent CDA with the Mobi-C cervical disc prosthesis and had a mean follow-up of 26.8±6.4 months. Standing lateral, flexion, and extension X-rays of the cervical spine were obtained preoperatively and postoperatively at 24-month follow-up. Simple linear regression analysis was used to assess the impact of preoperative T1 slope on changes from preoperative values in radiologic parameters. RESULTS Compared to preoperative values, at 24-month follow-up, there was a significant increase in mean functional spinal unit (FSU) angle (+7.4°), upper adjacent segment (UAS) angle (+3.1°), and overall cervical alignment (C2-C7 angle) (+6.3°), and a significant decrease in mean lower adjacent segment (LAS) angle (-2.4°). Mean ROM of the FSU (-3.6°), LAS (-3.0°), and overall cervical spine (-11.5°) significantly decreased, and mean ROM of the UAS (+1.6°) significantly increased. There were significant correlations between preoperative T1 slope and mean change from preoperative value in FSU angle, C2-C7 angle, and ROM of the overall cervical spine (C2-C7). CONCLUSIONS T1 slope is useful for evaluating changes in the FSU angle, C2-C7 angle, and ROM of the overall cervical spine following CDA with the Mobi-C disc. Patients with a large preoperative T1 slope may be good candidates for CDA with the Mobi-C prosthesis due to its motion maintenance and the fact that it has little adverse impact on sagittal alignment. It also could be a good option in terms of sagittal alignment improvement or motion maintenance for patients with kyphosis. abstract_id: PUBMED:32770266 Impact of cervical sagittal balance and cervical spine alignment on craniocervical junction motion: an analysis using upright multi-positional MRI. Purpose: To evaluate the effect of cervical sagittal alignment on craniocervical junction kinematics. Methods: We retrospectively reviewed 359 patients (119 cervical lordosis, 38 cervical sagittal imbalances, 111 cervical straight, and 91 cervical kyphosis) who underwent cervical spine multi-positional magnetic resonance imaging (mMRI). The C2-7 angle, disc degeneration grading, and cSVA were analyzed in the neutral position. The C3-5 OCI, O-C2 angle, and OCD were analyzed in the neutral, flexion, and extension positions. The Kruskal-Wallis test was used to detect differences among the four groups. The post hoc analysis was performed by the Mann-Whitney U test.
Results: The cervical sagittal imbalance, cervical straight, and cervical kyphosis groups had significantly greater lordotic angles for the C3 and C4 OCI and the O-C2 angle than the cervical lordosis group (p < 0.0125). Head motion in relation to C2, C3, and C4 (O-C2 angle, C3-4 OCI) in the kyphosis group was significantly greater than in the cervical lordosis group (p < 0.0125). The cervical sagittal imbalance group showed a significantly greater O-C2 angle than the cervical lordosis group (p = 0.008). Regression analysis showed that each one-unit increase in the O-C2 angle increased the relative risk of a patient belonging to the cervical sagittal imbalance and cervical kyphosis groups by 4.3% and 3.5%, respectively. Conclusions: Cervical sagittal alignment affected craniocervical junction motion, with the head exhibiting greater extension and motion in the cervical sagittal imbalance and cervical kyphosis groups. Motion of the head in relation to C2 can be used to predict the cervical sagittal alignment. abstract_id: PUBMED:33451340 Cervical sagittal alignment after Prestige LP cervical disc replacement: radiological results and clinical impacts from a single-center experience. Background: Cervical disc replacement (CDR) has been widely used to treat one- and two-level cervical degenerative disc disease. Studies have shown the effectiveness of CDR in preserving range of motion (ROM) and delaying adjacent segment degeneration (ASD). Cervical sagittal alignment is an important factor affecting favorable clinical outcomes in cervical spine surgery. This study aimed to explore whether cervical sagittal alignment can be maintained after CDR and to identify the impact of cervical sagittal alignment on outcomes after CDR. Methods: This was a single-center, retrospective study. 132 patients who underwent one-level CDR were included. Cervical sagittal alignments, including cervical lordosis (CL), segmental alignment (SA), sagittal vertical axis (SVA), T1 slope (T1s), and T1s minus CL (T1s-CL), were measured. The effects of cervical sagittal alignment on the CDR outcomes were analyzed. Patients were divided into the heterotopic ossification (HO) group and ASD group to determine the potential impacts of cervical sagittal parameters. Results: The cervical sagittal alignment parameters, except for the SVA, were significantly improved after CDR and showed decreasing trends at the last follow-up. Significantly higher CL and T1s were found in patients with better ROM after CDR. SVA ≥ 20 mm increased the risk of anterior HO (odds ratio = 2.945, P = 0.007). Significantly more kyphotic SA and lower T1s values were found in ASD patients than in non-ASD patients (P < 0.05). Patients with ASD at the inferior level showed significantly worse CL (P < 0.05). Conclusion: CDR had a limited ability to improve cervical sagittal alignment. Poor cervical sagittal alignment after CDR was associated with HO, ASD, and less ROM. abstract_id: PUBMED:33824556 Effects of different severities of disc degeneration on the range of motion of cervical spine. Aims And Objectives: The human spine degenerates with age. Intervertebral disc degeneration occurs in the cervical spine. The objective of this study is to determine the effects of degenerative disc diseases on the range of motion (ROM) of the human cervical spinal column using a validated finite-element model.
Materials And Methods: The validated intact and healthy C2-T1 finite-element model simulated the cortical shell, cancellous core, and posterior elements of the vertebrae, the spinal ligaments (longitudinal, capsular, spinous, and ligamentum flava), and the nucleus and annulus of the discs. Three different stages of disc disease (mild, moderate, and severe) were simulated at the C5-C6, C6-C7, and C5-C6-C7 discs, respectively, and were termed the upper single level, lower single level, and bi-level (BL) models. The material properties and geometry of the disc(s) were altered to simulate the different stages of degeneration. External mechanical loading of 2.0 Nm was applied in the sagittal mode via flexion-extension motions to the healthy model and to each disc degeneration model at each of the three severities of degeneration. The ROM at adjacent and index levels was extracted and normalized with respect to the healthy (baseline) spine. Results: A nonuniform distribution in the ROM was found for different disc degeneration states, segmental levels, and flexion-extension loading modes. The specific results for each degeneration state and level are reported in the results section of the paper. Conclusion: Closer follow-up times may be necessary in symptomatic patients with progressive disease, especially with bi-level involvement. abstract_id: PUBMED:15521399 Active range of cervical motion in healthy subjects and in spinal disorders. DBC equipment validity in complex cervical motion evaluation. Unlabelled: Active range of motion (AROM) is one of the crucial parameters assessed in cervical spine evaluation. Repeated standardized AROM evaluation enables objectified monitoring of disease progress or therapy-related improvement. Active spine therapy according to DBC (Documentation Based Care) requirements is a contemporary therapeutic system with proven efficacy in spinal disorders. The therapeutic concept of DBC is mechanotherapy based and introduces special devices designed for active exercises within a strictly defined range, rate and load. The DBC-3CR device is constructed for active motion (composed of rotation, anteflexion and lateral flexion) exercise of the cervical spine with repeated dynamic load. Its technical properties allow the machine to be used for diagnostic purposes as far as complex range of motion is concerned. Purpose: Comparison of AROM between healthy volunteers and patients with cervical disorders. Evaluation of DBC-3CR validity in complex AROM measurement. Patients: A--99 healthy volunteers; B--106 patients with chronic neck pain resulting from cervical spondylosis; C--36 patients with acute neck pain related to cervical disc disease or whiplash injury. Methods: Complex AROM detection with DBC-3CR was compared with reference measurements (goniometric and inclinometric simple AROM detection). Age and sex dependence of AROM among healthy subjects was observed. AROM was compared between B, C and reference groups. Discussion: AROM was significantly decreased in patients with chronic and acute neck pain. The motion limitation pattern was symmetric in both groups. A linear dependency between complex AROM acquired with the DBC-3CR and rotation acquired with a goniometer was observed (Pearson coefficient 0.81 at p < 0.0001). Conclusions: 1. The DBC-3CR device is useful in AROM evaluation. 2. AROM in healthy subjects decreases with age but is a sex-independent parameter. 3.
In cervical spondylosis, AROM is symmetrically reduced as far as rotation and lateral flexion are concerned. 4. Acute cervical pain results in symmetric AROM limitation. abstract_id: PUBMED:34275569 Age-related degenerative changes and sex-specific differences in osseous anatomy and intervertebral disc height of the thoracolumbar spine. The aim of this study was to determine age-related changes and sex-specific differences in sagittal alignment, range of motion (ROM), and intervertebral disc height of the thoracolumbar spine in healthy subjects. Lateral neutral and flexion-extension radiographs of the thoracolumbar spine of 627 asymptomatic subjects (307 males and 320 females; average age, 49.6 ± 16.5 years) were evaluated. We included at least 50 males and 50 females in each decade of life between the 20s and the 70s. Intervertebral disc height from T10/T11 to L5/S1, local lordotic alignment, and ROM from T10-T11 to L5-S1 were measured. T10-L2 kyphosis and T12-S1 lordosis as well as flexion, extension, and total ROM were measured. T10-L2 kyphosis did not markedly change with age in subjects of either sex, but a sudden increase was noted in females in their 70s. T12-S1 lordosis increased with age in both sexes, except in the 70s. Flexion, extension, and total ROM at T10-L2 and T12-S1 decreased with age in most subjects. The levels from L3-L4 to L5-S1 were conspicuous as mobile segments. Intervertebral disc height gradually increased from T10/T11 to L4/L5; the shortest was at T10/T11 and the longest at L3/L4 or L4/L5 in all subjects. Age-related decreases in intervertebral disc height were most prominent at L4/L5 in middle-aged and elderly individuals of both sexes. Normative values of sagittal alignment, ROM, and intervertebral disc height at each segmental level were established in both sexes and all age groups in healthy subjects. abstract_id: PUBMED:26630418 Influence of the Number of Cervical Fusion Levels on Cervical Spine Motion and Health-Related Quality of Life. Study Design: A retrospective study. Objective: To analyze the influence of the number of cervical fusion levels on total cervical motion and health-related quality of life (HRQoL) in patients with solid anterior cervical fusions (ACFs). Summary Of Background Data: Few studies have analyzed the degree to which cervical range of motion (ROM) and HRQoL are affected by the number of cervical fusion segments. Methods: We analyzed a cohort of patients who underwent ACF for degenerative disc disease. To assess clinical outcomes and HRQoL, neck and arm pain visual-analogue scale, neck disability index, and Short Form-36 scores were analyzed preoperatively and at 1 and 2 years postoperatively. Radiographically, C2-7 and C0-2 ROM, C2-7 sagittal vertical axis (SVA), and Kellgren grade of radiographic adjacent segment pathology (RASP) were evaluated. Results: A total of 105 patients (M:F = 46:59, mean age of 51.4 yr) were enrolled. There were 36 patients who underwent single-level ACF (group 1), 41 patients who had a double-level ACF (group 2), and 28 patients who underwent ACF involving 3 or more levels (group 3). There was no decrease in C2-7 motion in group 1, a mean 7-degree decrease in group 2, a mean 18-degree decrease in those who underwent a 3-level ACF, and a mean 22-degree decrease after 4-level ACF. The grade of RASP was not influenced by the number of fusion levels. All HRQoL parameters showed no significant correlation between number of fusion levels, cervical ROM, and SVA.
Conclusion: Single-level ACF showed no decrease in total cervical motion; multilevel ACF decreased cervical motion by a mean of 7.8 degrees per segment of fusion. Progression of RASP showed no correlation with the number of fusion levels. HRQoL measures were not influenced by the number of fusion levels, cervical ROM, or SVA after solid ACF. Level Of Evidence: 3. abstract_id: PUBMED:31958587 Magnetic Resonance Imaging Research of Thoracic Inlet Parameters in the Cervical and Cervicothoracic Spine in Degenerative Cervical Spondylosis. Objective: The purpose of this study was to examine the thoracic inlet angle (TIA) and its related parameters in the cervical and cervical-thoracic vertebrae in patients with degenerative cervical spondylosis (DCS) and explore the association of the TIA, tilt angle of the neck, and tilt angle of the first thoracic spine with the cervical degeneration score. Methods: Patients with DCS were included from January 2014 to December 2017. The relevant parameters were assessed on T2-weighted magnetic resonance imaging. The association of cervical parameters with the cervical degeneration score was examined by a multiple linear regression model. Results: A total of 204 patients (126 men) were eligible and enrolled, with a mean age of 55-56 years. Men had significantly higher thoracic inlet parameters than women (all P < 0.05). Thoracic inlet parameters were positively correlated with age (all P < 0.05). After adjustment for age and sex, no significant association between the TIA and cervical or cervicothoracic spine degeneration scores was observed in any vertebra. Meanwhile, men had higher vertebral height (VH) and intervertebral disk height (IVDH) than women. Only the VHs of C7, T1, and T2 and the IVDHs of C6-7, T1-2, and T2-3 were significantly and positively associated with the TIA (all P < 0.05). Conclusions: Age was positively correlated with thoracic inlet parameters in patients with DCS. Men had significantly higher thoracic inlet parameters than women because the men had higher VHs and IVDHs. However, the TIA was not associated with the level of cervical disk degeneration. The clinical relevance of these findings has not been established.
Preoperative T1 slope has also been shown to predict changes in cervical alignment and ROM after cervical disc arthroplasty (CDA), with a large preoperative T1 slope indicating good candidates for CDA due to motion maintenance and little adverse impact on sagittal alignment (PUBMED:29222937). Furthermore, cervical sagittal alignment impacts craniocervical junction motion, with greater extension and motion observed in groups with cervical sagittal imbalance and cervical kyphosis (PUBMED:32770266). Cervical disc replacement (CDR) has limited function in improving cervical sagittal alignment, and poor cervical sagittal alignment after CDR is associated with heterotopic ossification, adjacent segment degeneration, and less ROM (PUBMED:33451340). The effects of different severities of disc degeneration on the ROM of the cervical spine have also been studied, showing a nonuniform distribution in ROM for different disc degeneration states, segmental levels, and flexion-extension loading modes (PUBMED:33824556).
Instruction: Representations of disability in the Canadian news media: a decade of change? Abstracts: abstract_id: PUBMED:23350765 Representations of disability in the Canadian news media: a decade of change? Purpose: To assess stability and change in representations of disability and persons with disability in the Canadian news media between 1998 and 2008. Method: The study replicated research conducted in 1998 that assessed the representation of disability in the Canadian news media. Following the earlier study, three newspapers were selected (Globe and Mail, Toronto Star and Toronto Sun) and all articles from a three-month period in 1998 and 2008 were assessed for disability content. In total, 362 articles were found in the two time periods. These were coded for structure and content using a schema developed in the earlier research. Results: Between 1998 and 2008, there was a significant increase in the proportion of stories using "person first" language, and a significant increase in the proportion of "progressively" themed articles (e.g. dealing with barriers to participation, or disability awareness and inclusion). At the same time, there were significant differences between newspapers, with the Toronto Sun (a tabloid) maintaining a strong focus on "traditional" themes (e.g. special education, charitable provision). Conclusions: The differences in news media representations between 1998 and 2008 suggest a positive change in the way people with disabilities are represented, with greater attention to the complexity of their identity and their multiple social roles. Implications For Rehabilitation: The participation of persons with disabilities in society continues to be limited by negative attitudes. Media reporting has a significant influence on public attitudes toward disability. In a content analysis of three Canadian newspapers, this study found several significant changes in the language and content of disability-related articles. Together, these changes provide some evidence of more favorable media representations of disability. Further research in rehabilitation is needed to understand how such changes may both reflect and facilitate ongoing efforts to enhance people with disabilties' participation in social life. abstract_id: PUBMED:36759342 Real change or more of the same? Analysing Australian media's portrayal of intellectual disability during the NDIS rollout. Background: Through the filtering of information, the creation, and reinforcement of stereotypes, media moulds attitudes and set agendas on critical social issues including public policy and disability. Methods: This study explored Australian media representations of the care of people with intellectual disability during a crucial period in disability policy change: the National Disability Insurance Scheme (NDIS) rollout (2013-2018). Search criteria identified 168 news stories, examined via content analysis and news framing. Results: Four major issues were identified: Roles and responsibilities of government; housing; mistreatment of persons with intellectual disability, and responsibility of care for families. Stories tended to be presented negatively, however, regional and local/community metropolitan stories were more balanced or positive compared with major metropolitan stories. 
Conclusion: Despite significant disability policy change, media presentations continue to reinforce stereotypes of people with intellectual disability and position the government as one continuing to fall short in critical areas of funding, safety, and support. abstract_id: PUBMED:36699812 Fake News through the Eyes of Three Generations of Russians: Differences and Similarities in Social Representations. Background: The problem of fake news becomes especially prominent during periods of social exacerbation, such as the coronavirus pandemic, wherein events have a significant impact on many lives. Generational differences are considered a factor affecting perceptions of the reliability of news. Objective: The aim of this study was to reveal and compare the social representations of information reliability and news verification criteria among people belonging to the Generation of Reforms (born 1968-1981), the Millennial Generation (1982-2000) and Generation Z (2001 and later) in Russia. Design: The study involved 431 participants and comprised two stages: focus groups and a survey. The data analysis methods employed were thematic analysis, qualitative and quantitative content analysis, the coefficient of positive answers (according to J. Abric), the Kruskal-Wallis H test, Pearson's chi-square test, Spearman's rank correlation coefficient, and Kendall's tau rank correlation coefficient. Results: We found significant differences between the Generation of Reforms (CPA: 80.5; p = 0.000) and Generation Z (CPA: 90.2; p = 0.000), and similarities between the Millennial Generation (CPA: 90.3; p = 0.000) and Generation Z, in the structure and content of social representations regarding "fakes". Notably, Generation Z favors a fact-checking strategy to identify news reliability, while "Reformists" rely on offline contacts. Conclusion: Generations in Russia differ with respect to their tolerance of "fakes" and their strategies for news verification. The results advance our understanding of "fakes" as purely social constructs. The mutual attribution of media incompetence between older and younger cohorts was discussed as a form of generational conflict. abstract_id: PUBMED:36454353 Medical Assistance in Dying: A Review of Related Canadian News Media Texts. Medical assistance in dying (MAiD) was legalized in Canada in 2016. Canadians' opinions on the service are nuanced, particularly as the legislation changes over time. In this paper, we outline findings from our review of representations of MAiD in Canadian news media texts since its legalization. These stories reflect the concerns, priorities, and experiences of key stakeholders and function pedagogically, shaping public opinion about MAiD. We discuss this review of Canadian news media on MAiD, provide examples of four key themes we identified (vulnerability, autonomy, dignity, and human rights), and discuss their implications for health policy and equity. Though key stakeholders share the values of autonomy, dignity, and human rights, they appeal to them in diverse ways, sometimes with conflicting policy demands. These representations offer a useful gauge of how views about MAiD continue to shift alongside changes in federal legislation. These stories can influence related policies, respond to the powerful voices that shape MAiD legislation, and have the potential to change national conversations.
Our analysis adds to the existing body of scholarship on MAiD by examining post-Bill C-7 news media, identifying related health equity issues and tensions, and discussing potential impacts of MAiD's representations in news media. abstract_id: PUBMED:26229010 Polarizing news? Representations of threat and efficacy in leading US newspapers' coverage of climate change. This study examines non-editorial news coverage in leading US newspapers as a source of ideological differences on climate change. A quantitative content analysis compared how the threat of climate change and efficacy for actions to address it were represented in climate change coverage across The New York Times, The Wall Street Journal, The Washington Post, and USA Today between 2006 and 2011. Results show that The Wall Street Journal was least likely to discuss the impacts of and threat posed by climate change and most likely to include negative efficacy information and use conflict and negative economic framing when discussing actions to address climate change. The inclusion of positive efficacy information was similar across newspapers. Also, across all newspapers, climate impacts and actions to address climate change were more likely to be discussed separately than together in the same article. Implications for public engagement and ideological polarization are discussed. abstract_id: PUBMED:37226520 Femicide in Mexico. Who Are the Slain Women According to News Media? A Quantitative Study of Social Representations of Victims and Perpetrators. Research on femicide news has revealed discriminatory narratives against the victims in specific cases and social contexts. This article uses a quantitative approach to analyze the news content that serves to create social representations of victims and perpetrators. We propose a methodology based on examining independent elements in the descriptions, identifying extratextual patterns, and providing the data to compare the social representations of intimate partner violence (IPV), familial, and non-IPV femicides. Three online news outlets were analyzed from July 2014 to December 2017, creating a corpus of 2,527 articles. The results revealed that it is more common to create negative representations of victims than negative representations of the perpetrators. abstract_id: PUBMED:32662052 Employment, Education, and Income for Canadians with Developmental Disability: Analysis from the 2017 Canadian Survey on Disability. This study assessed needs and outcomes for people with developmental disability (DD) to understand the socioeconomic status of this group prior to implementation of the Accessible Canada Act in June 2019. The 2017 Canadian Survey on Disability (CSD) was used to analyze data for a sample of individuals with self-reported disability. Data related to employment, education, income, housing, caregivers, and use of government benefits are reported. Compared to the general Canadian public, persons with DD are less likely to finish high school or post-secondary education, less likely to participate in the labor force or be employed, and earn less total income per year on average. To align with recent accessibility legislation, significant progress is needed to address disparities for people with DD. abstract_id: PUBMED:27867283 From 'Virgin Births' to 'Octomom': Representations of Single Motherhood via Sperm Donation in the UK News. The use of sperm donation by single women has provoked public, professional and political debate.
Newspapers serve as a critical means of both broadcasting this debate and effecting a representation of this user group within the public sphere. This study uses the theory of social representations to examine how single motherhood by sperm donation has been represented in the UK news over time. The study sampled news coverage on this topic in eight British newspapers during three 4-year periods between the years 1988 and 2012. The dataset of news reports (n = 406) was analysed using a qualitative approach. Findings indicated that UK media reports of single women using donor sperm are underpinned by conventional categories of the 'personal', the 'traditional' and the 'natural' that when paired with their corollaries produce a representation of this user group as the social 'other'. The amount of coverage on this topic over time was found to vary according to the political orientation of different media sources. Using key concepts from social representations theory, this article discusses the relationship between themata and anchoring in the maintenance of representations of the social 'other' in mass mediated communication. Findings are explained in relation to theoretical conceptions of the mass media and its position within the public sphere. It is argued that the use of personal narratives in news reports of single mothers by sperm donation may have significant implications for public understandings of this social group. abstract_id: PUBMED:34252784 Coverage of medical cannabis by Canadian news media: Ethics, access, and policy. Background: The use of recreational cannabis by adults was legalized in Canada in 2018 (The Cannabis Act, 17 October 2018). This change in drug policy indirectly opened the doors to broader uses, including those for health-related conditions. As a first step towards understanding ethical, legal, and social shifts on this landscape, we examined messaging surrounding medical cannabis and health in Canadian news media before and after legalization. Methods: We retrieved news articles about medical cannabis and health from Canadian sources (2010-2020). Deductive and inductive content analytic approaches were applied to code eligible articles for both a priori ethical, legal, and social issues, and emergent themes. Wilcoxon-Mann-Whitney U tests were used to test for thematic changes pre- and post-legalization. Results: Two hundred and sixteen (216) articles met inclusion criteria. Analysis yielded three themes and eight subthemes, comprising 4931 coded references. The a priori theme of Ethical, Legal and Social Issues (access, benefits/risks of medical cannabis, public engagement, and conflicts of interest) dominated news media coverage across the decade (coded in 99% of articles). The emergent theme of Medical Applications (therapeutic use of cannabis, consumption considerations, comparisons to other drug and treatment options) was coded in 86% of articles; Resource Issues (research and education) in 50%. We found little discourse on ethics issues specific to exceptional populations. Coverage on aspects of access declined significantly post-legalization. Conclusion: Capturing the pulse of Canadian news coverage of medical cannabis is a powerful means of understanding how public opinions on the subject are shaped and then, by extension, how they inform public policy for well-being and healthcare.
Continued examination of these issues, public consultation, engagement with diverse populations such as people with disabilities and neurologic and mental health conditions, and integration of diverse cultural views into the policy discourse are critical steps for future research and action. abstract_id: PUBMED:33380263 Talk like an expert: The construction of expertise in news comments concerning climate change. This article explores how readers of UK newspapers construct expertise around climate change. It draws on 300 online readers' comments on news items in The Guardian, Daily Mail and The Telegraph, concerning the release of the Intergovernmental Panel on Climate Change (IPCC) report calling for immediate action on climate change. Comments were analysed using discursive psychology. We identified a series of discursive strategies that commenters adopted to present themselves as experts in their commentary. The (mostly indirect) use of category entitlements (implicitly claiming themselves as expert) and the presentation of one's argument as factual (based on direct or indirect technical knowledge or common sense) emerged as common ways in which readers made claims to expertise, both among the supporters and among the sceptics of climate change science. Our findings indicate that expertise is a fluid concept, constructed in diverse ways, with important implications for public engagement with climate change science. Answer: Between 1998 and 2008, there was a significant shift in the representation of disability in the Canadian news media. The proportion of stories using "person first" language increased, indicating a more respectful and individualized approach to discussing persons with disabilities. Additionally, there was a rise in the number of "progressively" themed articles that focused on issues such as barriers to participation, disability awareness, and inclusion. This suggests a positive change in the way people with disabilities are represented, with a greater emphasis on the complexity of their identity and their roles in society. However, the study also found significant differences between newspapers, with the Toronto Sun maintaining a strong focus on "traditional" themes like special education and charitable provision (PUBMED:23350765).
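Several of the media studies summarized above rest on simple categorical comparisons (e.g., Pearson's chi-square tests of coding frequencies across groups or sampling years, as in PUBMED:36699812, and the year-over-year shift in "person first" language in the disability coverage findings). A minimal sketch of that kind of test follows; the counts are hypothetical and are not taken from any of the cited studies, which report only aggregate statistics.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical coding counts (NOT from PUBMED:23350765): rows are sampling
# years, columns are stories coded as person-first language vs. not.
table = np.array([
    [48, 152],   # year 1: 48 of 200 stories person-first
    [112, 88],   # year 2: 112 of 200 stories person-first
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
print(f"person-first share: year 1 = {48/200:.0%}, year 2 = {112/200:.0%}")
```

With counts like these, a small p-value would support a claim that the share of person-first stories changed between sampling periods; the table here is purely illustrative.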
Instruction: Is p53 immunohistochemistry useful for optimal decision for treatment of polypoid lesions in ulcerative colitis? Abstracts: abstract_id: PUBMED:12229164 Is p53 immunohistochemistry useful for optimal decision for treatment of polypoid lesions in ulcerative colitis? Background: Usefulness of p53 staining for the differentiation between adenoma and DALM has been reported recently, so recognizable lesions stained positively can be diagnosed as DALMs. For the cases with DALMs, total colectomy has been thought to be necessary. Methods: Immunohistochemical staining for p53 was performed in 4 adenocarcinomas and 4 adenomas in ulcerative colitis. Results: Three carcinomas and 3 adenomas were positive. One carcinoma (protruded mucosal cancer) and 3 adenomas (1 flat elevated lesion and 2 laterally spreading tumors) stained positively for p53 were treated only by polypectomy or local excision. The patients have been under surveillance for periods ranging from 1 to 10 years, during which no metachronous dysplasia has developed. Conclusions: These findings suggest that some groups of the polypoid lesions can be resected locally even if stained positively by p53 immunohistochemistry. abstract_id: PUBMED:17032312 Genetic instability on chromosome 17 in the epithelium of non-polypoid colorectal carcinomas compared to polypoid lesions. Early colorectal carcinomas (submucosal invasive adenocarcinomas) can be classified into polypoid growth carcinoma (PG-Ca) and non-polypoid growth carcinoma (NPG-Ca) types, the latter transforming more rapidly to advanced carcinoma. Previously, we indicated that stromal genetic instability might contribute to tumorigenesis of both sporadic and ulcerative colitis-associated colorectal adenocarcinomas. In the present study, we analyzed genetic instability of both epithelial and surrounding stromal components in PG-Ca and NPG-Ca. In 99 colorectal submucosal invasive adenocarcinomas, epithelial and stromal genetic instability was analyzed with National Cancer Institute standard microsatellite markers, chromosome 17 (Chr.17) markers and tumor suppressor gene-related markers, using a combination of the laser-captured microdissection and GeneScan approaches. Immunohistochemical analysis was carried out for hMLH1, hMSH2, MGMT and p53. In addition, we investigated methylation of the hMLH1 and MGMT promoters. The frequencies of epithelial microsatellite instability (MSI) with Chr.17 markers were significantly higher in NPG-Ca (33.3%) compared to PG-Ca (10.4%), particularly with D17S579 and D17S796. For loss of heterozygosity, only D17S786 showed a significant difference. The frequencies of stromal MSI with all markers were 31.7% and 25.9% in NPG-Ca and PG-Ca, respectively, but D17S579 and TP53 showed higher MSI in NPG-Ca than PG-Ca. Immunohistochemically, p53 protein expression in PG-Ca was significantly higher in loss of heterozygosity-positive cases with altered Chr.17 markers overall, especially the D17S796 marker, compared to cases without genetic instability. These results suggest that epithelial and stromal MSI of Chr.17 markers contributes more to carcinogenesis in NPG-Ca, whereas stromal genetic instability might be necessary for the development of both types of colorectal carcinoma. abstract_id: PUBMED:23415736 Differential response of flat and polypoid colitis-associated colorectal neoplasias to chemopreventive agents and heterocyclic amines. 
Individuals with ulcerative colitis face an increased risk of developing colorectal cancer and would benefit from early chemopreventive intervention. Results from preclinical studies in the mouse model of dextran sulfate sodium-induced colitis demonstrate that flat and polypoid colitis-associated dysplasias arise via distinct genetic pathways, impacted by the allelic status of p53. Furthermore, flat and polypoid dysplasias vary in their response to induction by the heterocyclic amine 2-amino-3-methylimidazo[4,5-f]quinoline (IQ) and inhibition by 5-aminosalicylic acid, a common therapy for the maintenance of colitis patients. These data suggest that use of combination therapy is essential for the optimal inhibition of colitis-associated colorectal cancer. abstract_id: PUBMED:31917154 Traditional serrated adenoma-like lesions in patients with inflammatory bowel disease. Patients with inflammatory bowel disease (IBD) have an increased risk of colorectal carcinoma. The significance of serrated lesions resembling traditional serrated adenoma (TSA) in IBD patients is unclear. In this retrospective study, we analyzed 52 TSA-like lesions arising in 30 IBD patients and diagnosed in colectomy or endoscopic specimens. The 27 colectomy lesions presented predominantly as ill-defined areas with granular appearance, with a median size of 15 mm, located throughout the large bowel and associated with synchronous advanced colorectal lesions in 58%. Low-grade serrated dysplasia was present in 56%, high-grade serrated dysplasia in 37%, and TSA-type cytology in 7%. Increased Ki-67 immunostaining and abnormal p53 expression were identified in 96% and 48%, respectively; 74% had a KRAS mutation, and 4% had a BRAF mutation. Endoscopically resectable TSA-like lesions were all discrete polypoid lesions, smaller in size (median 9 mm), predominantly in the distal large bowel, with an adjacent precursor polyp in 24%, and associated with synchronous and metachronous advanced colorectal lesions in 6%. Most (92%) showed TSA-type cytology. p53 overexpression was present in 4%, KRAS mutation in 41%, and BRAF mutation in 32%. None of the 52 TSA-like lesions demonstrated loss of MLH1 or SATB2 expression by immunohistochemistry. On follow-up, 4 patients were diagnosed with colorectal carcinoma or high-grade adenomatous IBD-associated dysplasia. None of the patients with lesions showing TSA-type cytology only developed an advanced lesion. Our findings suggest that some TSA-like lesions, essentially from colectomy, may represent a form of IBD-associated dysplasia associated with an increased risk of advanced neoplasia. abstract_id: PUBMED:27567189 Immunohistochemistry in the diagnosis of dysplasia in chronic inflammatory bowel disease colorectal polyps. Background And Study Aims: Development of cancer is the most significant complication in inflammatory bowel disease (IBD). Distinguishing true dysplasia from reactive atypia in polyps is difficult, leading sometimes to the unsatisfactory diagnosis of "indefinite for dysplasia". Therefore, there is a need for the development of markers that can help improve diagnosis. We evaluated the diagnostic value of the expression of AMACR, Ki67 and p53 by immunohistochemistry in the diagnosis of dysplasia in polyps developed on IBD. Patients And Methods: Forty colorectal polyps in IBD were studied. These had been diagnosed over a period of 11 years. Dysplasia was classified according to the Vienna Classification (version 2000).
Immunohistochemistry was performed using anti-AMACR, anti-Ki67 and anti-p53 antibodies. Results: Polyps were classified as follows: 21 negative for dysplasia (ND), 10 indefinite for dysplasia (IFD), 6 low-grade dysplasia (LGD), 1 high-grade dysplasia (HGD) and 2 adenocarcinomas (ACA). AMACR positivity was observed in all polyps with HGD and ACA, 5 of the 6 LGD polyps and 3 of the 10 IFD (p=0.007). p53 immunostaining showed nuclear staining in the basal part of the crypts in 8 of the 10 IFD lesions. In ACA and HGD polyps, p53 positivity was typically observed in all epithelial cell layers (p=0.004). ACA and HGD showed diffuse and scattered staining of Ki67 along the full length of the crypts. Five lesions with LGD had extension of Ki-67 positive cells up to and into the surface epithelium. Ki67 staining in all IFD lesions was restricted to the basal third of the crypt (p<0.001). By combining the three markers, a relationship with dysplasia was statistically significant (p<0.001). Sensitivity ranged from 66.7% to 88.9% and specificity from 71.4% to 100%. The positive predictive value (PPV) for detecting dysplasia using these different antibodies ranged from 66.7% to 100% and the negative predictive value (NPV) for excluding dysplasia ranged from 85.7% to 93.3%. Conclusions: The high degree of sensitivity and specificity of AMACR, p53 and Ki67 for dysplasia in IBD suggests that these antibodies, when combined, may be useful to detect neoplastic epithelium in this condition. abstract_id: PUBMED:21266817 Immunohistochemical analysis of REG Iα expression in ulcerative colitis-associated neoplastic lesions. Background And Aims: The regenerating gene (REG)Iα has been identified by microarray analysis as a gene that is distinctly overexpressed in ulcerative colitis (UC), and its protein product is suggested to play a pivotal role in the development of UC-associated carcinoma. In the present study, we investigated the significance of REG Iα expression as a diagnostic marker of UC-associated neoplasia. Methods: Tissue samples were obtained from colectomy specimens from 31 patients with long-standing UC (mean disease duration 17.2 years, range 5-29). The lesions were evaluated according to the International Classification for Dysplasia in Inflammatory Bowel Diseases, and the sections were examined using immunohistochemistry for REG Iα and p53. Results: In the 'regenerating atypia' group, REG Iα immunoreactivity was restricted to the lower third of the UC mucosa (grade 1). Lesions classified as 'indefinite for dysplasia' also showed predominantly basal-type staining for REG Iα. However, in 'low-grade dysplasia' and 'high-grade dysplasia' lesions, the localization of REG Iα immunoreactivity expanded to the middle (grade 2) and upper (grade 3) third of the UC mucosa, respectively. The REG Iα immunostaining pattern differed significantly (p < 0.0001) between non-neoplastic and neoplastic lesions, and was significantly (p < 0.0001) associated with p53 overexpression. Conclusions: Immunohistochemical analysis of REG Iα expression is useful for differential diagnosis of non-neoplastic and neoplastic lesions in UC tissues. abstract_id: PUBMED:10435567 P53 and beta catenin expression in chronic ulcerative colitis--associated polypoid dysplasia and sporadic adenomas: an immunohistochemical study. In patients with chronic ulcerative colitis (CUC), polypoid dysplastic lesions (PDLs) are morphologically similar to sporadic adenomas (SAs), but may be biologically distinct from them and are managed differently. 
p53 mutations have been shown to occur at an earlier phase in the progression of CUC-associated neoplasia when compared with sporadic colon carcinogenesis. In contrast, APC gene mutations are common and occur at an earlier stage in the development of SA. beta catenin is a cell membrane protein that accumulates in the nucleus of colon cancer cells in response to APC gene mutations. This study was performed to test the hypothesis that CUC-associated PDLs have a different molecular profile than do CUC-associated SAs and therefore may be distinguished on this basis. Mucosal biopsy specimens of 38 benign polypoid epithelial neoplasms (17 CUC-associated PDLs and 21 CUC-associated SAs) from 33 patients with CUC and 13 SAs from patients without CUC (controls) were immunohistochemically stained for p53 and beta catenin and graded as follows: 0 = no staining, 1+ = <50% of cells positive, and 2+ = ≥50% of cells positive. The results were correlated with the clinical and histologic features and compared between the two CUC-associated polyp subgroups. Overall, six (16%) polyps were p53-positive, of which five were CUC-associated PDLs (one 1+ and four 2+) and one was a CUC-associated SA (1+) (p = 0.05). Strong (2+) p53 positivity was detected, however, in only CUC-associated PDLs (4 of 5; 80%). Nine of 32 polyps evaluated for beta catenin were positive and included 1 (8%) of 12 CUC-associated PDLs and 8 (40%) of 20 CUC-associated SAs (p = 0.06). Two of the nine beta catenin-positive polyps were strongly positive, and both were CUC-associated SAs. Non-CUC-associated (control) SAs were positive for p53 and beta catenin in 2 (15%) of 13 and 6 (46%) of 13 cases, but none in a strong (2+) fashion. No differences were observed in p53 or beta catenin staining between CUC-associated and non-CUC-associated SAs. Neither p53 nor beta catenin expression correlated with any clinical or pathologic features, including size and degree of dysplasia of the polyps. CUC-associated PDLs and CUC-associated SAs may have a different molecular genotype. In patients with CUC, the combination of strong p53 expression and absent or weak beta catenin expression is evidence in favor of a CUC-associated PDL in diagnostically difficult lesions. Furthermore, CUC-associated and non-CUC-associated SAs have a similar p53 and beta catenin immunophenotype and thus provide evidence that they are pathogenetically related neoplasms regardless of the presence or absence of colitis. abstract_id: PUBMED:10571818 p53 immunohistochemistry of ulcerative colitis-associated with dysplasia and carcinoma. In order to evaluate the usefulness of p53 immunohistochemistry (IHC) in the diagnosis of ulcerative colitis-associated colorectal carcinoma (UCACRC), ordinary paraffin sections were examined in 61 cases with ulcerative colitis (UC) and 29 control cases without UC. Among the 61 cases with UC, 11 were complicated by carcinoma coexisting with dysplasia, three with dysplasia, and two cases with adenoma. There were a total of 38 dysplasias, including 33 low grade dysplasias (LGD) and five mixed low and high grade dysplasias (LGD + HGD). The results of p53 IHC were divided into diffuse, nested, scattered and sporadic patterns for 29 control cases. Diffuse and nested patterns were presumed to reflect mutant forms of p53 protein and were defined as overexpression of p53 protein.
In non-neoplastic mucosa of UC, the frequency of p53-positive tubules was significantly higher in active phase (13.5-17.9%) than in resolving phase (3.9-6.5%) and in remission (0.7-2.4%), regardless of association with neoplasia. Eight of the 37 lesions of dysplasia (21.6%) showed p53 overexpression: 12.5% in LGD and 80.0% in LGD + HGD. The rate of p53 overexpression was significantly higher in UCACRC (90.9%) than in non-neoplastic mucosa of UC (0%), LGD and sporadic colorectal carcinoma (54.5%), but it did not differ between UCACRC and LGD + HGD. Interestingly, the mucosa without dysplasia showed p53 overexpression in one case of UCACRC. The biopsy specimen taken 4 years before the diagnosis of carcinoma revealed p53 overexpression in another case with UCACRC. These results suggest that p53 abnormalities play an important role in UC-associated tumorigenesis in its relatively early phase. For the diagnosis of dysplasia and carcinoma in UC, p53 IHC seems to be useful. abstract_id: PUBMED:9744312 Loss of heterozygosity of the von Hippel Lindau gene locus in polypoid dysplasia but not flat dysplasia in ulcerative colitis or sporadic adenomas. Carcinoma in ulcerative colitis (UC) develops from dysplastic precursor lesions, which include flat dysplasia (FD) and polypoid dysplasias (PD). PD may present as single or multiple polypoid structures or as plaque-like lesions that, independent of histological grade, are an indication for colectomy. PDs are histologically similar to adenomas and may not be readily distinguished by light microscopy. It is not known whether FD and PD are different entities, or whether they represent etiologically similar lesions with different morphological expression. We microdissected 25 cases of UC with PD and 19 samples of FD with surrounding chronic colitis (CC) in UC. Loss of heterozygosity (LOH) at the von Hippel Lindau (vHL) gene locus and the putative tumor suppressor genes APC, INK4A (9p16), and p53 was studied. LOH of the vHL gene, INK4A (9p16), and APC was also studied in 11 sporadic adenomas of the colon. LOH at the vHL locus was present in 50% of the samples of PD and in 12% of the samples of FD. LOH was seen in CC close to PD and FD in 26% and 12% of cases, respectively. No adenoma showed LOH of the vHL gene markers studied. LOH in p53 was seen in 16% of PD cases and in 42% of FD cases, and in CC close to PD and FD in 0% and 14% of cases, respectively. LOH patterns between PD and FD of the markers for APC and 9p16 were not different. LOH in APC was seen in two of five cases of adenoma. We conclude that PD and FD share genetic alterations in APC and 9p16 genes. More frequent involvement of the VHL gene in PD and surrounding CC and involvement of p53 in HGD and CC in FD may represent genetic differences between the development of PD and FD and may be the cause of the different morphology. The infrequency of LOH at the vHL locus in adenomas versus PD may serve as a discriminator between adenomas and PD in diagnostically problematic cases.
As a first step to analyze these criteria, we studied p53 and bcl-2 expression in groups of UCA and UCD along with a sporadic adenoma control group. Ninety lesions from UC areas (62 patients) were examined, including 24 UCA without high-grade dysplasia (HGD) and 66 UCD consisting of 43 polypoid and 23 flat dysplastic lesions (29 with HGD). Immunohistochemical p53 and bcl-2 expression were evaluated semiquantitatively. P53-positive cases were significantly less frequent in the UCA (4%) versus the UCD group (30%, P = .01) and the polypoid UCD subgroup (35%, P = .005). Moderate or strong bcl-2 expression was significantly more frequent in the UCA than in the UCD group (96% v 70%, P = .01) and in the UCA versus both polypoid and flat UCD subgroups. Comparison of UCA with low-grade dysplastic polypoid UCD cases alone showed a difference just below significance for p53 (P = .07). p53 and bcl-2 expression rates were very similar in the UCA group and the sporadic adenoma (n = 25) control group. These results show that UCA has phenotypic features more similar to sporadic adenomas than UCD and supports the concept that adenomas in UC have a biology different from UC-associated dysplasia. Answer: Yes, p53 immunohistochemistry appears to be useful for the optimal decision for treatment of polypoid lesions in ulcerative colitis. Immunohistochemical staining for p53 has been performed in various studies to differentiate between adenomas and dysplasia-associated lesion or mass (DALM) in ulcerative colitis. It has been reported that recognizable lesions stained positively for p53 can be diagnosed as DALMs, which traditionally indicated the need for total colectomy (PUBMED:12229164). However, findings suggest that some polypoid lesions that stain positively for p53 can be resected locally without the development of metachronous dysplasia, indicating that p53 immunohistochemistry can help identify cases where less aggressive treatment may be appropriate (PUBMED:12229164). Furthermore, p53 protein expression has been associated with genetic instability in colorectal carcinomas, and its immunohistochemical analysis has been suggested as useful for the differential diagnosis of non-neoplastic and neoplastic lesions in ulcerative colitis tissues (PUBMED:17032312, PUBMED:21266817). The expression of p53 has been found to be significantly higher in cases with loss of heterozygosity and altered chromosome 17 markers, particularly in polypoid growth carcinoma compared to non-polypoid growth carcinoma (PUBMED:17032312). Additionally, p53 immunostaining has shown nuclear staining in the basal part of the crypts in lesions indefinite for dysplasia and has been observed in all epithelial cell layers in high-grade dysplasia and adenocarcinomas, suggesting its utility in distinguishing true dysplasia from reactive atypia in polyps (PUBMED:27567189). The overexpression of p53 protein has been associated with dysplasia and carcinoma in ulcerative colitis, indicating its role in tumorigenesis in the early phase (PUBMED:10571818). In summary, p53 immunohistochemistry is a valuable tool in the assessment of polypoid lesions in ulcerative colitis, aiding in the differentiation between benign and neoplastic lesions, and guiding treatment decisions to optimize patient outcomes.
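The diagnostic accuracy figures reported in the immunohistochemistry abstracts above (e.g., the sensitivity, specificity, PPV and NPV ranges in PUBMED:27567189) all derive from a 2×2 table of marker result against final histological diagnosis. A minimal sketch of that arithmetic follows, with hypothetical counts, since the abstracts do not report the underlying tables.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic accuracy measures for a binary marker."""
    return {
        "sensitivity": tp / (tp + fn),   # marker-positive among true dysplasia
        "specificity": tn / (tn + fp),   # marker-negative among non-dysplasia
        "ppv": tp / (tp + fp),           # dysplasia among marker-positives
        "npv": tn / (tn + fn),           # non-dysplasia among marker-negatives
    }

# Hypothetical counts for a p53-like marker in 40 polyps (illustrative only;
# not the raw data behind PUBMED:27567189).
print(diagnostic_metrics(tp=8, fp=2, fn=1, tn=29))
```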
Instruction: Is capnometry helpful in children with bronchiolitis? Abstracts: abstract_id: PUBMED:27021578 Is capnometry helpful in children with bronchiolitis? Background: Acute bronchiolitis is the most frequent lower respiratory tract infection in infants. Only small subsets of patients develop severe disease resulting in hospitalization despite having no identifiable risk factors. There is still a debate as to the role of capnometry in assessing ventilation in children with acute respiratory distress, and bronchiolitis in particular. Methods: This was a prospective, single blind cohort study in which children younger than two years presenting to the emergency department (ED) with bronchiolitis were included. Our primary outcome was the correlation between the end tidal CO2 (EtCO2) and the clinical decision of hospital admission and discharge. Our secondary outcome measure was the correlation of EtCO2 upon arrival to the ED and clinical measures of bronchiolitis severity. Finally, by using multivariate models, we looked for other parameters that could contribute to the prediction of illness severity. Results: One hundred and fourteen children with bronchiolitis were evaluated. Their median EtCO2 upon arrival to the ED was 34 mmHg (range 24-65 mmHg). EtCO2 values upon admission or discharge were not statistically different among patients who were hospitalized and among those who were discharged from the ED. Among admitted patients, we found no correlation between capnometry readings at admission and number of oxygen desaturation days, nor with the length of hospitalization. Wang clinical respiratory severity score was found, by using multivariate models, to predict nasogastric tube need, oxygen desaturation days, and length of hospitalization. Conclusion: Capnometry readings upon arrival to the ED did not predict hospital admission or hospital discharge eligibility. Among hospitalized patients, EtCO2 did not correlate with the evaluated disease severity measures. Wang score was found to be the most consistent predictor of significant outcomes. abstract_id: PUBMED:22929141 Capnometry as a predictor of admission in bronchiolitis. Objectives: Bronchiolitis is a dynamic condition, and predicting clinical deterioration can be difficult. The objective of this study was to determine whether capnometry readings among bronchiolitic children admitted to the hospital are significantly different from those discharged from the emergency department. Methods: We prospectively studied a convenience sample of children younger than 24 months with clinical bronchiolitis. A single end-tidal CO2 (ETCO2) reading was taken before treatment, and a clinical work of breathing score was assigned to each patient. Treating physicians and nurses were blinded to capnometry readings. The decision to admit was based on the judgment of the attending physician. Descriptive statistics and appropriate hypothesis testing were performed. A receiver operating characteristic curve was constructed for the association between admission and capnometry readings. The α was set at 0.05 for all comparisons. Results: One hundred five children with bronchiolitis were included for study. Capnometry readings for admitted (mean, 32.6 mm Hg; 95% confidence interval [CI], 30.3-34.9 mm Hg) and discharged (mean 31.4 mm Hg; 95% CI 29.8-33.0 mm Hg) bronchiolitic children were not significantly different. 
Capnometry readings for low (mean, 31.7 mm Hg; 95% CI, 29.5-33.8 mm Hg), intermediate (mean, 32.1 mm Hg; 95% CI, 30.1-34.1 mm Hg), and high (mean, 30.5 mm Hg; 95% CI, 19.3-41.7 mm Hg) work of breathing (score) ranges were not significantly different. Conclusions: Capnometry readings are not useful in predicting admission for children younger than 2 years with clinical bronchiolitis. There are no significant differences in capnometry readings among bronchiolitic children with low, medium, and high work of breathing scores. abstract_id: PUBMED:26987508 Capnometry in children with bronchiolitis: A pathophysiological point of view. N/A abstract_id: PUBMED:33023258 Normal Percentiles for Respiratory Rate in Children-Reference Ranges Determined from an Optical Sensor. (1) Background: Increased respiratory rates (RRs) are described in several medical conditions, including pneumonia, bronchiolitis and asthma. There is variable methodology on how centiles for RR are derived in healthy children. Available age percentiles for RR have been generated using methods that have the potential themselves to alter the rate. (2) Methods: An optical respiratory sensor was used to measure RR. This technique enabled recording in awake children without the artefact of the observer's presence on the subject's RR. A cross-sectional sample of healthy children was obtained from maternity wards, childcare centres and schools in Brisbane, Queensland, Australia. (3) Results: RRs were observed in 560 awake and 103 sleeping children of which data from 320 awake and 94 sleeping children were used to develop centile charts for children from birth to 13 years of age. RR is higher when children are awake compared to asleep. There were significant differences between awake and sleeping RR in young children. The awake median RR was 59.3 at birth and 25.4 at 3 years of age. In comparison, the median sleeping RR was 41.4 at birth and 22.0 at 3 years. (4) Conclusions: The centile charts will assist in determining abnormal RRs in children and will contribute to further systematic reviews related to this important vital sign. This is particularly in relation to the data on children aged from 0 to 3 years, where data are presented on both the awake and sleeping state. Many studies in the literature fail to acknowledge the impact of sleep state in young children on RR. abstract_id: PUBMED:27500927 Management of acute viral bronchiolitis in children: Evidence beyond guidelines. Acute viral bronchiolitis is one of the leading causes of worldwide admission of children under 2 years of age during winter months. There is a lack of consensus regarding the clinical definition of acute viral bronchiolitis in children and hence the management varies across the globe. The purpose of this article is to review the epidemiology, etiology, risk factors, pathophysiology, clinical presentation, assessment and management of children with respiratory syncytial virus (RSV) bronchiolitis. The available evidence in the worldwide literature suggests that supportive and symptomatic management is still the mainstay of management in this condition. The key to reducing the morbidity and mortality in children with RSV bronchiolitis is through prevention of infection through immunoprophylaxis especially in high-risk children. abstract_id: PUBMED:31516647 Pneumorrhachis in children: A report of two cases and review of the literature. 
Pneumorrhachis refers to the clinical presentation of air within the spinal canal, and it is rarely associated with pneumomediastinum, particularly in young children. Pneumorrhachis associated with pneumomediastinum is generally asymptomatic. Here we report 2 unusual cases involving very young children with pneumorrhachis secondary to pneumomediastinum and present a review of the relevant literature. Case 1 involved a 4-year-old girl who presented with wheezing, violent coughing, and dyspnea associated with bronchiolitis. Case 2 involved a 3-year-old boy who presented with wheezing, violent coughing, and dyspnea associated with interstitial pneumonia possibly caused by graft-versus-host disease with human herpesvirus 6 infection after allogeneic hematopoietic stem cell transplantation. In both cases, pneumorrhachis improved with oxygen inhalation therapy and treatment of the underlying disease. Pneumorrhachis is rarely associated with neurological problems; however, decompressive laminectomy may be indicated to relieve the air block. Because pneumorrhachis is rare in children and neurological sequelae may be difficult to identify, close clinical and radiographic observations are necessary. Plain radiography is not sufficient, and computed tomography should be performed to rule out intraspinal air. abstract_id: PUBMED:32983545 The prevalence of human bocavirus in <2-year-old children with acute bronchiolitis. Acute bronchiolitis is one of the most common lower respiratory tract infections in children less than 2 years of age. Nowadays, molecular methods provide an opportunity to better understand the etiology of bronchiolitis. Several viral agents including Respiratory syncytial virus (RSV), Rhinovirus, Parainfluenza and Human bocavirus (HBoV) are responsible for acute bronchiolitis. A growing number of studies have examined the prevalence of HBoV in patients with bronchiolitis. The present systematic review and meta-analysis were conducted to determine the pooled prevalence of HBoV in the respiratory samples of children with acute bronchiolitis. A literature search was conducted in the databases of PubMed, Scopus and Web of Science to recruit studies reporting the frequency of HBoV in <2-year-old children with acute bronchiolitis from 2005 to 2019. Only studies that used polymerase chain reaction (PCR)-based methods to detect the virus in nasopharyngeal samples were included. A total of 22 studies assessing 6751 cases were analyzed. According to the meta-analysis based on the random-effects model, the overall prevalence of HBoV in children <2 years old was 13% (95% CI: 0.09-0.17). Additionally, the rates of single (as the sole organism) and mixed (in combination with other viruses) HBoV infections were 4% and 9%, respectively. This study showed a high rate of HBoV detection in children with acute bronchiolitis. HBoV testing should be considered as part of a diagnostic test panel for respiratory infections in children with bronchiolitis. abstract_id: PUBMED:34866816 Viral Bronchiolitis in Children: Less is More. Jerath N. Viral Bronchiolitis in Children: Less is More. Indian J Crit Care Med 2021;25(11):1219-1220. abstract_id: PUBMED:38324665 Emergency management and asthma risk in young Medicaid-enrolled children with recurrent wheeze. Objectives: To describe clinical characteristics of young children presenting to the emergency department (ED) for early recurrent wheeze, and determine factors associated with subsequent persistent wheeze and risk for early childhood asthma.
Methods: Retrospective cohort study of Medicaid-enrolled children 0-3 years old with an index ED visit for wheeze (e.g. bronchiolitis, reactive airway disease) from 2009 to 2013, and at least one prior documented episode of wheeze at an ED or primary care visit. The primary outcome was persistent wheeze between 4 and 6 years of age. Demographics and clinical characteristics were collected from the index ED visit. Logistic regression was used to estimate the association between potential risk factors and subsequent persistent wheeze. Results: During the study period, 41,710 children presented to the ED for recurrent wheeze. Mean age was 1.3 years; 59% were male, 42% Black, and 6% Hispanic. At index ED visits, the most common diagnosis was acute bronchiolitis (40%); 77% of children received an oral corticosteroid prescription. Between 4 and 6 years of age, 11,708 (28%) children had persistent wheeze. A greater number of wheezing episodes was associated with an increased odds of ED treatment with asthma medications. Subsequent persistent wheeze was associated with male sex, Black race, atopy, prescription for bronchodilators or corticosteroids, and greater number of visits for wheeze. Conclusions: Young children with persistent wheeze are at risk for childhood asthma. Thus, identification of risk factors associated with persistent wheeze in young children with recurrent wheeze might aid in early detection of asthma and initiation of preventative therapies. abstract_id: PUBMED:27365695 Comparison of Nebulised Salbutamol and L-epinephrine in First Time Wheezy Children. Background: Salbutamol is the standard recommendation in bronchial asthma. However, the use of bronchodilators in wheeze associated respiratory infections including bronchiolitis continues to be controversial. The aim of this study was to compare the efficacy of nebulised salbutamol versus epinephrine in 'first time' wheezy children using clinical parameters and airway resistance. Methods: Sixty children between two and 60 months of age were recruited, 30 in each treatment group. Children received periodic (0, 20, 40 minutes) doses of either salbutamol (0.15mg/kg with 3ml saline subject to a minimum of 2.5mg) or laevo-epinephrine (1:1000, 0.5ml/kg subject to a maximum of 2.5ml with 3ml saline) via nebuliser along with oxygen. Changes in heart rate (HR), oxygen saturation (SpO2), respiratory rate (RR) and respiratory distress assessment instrument (RDAI) were assessed along with measurement of airway resistance using the interrupter method (Hand held spirometer-Microloop with MicroRint module). Results: The respiratory status was better in the epinephrine group with significant improvement in RR, RDAI score, SpO2 and fall of airway resistance. There were no significant side effects noted in either group. Conclusion: Nebulised epinephrine is a useful and safe drug for wheezy children and is superior to salbutamol. This needs validation by large multicentric randomized blinded studies. Answer: Based on the provided abstracts, capnometry does not appear to be helpful in predicting hospital admission or discharge for children with bronchiolitis. Two studies specifically investigated the utility of capnometry in this context. The first study (PUBMED:27021578) found no statistical difference in end-tidal CO2 (EtCO2) values upon admission or discharge between patients who were hospitalized and those who were discharged from the emergency department (ED).
Additionally, among admitted patients, there was no correlation between capnometry readings at admission and the number of oxygen desaturation days or the length of hospitalization. The Wang clinical respiratory severity score was identified as a better predictor of significant outcomes such as the need for a nasogastric tube, oxygen desaturation days, and length of hospital stay. Similarly, the second study (PUBMED:22929141) reported that capnometry readings were not significantly different between children with bronchiolitis who were admitted to the hospital and those who were discharged from the ED. Furthermore, there were no significant differences in capnometry readings among children with low, medium, and high work of breathing scores. Therefore, based on the evidence from these studies, capnometry does not seem to be a useful tool for predicting the need for hospital admission or for assessing the severity of bronchiolitis in children (PUBMED:27021578, PUBMED:22929141). Other clinical measures, such as the Wang score, may be more consistent predictors of disease severity and the need for hospitalization in this patient population.
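One of the studies above (PUBMED:22929141) constructed a receiver operating characteristic curve for the association between admission and capnometry readings. A hedged sketch of such an analysis is shown below using scikit-learn; the EtCO2 values are simulated to match the group means reported in the abstract and are not the study's raw data. The point is only that heavily overlapping distributions yield an area under the curve near 0.5, i.e., no discriminative value.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Simulated EtCO2 (mm Hg), loosely matching the abstract's group means
# (admitted ~32.6, discharged ~31.4) -- heavily overlapping distributions.
admitted = rng.normal(32.6, 6.0, 40)
discharged = rng.normal(31.4, 6.0, 65)

etco2 = np.concatenate([admitted, discharged])
label = np.concatenate([np.ones(40), np.zeros(65)])  # 1 = admitted

auc = roc_auc_score(label, etco2)
fpr, tpr, thresholds = roc_curve(label, etco2)
print(f"AUC = {auc:.2f}  (values near 0.5 indicate no discrimination)")
```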
Instruction: Should we exclude elderly patients with chronic obstructive pulmonary disease from a long-time ambulatory pulmonary rehabilitation programme? Abstracts: abstract_id: PUBMED:22549658 Should we exclude elderly patients with chronic obstructive pulmonary disease from a long-time ambulatory pulmonary rehabilitation programme? Objective: To assess the outcomes of a 6-month comprehensive multidisciplinary outpatient pulmonary rehabilitation programme in patients with chronic obstructive pulmonary disease according to age. Design: Prospective cohort study. Patients: A total of 140 patients with chronic obstructive pulmonary disease (Global Initiative for Chronic Obstructive Lung Disease (GOLD) 3-4) admitted to our centre for pulmonary rehabilitation. Methods: Patients were divided into 3 groups: group A (< 65 years), group B (65-74 years) and group C (≥ 75 years). All the patients received an education and individualized training programme. Pulmonary rehabilitation efficacy was evaluated at 6 months of treatment and 12 months post-treatment. Results: A total of 116 patients completed the pulmonary rehabilitation programme: 59 in group A (85.5%), 40 in group B (80%) and 17 in group C (80.9%). All the parameters studied (number of sessions, 6-min walking distance, isometric quadriceps strength, health-related quality of life, maximal load, peak oxygen uptake, maximal inspiratory and expiratory pressures) were significantly improved in each of the groups at 3 and 6 months compared with baseline. Moreover, percentage changes from baseline at 6 months for all of the parameters studied were not significantly different between age-groups. Conclusion: Pulmonary rehabilitation is efficient in elderly patients with severe and very severe chronic obstructive pulmonary disease, and their compliance with pulmonary rehabilitation was similar to that seen in younger groups. Therefore, elderly patients with chronic obstructive pulmonary disease should not be denied pulmonary rehabilitation. abstract_id: PUBMED:32009862 Survival prediction in patients with chronic obstructive pulmonary disease following a pulmonary rehabilitation programme in Hong Kong. Background And Objective: Pulmonary rehabilitation programme (PRP) is an important component in the management of chronic obstructive pulmonary disease (COPD). However, to date there has been limited literature on the survival outcomes of patients with COPD after a PRP in Hong Kong. This study aimed to investigate the outcomes of a pulmonary rehabilitation programme on the survival rates of a retrospective cohort of patients with COPD. Methods: This was a retrospective study that included subjects who participated in the PRP in a rehabilitation hospital from the year 2003 to 2015. A total of 431 patients with chronic obstructive pulmonary disease were identified from the electronic record system of the hospital. The dataset was split into two age groups for reporting and analysis using the mean age of 72 as the cut-off. Their median survival times were calculated using Kaplan-Meier analysis. A Cox proportional hazards regression model was used to explore factors that predicted better survival. The most significant predictors were used as strata, and their respective effects on survival functions were analysed with Kaplan-Meier analysis again. Results: The overall median survival of the cohort was 4.3 years. The median survival times of the younger patient group (aged <72) and the older patient group (aged ≥72) were 5.3 and 3.6 years, respectively.
For patients aged <72 years, Moser's Activities of Daily Living class and the pulmonary rehabilitation programme completion rate were the most significant survival predictors. For patients aged ≥72 years, the Monitored Functional Task Evaluation score was the most significant survival predictor. Conclusion: Moser's Activities of Daily Living class ≥2 and non-completion of the PRP in the younger group, and low exercise capacity (Monitored Functional Task Evaluation score <17) in the older group, were identified as significant predictors of poor survival. The findings of this study help identify those patients with COPD who need to be more intensively treated and closely monitored. abstract_id: PUBMED:31435281 Effect of a short term pulmonary rehabilitation programme on exercise capacity, pulmonary function and health related quality of life in patients with COPD. Objectives: Patients with chronic obstructive pulmonary disease (COPD) have been shown to benefit from pulmonary rehabilitation programmes. The purpose of this study was to ascertain the effects of a short-term pulmonary rehabilitation programme (PRP) on exercise capacity, pulmonary function and quality of life in patients with COPD. Methods: A pre-test and post-test experimental design was conducted on patients from the outpatient physical therapy department. Thirty stable COPD patients with mild to severe airflow obstruction (mean age 54.1 ± 5.22 years; FEV1 between 0.30 and 0.80 of predicted; FEV1/FVC < 0.70) were recruited for a 6-week comprehensive pulmonary rehabilitation programme (PRP) that included education and exercise training. Exclusion criteria included the following: cardiovascular conditions likely to be aggravated by exercise, locomotor impairment, haemoptysis, cognitive impairment, severe pulmonary hypertension, and metastatic cancer. The patients were randomly divided into experimental and control groups. Results: Six-minute walk distance (6MWD), pulmonary function (FEV1, FVC, FEV1/FVC), and St. George's Respiratory Questionnaire (SGRQ) scores were measured at baseline, at the end of the 3rd week and at the end of the 6th week. The results showed significant improvements in 6MWD and SGRQ (p < 0.05). Conclusion: A 6-week outpatient-based PRP significantly improves exercise capacity and quality of life, irrespective of the degree of airflow obstruction. abstract_id: PUBMED:25312798 Differences in patient outcomes between a 6, 7 and 8 week pulmonary rehabilitation programme: A service evaluation. Objective: To investigate changes and differences in patient outcomes over time for 6, 7 and 8 week pulmonary rehabilitation programmes in order to identify optimal duration. Setting: Community based pulmonary rehabilitation programmes in the East of England. Participants: In total 363 participants completed one of the three pulmonary rehabilitation programmes. Patients with a chronic respiratory condition showing a commitment to the pulmonary rehabilitation programme and no contraindications to exercise were included. Intervention: Pulmonary rehabilitation twice a week for 6, 7 or 8 weeks. Main Outcome Measures: St George's Respiratory Questionnaire (SGRQ), Clinical COPD Questionnaire (CCQ), Hospital Anxiety and Depression Scale (HADS) and Incremental Shuttle Walk Test (ISWT). Results: All programmes showed improvements in ISWT post rehabilitation, with the 8 week programme showing the greatest improvement.
Conclusion: Findings show that greatest improvements in terms of exercise capacity may be seen from 8 week pulmonary rehabilitation programmes, but that improvement for the ISWT can be obtained from 6, 7 or 8 week programmes. abstract_id: PUBMED:37842788 Comprehensive pulmonary rehabilitation for a 90-year-old patient with intertrochanteric fracture complicated by chronic obstructive pulmonary disease: a case report. Introduction: Chronic obstructive pulmonary disease (COPD)-induced osteoporosis, myasthenia, and disequilibrium are important risk factors for hip fracture, and decreased respiratory function after hip fracture surgery can decelerate recovery of activities of daily living (ADL) in elderly patients. Case Presentation: A 90-year-old male patient underwent an open reduction and intramedullary pinning surgery for right femoral intertrochanteric fracture. After surgery, he remained confined to bed with pain and swelling in the right lower extremity. Due to his history of COPD, he had a postoperative pulmonary infection and respiratory insufficiency. Intervention: This patient received routine rehabilitation after surgery. However, severe respiratory symptoms prevented him from completing the rehabilitation procedure. Therefore, comprehensive pulmonary rehabilitation including airway clearance techniques, inspiratory resistance training, aerobic training, respiratory muscle training, and intermittent low-flow oxygen inhalation was carried out to enhance the recovery process. Outcomes: After 4 weeks of treatment, the patient improved from continuous oxygen inhalation to being able to complete bed resistance training without supplemental oxygen. After 8 weeks, ADL reached independence, while lower limb muscle strength, pulmonary function parameters, fatigue index, and cough efficiency were improved as compared to test values obtained before treatment. Conclusion: Compared with the 6-month recovery time reported in the literature for patients with hip fractures to achieve independence with basic ADLs, the recovery time of this patient was shortened to 2 months with the application of comprehensive pulmonary rehabilitation. It is feasible to carry out pulmonary rehabilitation for elderly patients after fracture surgery, but prospective clinical trials are needed to verify its efficacy. abstract_id: PUBMED:31268214 Effectiveness of pulmonary rehabilitation in elderly patients with COPD: A systematic review and meta-analysis of randomized controlled trials. Aim: The review aimed to evaluate the effects of pulmonary rehabilitation in elderly patients with chronic obstructive pulmonary disease (COPD). Background: With an increase in published reports on pulmonary rehabilitation, there is a need for a meta-analysis to measure the effects of pulmonary rehabilitation in elderly COPD patients. Design: A systematic review and meta-analysis. Data Sources: The Cochrane library (Issue 4, 2018), Web of Science (1975 to April 2018), Embase (1974 to April 2018), Pubmed (1966 to April 2018), CINAHL (1982 to April 2018), JBI (The Joanna Briggs Institute) (1996 to April 2018), CNKI (China National Knowledge Infrastructure) (1979 to April 2018), CBM (SinoMed) (1982 to April 2018), and Wanfang Data (1900 to April 2018) were searched. Review Methods: Six outcome indicators were utilized for the effects of pulmonary rehabilitation. Two reviewers selected trials, evaluated the quality, and extracted data. Meta-analysis was performed by using the RevMan 5.3 software. 
Results: Eight studies recruited 414 elderly patients. Pulmonary rehabilitation resulted in significantly improved exercise capacity and quality of life in elderly people but with no influence on oxygen saturation compared with the control group. Conclusion: There is a need to provide more detailed pulmonary rehabilitation programs for elderly patients with COPD. abstract_id: PUBMED:37048597 Clinical Effects of Pulmonary Rehabilitation in Very Old Patients with COPD. Background: Pulmonary rehabilitation (PR) improves physical and mental performance as well as quality of life in patients with chronic obstructive pulmonary disease (COPD). However, data on outcomes in very old patients are insufficient. We analyzed whether the elderly with COPD benefit in a similar way to younger patients from participation in an inpatient PR according to the assessments usually collected. Methods: Data from 3173 patients with COPD were retrospectively analyzed. Patients were referred to PR at the Zurich RehaZentren, Switzerland, between January 2013 and December 2019. PR was performed 6 days per week with an average duration of 18.85 days. Functional Independence Measurement (FIM), Feeling Thermometer (FT), and 6-Minute Walk Test (6MWT) were recorded on admission and discharge. Results: In all age groups, the 6MWT and FT improved significantly. FIM results also showed a significant increase. The results of the different age groups showed no significant differences in percentage improvements according to the assessments that were considered. Conclusions: All patient groups with COPD, even the oldest (>85 years), benefited from PR regardless of their age and according to the assessments. Prospective studies are needed to support this hypothesis. abstract_id: PUBMED:32043808 The effectiveness, safety and compliance of Zheng's supine rehabilitation exercise as a rehabilitation programme among elderly patients with AECOPD. Purpose: We investigated the effectiveness, safety and compliance of Zheng's Supine Rehabilitation Exercise (ZSRE) as a rehabilitation programme among elderly patients with acute exacerbation of chronic obstructive pulmonary disease (AECOPD). Patients And Methods: A total of 82 elderly patients with AECOPD were divided into a rehabilitation group and control group on their admission day, and both groups received routine medical treatment. Patients in the rehabilitation group started ZSRE on the second day of admission and continued until 8 weeks after discharge.
Progress was measured and outcomes monitored, the results indicating that such programmes could have a big impact, particularly on patients who are identified in the early stages of the disease. abstract_id: PUBMED:25835594 Guidelines for outpatient pulmonary rehabilitation in Austria. Pulmonary rehabilitation has become a standard of care for patients with chronic lung disease. It has been clearly demonstrated that pulmonary rehabilitation improves exercise capacity and quality of life in patients with chronic lung disease and reduces the number of hospital days and other measures of health-care utilization in patients with chronic obstructive pulmonary disease (COPD). This is an update of the guidelines for outpatient pulmonary rehabilitation in Austria, closely related to the official American Thoracic Society and European Respiratory Society Statement published in 2013. The guidelines set quality standards for the required structural and personnel qualifications. Answer: No, elderly patients with chronic obstructive pulmonary disease (COPD) should not be excluded from a long-time ambulatory pulmonary rehabilitation programme. Evidence suggests that pulmonary rehabilitation is efficient in elderly patients with severe and very severe COPD, and their compliance with pulmonary rehabilitation is similar to that seen in younger groups (PUBMED:22549658). Additionally, a systematic review and meta-analysis of randomized controlled trials found that pulmonary rehabilitation resulted in significantly improved exercise capacity and quality of life in elderly people with COPD (PUBMED:31268214). Furthermore, clinical effects of pulmonary rehabilitation in very old patients with COPD showed that all patient groups, even those older than 85 years, benefited from pulmonary rehabilitation regardless of their age (PUBMED:37048597). Moreover, a study on the survival prediction in patients with COPD following a pulmonary rehabilitation programme in Hong Kong indicated that for patients aged ≥72 years old, the Monitored Functional Task Evaluation score was the most significant survival predictor, suggesting that pulmonary rehabilitation can have a positive impact on survival in older patients (PUBMED:32009862). Another study demonstrated that a short-term pulmonary rehabilitation programme significantly improves exercise capacity and quality of life, irrespective of the degree of airflow obstruction (PUBMED:31435281). Additionally, a case report of comprehensive pulmonary rehabilitation for a 90-year-old patient with an intertrochanteric fracture complicated by COPD showed that the patient's recovery time was shortened with the application of pulmonary rehabilitation (PUBMED:37842788). Lastly, the effectiveness, safety, and compliance of a rehabilitation programme among elderly patients with acute exacerbation of COPD were also demonstrated in a study where the rehabilitation group showed significant improvements in various health assessments (PUBMED:32043808). In conclusion, the evidence supports the inclusion of elderly patients in pulmonary rehabilitation programmes, as they can derive significant benefits in terms of exercise capacity, quality of life, and potentially survival.
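The survival findings cited above (PUBMED:32009862) rest on Kaplan-Meier estimation of median survival and a Cox proportional hazards model for predictors. A minimal sketch of that workflow with the lifelines library follows; the cohort is simulated for illustration, and the covariate names (age_ge_72, completed_prp) are stand-ins for the study's actual predictors, not its data.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 60

# Simulated cohort (illustration only): older age and non-completion of the
# programme shorten survival, echoing the direction of the cited findings.
age_ge_72 = rng.integers(0, 2, n)
completed_prp = rng.integers(0, 2, n)
hazard = 0.15 * np.exp(0.6 * age_ge_72 - 0.5 * completed_prp)
time = rng.exponential(1.0 / hazard)
censor = rng.uniform(0, 10, n)             # administrative censoring (years)
df = pd.DataFrame({
    "time": np.minimum(time, censor),
    "event": (time <= censor).astype(int),  # 1 = death observed
    "age_ge_72": age_ge_72,
    "completed_prp": completed_prp,
})

km = KaplanMeierFitter().fit(df["time"], df["event"])
print("median survival (years):", km.median_survival_time_)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios for age group and programme completion
```

Stratifying the Kaplan-Meier fit by the strongest Cox predictors, as the study describes, amounts to calling the fitter separately on each subgroup and comparing the resulting survival curves.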
Instruction: Are special investigations of value in the management of patients with fecal incontinence? Abstracts: abstract_id: PUBMED:9269804 Are special investigations of value in the management of patients with fecal incontinence? Purpose: The aim of this study was to determine whether special investigations significantly alter either the diagnosis or the management plan of patients with fecal incontinence assessed on the basis of a structured history and physical examination alone. Methods: Fifty consecutive patients with fecal incontinence were prospectively studied in a tertiary referral clinic. Each patient was assessed by two clinicians who independently formulated a diagnosis and treatment plan based on the history and physical examination. The resulting 100 patient assessments were then compared with the final diagnosis and treatment plan formulated on completion of endoanal ultrasound, anal manometry, external sphincter electromyography, and defecating proctography. Results: In the assessment of fecal incontinence, the addition of special investigations altered the diagnosis of the cause of incontinence based on history and examination alone in 19 percent of cases. The management plan was altered in 16 percent of cases. Special investigations were most useful in separating neuropathy from rectal wall disorders and in demonstrating the unexpected presence of internal sphincter defects and neuropathy. Conclusions: Even experienced colorectal surgeons will misdiagnose up to one-fifth of patients presenting with fecal incontinence if assessment is based on the history and physical examination alone. However, surgically correctable causes of incontinence are rarely missed on clinical assessment. abstract_id: PUBMED:10773727 Prospective assessment of the clinical value of anorectal investigations. Background/aim: Anorectal physiological testing and imaging have become part of routine colorectal and gastro-enterological practice, but their clinical value is controversial. We prospectively evaluated the new diagnostic information, impact on management and prognostic information provided by anorectal testing. Methods: One hundred consecutive patients referred for testing were studied. The referring doctor's diagnosis, reason for referral, planned management and expectation of the value of investigations were recorded. Incontinent patients underwent anorectal physiological testing and endo-anal ultrasound. Patients with constipation underwent anorectal physiological testing and a study of whole-gut transit time to distinguish between slow and normal transit. Constipated patients over age 45, or those younger patients who digitated to assist defaecation, also underwent evacuation proctography to identify large rectoceles. Results: Fifty-one patients had faecal incontinence. Of 12 patients with suspected anterior external anal sphincter obstetric damage, all of whom were planned for surgical repair, 3 were unsuitable for repair, 3 had a normal sphincter, 2 had a weak but structurally intact sphincter, and 1 had internal anal sphincter damage only. Of 6 patients with failed anterior repair, 3 had a defect suitable for repeat repair, 2 had intact repairs and good function, and 1 had extensive damage requiring reconstructive surgery. The tests also influenced management for incontinent patients after haemorrhoidectomy surgery (n = 5), after fistula surgery (n = 5), with congenital abnormalities (n = 3), after cerebrovascular accident (n = 1) and those with no presumptive diagnosis (n = 15).
Of the 20 patients referred with constipation, demonstration of a recto-anal reflex in 1 patient with a megarectum excluded the need for full-thickness biopsy. A further patient with altered electrosensation went on to have a neurological lesion defined. Definition of slow transit in some patients did not immediately affect management. New information or a change in management was provided in patients referred for pre-operative assessment (n = 13). Information was gained in only 1 of 7 patients with solitary rectal ulcer and in none of the 8 patients with anal pain. Conclusion: Anorectal assessment provides important diagnostic and prognostic information and directly alters management in patients with benign anorectal disorders. abstract_id: PUBMED:37010063 Pelvic floor investigations for anal incontinence: Are they useful to predict outcomes from conservative treatment? Aims: There are several options for treating anal incontinence (AI), with limited success rates in long-term follow-up. Patient selection is important to avoid unnecessary investigations and therapies. The aim of this review is to assess the utility of pelvic floor investigations to predict success from conservative treatment in AI. Methods: Baseline demographics, severity scores, and pelvic floor investigations of 490 patients with AI symptoms were retrospectively reviewed. Patient-reported outcomes were used to define success of conservative treatment. Results: Bivariate analysis showed that gender, St Mark's incontinence score, Bowel continence and quality of life domains of International Consultation on Incontinence Modular Questionnaire-Bowel symptoms score, Bristol stool chart, anal squeeze pressure, enterocoele, leak of contrast at rest, and dyssynergia in defecography were associated with patients' outcomes from conservative treatment (p < 0.05). Multivariate analysis showed that only the Bowel continence score was an independent predictor of patients' success with treatment. Conclusions: Pelvic floor investigations are of limited value to predict success of conservative treatment and they should be reserved for patients who fail noninvasive management and might require surgical intervention. abstract_id: PUBMED:11845801 Controlling faecal incontinence (including constipation and bowel dysfunction). Spina bifida may affect faecal continence in many ways. An understanding of the issues can assist general practitioners in supporting their patients through the process of learning effective management, alongside specialist centres. abstract_id: PUBMED:26654481 Management of functional nonretentive fecal incontinence in children: Recommendations from the International Children's Continence Society. Background: Fecal incontinence (FI) in children is frequently encountered in pediatric practice, and often occurs in combination with urinary incontinence. In most cases, FI is constipation-associated, but in 20% of children presenting with FI, no constipation or other underlying cause can be found - these children suffer from functional nonretentive fecal incontinence (FNRFI). Objective: To summarize the evidence-based recommendations of the International Children's Continence Society for the evaluation and management of children with FNRFI. Recommendations: Functional nonretentive fecal incontinence is a clinical diagnosis based on medical history and physical examination. Except for determining colonic transit time, additional investigations are seldom indicated in the workup of FNRFI.
Treatment should consist of education, a nonaccusatory approach, and a toileting program encompassing a daily bowel diary and a reward system. Special attention should be paid to psychosocial or behavioral problems, since these frequently occur in affected children. Functional nonretentive fecal incontinence is often difficult to treat, requiring prolonged therapies with incremental improvement on treatment and frequent relapses. abstract_id: PUBMED:30294989 Children who soil: A review of the assessment and management of faecal incontinence. Soiling is a common and distressing condition affecting children. In the vast majority of patients, it is associated with constipation. Most constipation is functional and is best thought of as difficulty achieving adequate bowel emptying. In a small minority of patients, there is no associated constipation, so-called non-retentive faecal incontinence. The aetiology of this latter condition in children remains unclear. The mainstay of management in all cases is a regular toileting programme, together with laxatives as required. This must be individualised considering the diagnosis, the age of the child and the psychosocial factors affecting the child and family. The diagnosis is made with a thorough history and examination, supplemented in some cases with targeted investigations. Engaging the child and family in a long-term treatment programme (at least 6 months to 2 years) is essential for treatment success. The following clinical practice guideline and algorithm for the assessment and management of children who soil represents consensus opinion using available evidence. abstract_id: PUBMED:26872572 Do we manage incontinence in children and adults with special needs adequately? ICI-RS 2014. Aims: To review studies on the associations of incontinence and special needs in children and adults and to outline future directions in research and clinical care. Materials And Methods: A review of literature was conducted. Open questions and future directions were discussed during the ICI-RS meeting in 2014. Results: Special needs comprise a wide variety of conditions and disabilities. Individuals with special needs carry a greater risk for all types of incontinence. There is a high tendency for incontinence to persist from childhood into adulthood. Many people do not receive adequate medical care for their incontinence. Conclusions: More detailed research is needed, especially in the adult population with special needs. Assessment and treatment of incontinence should be offered routinely to all those with special needs. abstract_id: PUBMED:18590063 Limited predictive value of diagnostic tests for outcomes following pelvic floor physiotherapy in patients with faecal incontinence Objective: To develop an efficient diagnostic strategy for patients with faecal incontinence to identify subgroups that may benefit from pelvic floor physiotherapy. Design: Cohort study. Method: Assessment of consecutive patients with faecal incontinence treated at 15 hospitals in a 24-month period. In addition to medical history and physical examination, a complete diagnostic work-up was performed. All patients were then offered standardised pelvic floor physiotherapy. Treatment outcome was evaluated by change in Vaizey incontinence score 3 months after therapy. Linear regression analysis was used to calculate the predictive value of each diagnostic test and combination of tests with regard to treatment outcomes. 
Results: Of the 281 patients included (mean age 59 years), 252 were female. The median Vaizey score was 18 at baseline and 16 after therapy (p < 0.001). The Vaizey score improved in 143/239 patients (60%). The presence of perineal or perianal scarring on physical examination or internal anal sphincter atrophy on endoanal MRI was associated with a negative treatment outcome. A high maximal squeeze pressure by anorectal manometry was associated with a positive treatment outcome. The predictive value of information obtained by medical history and additional tests was limited (R²: 0.23; p = 0.02). Conclusion: Pelvic floor physiotherapy provided a modest improvement in symptoms of faecal incontinence. The predictive value of additional tests in predicting outcomes following pelvic floor physiotherapy was limited. Subgroups of patients could not be identified based on diagnostic test information. abstract_id: PUBMED:33810408 Continence Status and Presence of Pressure Skin Injury among Special Elderly Nursing Home Residents in Japan: A Nationwide Cross-Sectional Survey. Urinary and fecal incontinence as well as skin pressure injury are common healthcare problems in nursing homes; however, the prevalence and related risk factors were not well understood in the Japanese special elderly nursing home settings. We surveyed the prevalence of urinary, fecal and double incontinence, and skin pressure injury among the elderly living in special elderly nursing homes in Japan. A nationwide cross-sectional epidemiological survey was conducted with a total of 4881 residents. The prevalence of urinary, fecal and double incontinence was 82.9%, 68.9% and 64.9%, respectively. Skin pressure injury was found in 283 residents (283/4881, 5.8%). Age, Care-Needs level, loss of voiding desire, and fecal incontinence were significant risk factors for urinary incontinence. Residential period, Care-Needs level, loss of voiding and defecation desires, and urinary incontinence were significant risk factors for fecal incontinence. Only male sex was a significant risk factor for skin pressure injury. Our study revealed continence status and the prevalence of pressure skin injury among older adult residents who receive end-of-life care in special elderly nursing homes in Japan. Further studies should be conducted to examine whether recovery of urinary and fecal sensations improves continence status. abstract_id: PUBMED:29324429 A Pilot Clinical Study of a Safe and Efficient Stool Management System in Patients With Fecal Incontinence. Introduction: According to the Wound Ostomy Continence Nursing Society's Continence Committee, the incidence of fecal incontinence (FI) ranges from 18% to 37% in an acute care setting. A stool management system has been designed to manage FI in bedridden patients and has proven to be efficacious in wound management and prevention and infection control, to provide safer patient outcomes, and to enhance ease of nursing. Objective: This study aims to evaluate the safety and efficacy of an intrarectal device intended to manage fecal incontinence in hospitalized bedridden patients through nonclinical and clinical testing. Materials And Methods: An uncontrolled pilot evaluation in 20 patients was performed as part of a value-based purchasing evaluation at a tertiary hospital in Tucson, Arizona, to assess safety and efficacy in infection control and wound care.
The company-provided engineering bench-top studies of insertion and withdrawal forces of the device versus existing intrarectal balloon catheters also were evaluated. Results: The device has broader patient eligibility and potentially allows 3 times more FI patients to be managed safely. It has lower intrarectal pressures compared with indwelling balloon catheters. Conclusions: This study, along with pilot clinical findings, suggests that this technology minimizes the pressure exerted on the rectal wall. There were significantly fewer forces against the anorectal mucosa compared with the cuff-based catheter during insertion, withdrawal, and accidental expulsion. Answer: Special investigations are indeed of value in the management of patients with fecal incontinence. These investigations can significantly alter the diagnosis or the management plan in a notable percentage of cases. For instance, the addition of special investigations altered the diagnosis in 19% of cases and the management plan in 16% of cases in one study, demonstrating their utility in separating neuropathy from rectal wall disorders and in identifying internal sphincter defects and neuropathy that were not apparent from history and examination alone (PUBMED:9269804). Another study found that anorectal assessment provided important diagnostic and prognostic information and directly altered management in patients with benign anorectal disorders (PUBMED:10773727). However, the predictive value of these investigations for outcomes following conservative treatment for anal incontinence is limited. One study showed that only the Bowel continence score was an independent predictor of patient's success with treatment, suggesting that pelvic floor investigations should be reserved for patients who fail noninvasive management and might require surgical intervention (PUBMED:37010063). Similarly, another study found that the predictive value of additional tests in predicting outcomes following pelvic floor physiotherapy was limited, and subgroups of patients who may benefit from this therapy could not be identified based on diagnostic test information (PUBMED:18590063). In summary, while special investigations can provide valuable diagnostic and management insights for fecal incontinence, their role in predicting treatment outcomes, particularly for conservative treatments, may be limited. These investigations should be considered as part of a comprehensive assessment, especially when noninvasive management fails or surgical intervention is being considered.
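The answer above hinges on how much of the treatment outcome the baseline tests can explain (e.g., the limited R² of 0.23 reported in PUBMED:18590063). A hedged sketch of that kind of model: ordinary least squares predicting improvement in Vaizey score from synthetic test results; the variables and coefficients are invented for illustration, not taken from the study.

```python
import numpy as np

# Synthetic stand-ins for the predictors discussed above (scarring, internal
# sphincter atrophy, maximal squeeze pressure) and a Vaizey-score improvement.
rng = np.random.default_rng(0)
n = 239
scarring = rng.integers(0, 2, n)        # perineal/perianal scarring (0/1)
atrophy  = rng.integers(0, 2, n)        # internal sphincter atrophy on MRI (0/1)
squeeze  = rng.normal(60, 20, n)        # max squeeze pressure (arbitrary units)
# Outcome: improvement in Vaizey score (positive = better); noisy by design
dy = 2.0 - 1.5 * scarring - 1.2 * atrophy + 0.03 * squeeze + rng.normal(0, 3, n)

# OLS fit and the R^2 that quantifies "predictive value of the tests"
X = np.column_stack([np.ones(n), scarring, atrophy, squeeze])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
resid = dy - X @ beta
r2 = 1 - resid.var() / dy.var()
print("coefficients:", np.round(beta, 3), " R^2 =", round(r2, 2))
```

With heavy noise relative to signal, R² stays low, which is exactly the situation the abstract describes: the tests carry some association but little predictive power.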
Instruction: Association between statin use and lipid status in quality improvement initiatives: statin use, a potential surrogate? Abstracts: abstract_id: PUBMED:23540819 Association between statin use and lipid status in quality improvement initiatives: statin use, a potential surrogate? Objective: To test the association between statin use and low-density-lipoprotein (LDL)-cholesterol control in outpatient community practices undergoing quality improvement efforts in diabetes care. Design: A retrospective observational study of primary care practices that underwent efforts at improving the quality of diabetes care. Each practice provided an electronic registry-based monthly report of the percentage of patients with LDL <130 mg/dl (3.4 mmol/l), LDL <100 mg/dl (2.6 mmol/l) and statin use. Setting: Primary care practices in Pennsylvania focused on improving diabetes care by implementing the Chronic Care Model in urban, suburban and rural regions. Participants: Consisted of 109 primary care practices, academic practices and federal health centres. Practices typically saw patients from a mix of government-funded and commercial health plan carriers. Results: There was a positive linear association between documented statin use and the percentage of patients with LDL <130 mg/dl (3.4 mmol/l) and LDL <100 mg/dl (2.6 mmol/l = goal for patients with diabetes). The correlation between statin use and LDL <130 was 0.50 (95% CI 0.41-0.64), and between statin use and the percentage of patients with LDL <100 was 0.47 (95% CI 0.29-0.58). Practices with 5% larger statin use had an expected 1.9% larger percentage of patients with LDL <130 (95% CI 1.4-2.9%) and an expected 1.7% larger percentage of patients with LDL <100 (95% CI 0.9-2.3%). Conclusion: An association exists between statin use and LDL control in the real world of primary care practices undergoing quality improvement. Additional studies are necessary to ultimately test the validity of statin use as a process measure and/or surrogate for LDL-cholesterol control. abstract_id: PUBMED:38273643 Statin use in cirrhosis and its association with incidence of portal vein thrombosis. Background And Aim: Statin use has shown a reduction in hepatic decompensation and portal hypertension. Its association with portal vein thrombosis (PVT) incidence is unknown. We aim to compare the incidence of PVT in patients with and without statin use. Methods: We excluded patients with a history of hepatocellular cancer, liver transplants, Budd-Chiari syndrome, and intra-abdominal malignancies. Patients with cirrhosis were followed from their first hepatologist clinical encounter (January 1, 2016, to January 31, 2021) for 180 days to determine PVT incidence. We tested the association of statin use with PVT using 1:1 propensity score (PS) matching and Cox proportional hazard regression. Results: We analyzed 2785 patients with cirrhosis (mean age: 61.0 ± 12.3 years, 44.3% female, 63.8% White, mean MELD-Na score: 11.7 ± 6.1, and statin use: 23.1%). A total of 89 patients developed PVT during the follow-up, which was lower in patients with statin use as compared to no statin use (1.3% vs 3.8%, P = 0.001, unadjusted HR: 0.28, 95% CI: 0.13-0.62, P = 0.001). After matching for demographics, comorbidities, and hepatic decompensation events, patients with statin use had a lower risk of developing PVT in 180-day follow-up as compared to those without statin use (HR: 0.24, 95% CI: 0.10-0.55, P = 0.001).
Subgroup analysis showed that statin use was associated with lower PVT incidence in non-NASH (HR: 0.20, 95% CI: 0.07-0.54, P = 0.002) and decompensated cirrhosis (HR: 0.12, 95% CI: 0.03-0.53, P = 0.005) than no statin use. Conclusion: PVT incidence was lower in decompensated cirrhosis patients with statin use than in those with no statin use. However, this finding needs to be further tested in randomized controlled trials. abstract_id: PUBMED:34124094 Association of Preadmission Statin Use and Mortality in Critically Ill Patients: A Meta-Analysis of Cohort Studies. Background: A large number of studies have been conducted to determine whether there is an association between preadmission statin use and improvement in outcomes following critical illness, but the conclusions are quite inconsistent. Therefore, this meta-analysis aims to include the present relevant PSM studies to examine the association of preadmission use of statins with the mortality of critically ill patients. Methods: The PubMed, Web of Science, Embase electronic databases, and printed resources were searched for English articles published before March 6, 2020 on the association between preadmission statin use and mortality in critically ill patients. The included articles were analyzed in RevMan 5.3. The Newcastle-Ottawa Scale (NOS) was used to conduct quality evaluation, and random/fixed effects modeling was used to calculate the pooled ORs and 95% CIs. We also conducted subgroup analysis by outcome indicators (30-, 90-day, hospital mortality). Results: All six PSM observational studies were assessed as having a low risk of bias according to the NOS. For the primary outcome (overall mortality), the pooled OR (preadmission statin use vs. no use) across the six included studies was 0.86 (95% CI, 0.76-0.97; P = 0.02). For the secondary outcome (use of mechanical ventilation), the pooled OR was 0.94 (95% CI, 0.91-0.97; P = 0.0005). The corresponding pooled ORs were 0.67 (95% CI, 0.43-1.05; P = 0.08), 0.91 (95% CI, 0.83-1.01; P = 0.07), and 0.86 (95% CI, 0.83-0.89; P < 0.00001) for 30-, 90-day, and hospital mortality, respectively. Conclusions: Preadmission statin use is associated with beneficial outcomes in critically ill patients, indicating a lower short-term mortality, less use of mechanical ventilation, and an improvement in hospital survival. Further high-quality original studies or more scientific methods are needed to draw a definitive conclusion. abstract_id: PUBMED:37243540 Association of statin use and lipid levels with cerebral microbleeds and intracranial hemorrhage in patients with atrial fibrillation: A prospective cohort study. Background: An increased risk of intracranial hemorrhage (ICH) associated with statins has been reported, but data on the relationship between statin use and cerebral microbleeds (CMBs) in patients with atrial fibrillation (AF), a population at high bleeding and cardiovascular risk, are lacking. Aims: To explore the association between statin use and blood lipid levels with the prevalence and progression of CMBs in patients with AF with a particular focus on anticoagulated patients. Methods: Data of Swiss-AF, a prospective cohort of patients with established AF, were analyzed. Statin use was assessed during baseline and throughout follow-up. Lipid values were measured at baseline. CMBs were assessed using magnetic resonance imaging (MRI) at baseline and at 2 years follow-up. Imaging data were centrally assessed by blinded investigators.
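The pooled estimates quoted above for PUBMED:34124094 (e.g., overall mortality OR 0.86, 95% CI 0.76-0.97) are typically produced by inverse-variance weighting of log odds ratios. A minimal fixed-effect sketch of that machinery; the six study ORs/CIs below are placeholders, not the values from the included studies:

```python
import numpy as np

# Placeholder per-study (OR, lower CI, upper CI) triples for six cohorts
or_ci = [(0.80, 0.60, 1.07), (0.91, 0.74, 1.12), (0.85, 0.65, 1.11),
         (0.95, 0.70, 1.29), (0.78, 0.58, 1.05), (0.88, 0.71, 1.09)]

log_or = np.log([x[0] for x in or_ci])
# SE recovered from the 95% CI width on the log scale: (ln U - ln L) / (2 * 1.96)
se = np.array([(np.log(u) - np.log(l)) / (2 * 1.96) for _, l, u in or_ci])
w = 1 / se**2                          # inverse-variance weights

pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled OR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```

A random-effects model (as the abstract also mentions) would additionally estimate between-study variance and widen the interval; the fixed-effect version above is the simplest complete instance.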
Associations of statin use and low-density lipoprotein (LDL) levels with CMB prevalence at baseline or CMB progression (at least one additional or new CMB on follow-up MRI at 2 years compared with baseline) were assessed using logistic regression models; the association with ICH was assessed using flexible parametric survival models. Models were adjusted for hypertension, smoking, body mass index, diabetes, stroke/transient ischemic attack, coronary heart disease, antiplatelet use, anticoagulant use, and education. Results: Of the 1693 patients with CMB data at baseline MRI (mean ± SD age 72.5 ± 8.4 years, 27.6% women, 90.1% on oral anticoagulants), 802 patients (47.4%) were statin users. The multivariable adjusted odds ratio (adjOR) for CMB prevalence at baseline for statin users was 1.10 (95% CI = 0.83-1.45). AdjOR for 1 unit increase in LDL levels was 0.95 (95% CI = 0.82-1.10). At 2 years, 1188 patients had follow-up MRI. CMB progression was observed in 44 (8.0%) statin users and 47 (7.4%) non-statin users. Of these patients, 64 (70.3%) developed a single new CMB, 14 (15.4%) developed 2 CMBs, and 13 developed more than 3 CMBs. The multivariable adjOR for statin users was 1.09 (95% CI = 0.66-1.80). There was no association between LDL levels and CMB progression (adjOR 1.02, 95% CI = 0.79-1.32). At follow-up, 14 (1.2%) statin users had ICH versus 16 (1.3%) non-users. The age and sex adjusted hazard ratio (adjHR) was 0.75 (95% CI = 0.36-1.55). The results remained robust in sensitivity analyses excluding participants without anticoagulants. Conclusions: In this prospective cohort of patients with AF, a population at increased hemorrhagic risk due to anticoagulation, the use of statins was not associated with an increased risk of CMBs. abstract_id: PUBMED:32360737 Association between statin use and incidence or progression of osteoarthritis: meta-analysis of observational studies. Objective: The objective of the study was to determine whether statin use could reduce the risk of the incidence or progression of osteoarthritis (OA). Methods: The PubMed, Embase, and Cochrane databases were systematically searched for observational studies on the association between statin use and OA. ORs and 95% CIs were directly retrieved or calculated. The Newcastle-Ottawa quality assessment scale was used for study quality assessment. Subgroup analysis, sensitivity analysis, and publication bias were conducted using Stata software. Results: A total of 11 studies (679807 participants) were identified from the systematic literature search. No significant association between statin use and incidence (OR = 1.010; 95% CI: 0.968 to 1.055; P = 0.638) or progression (OR = 1.076; 95% CI: 0.824 to 1.405; P = 0.589) of OA was found in our meta-analysis. The meta-analysis according to the symptomatic or radiological OA also found no significant association between statin use and OA. The subgroup analysis showed that atorvastatin (OR = 0.953; 95% CI: 0.911 to 0.998; P = 0.041) and rosuvastatin (OR = 1.180; 95% CI: 1.122 to 1.241; P < 0.0001) had opposite effects on OA. The results of the analysis according to the joint site, interval, and statin dose were all not significant. Conclusions: Statin use may not be associated with a lower risk of incidence and progression of OA, regardless of joint site. The opposite effects of atorvastatin and rosuvastatin were detected in OA. abstract_id: PUBMED:35276584 Association between statin use and Parkinson's disease in Korean patients with hyperlipidemia.
Introduction: Findings on the association between statin therapy and Parkinson's disease (PD) occurrence have been inconsistent. This study aimed to identify the association between statin use and PD in participants with a history of hyperlipidemia or blood cholesterol >200 in a Korean population to exclude nonstatin users owing to normal lipid values. Methods: We conducted a nested case-control analysis using the Korean National Health Insurance Service-National Sample Cohort assessed between 2002 and 2015. We identified 3026 PD cases. A total of 12,104 controls were then individually matched by age, sex, income, and region of residence at a ratio of 1:4. Potential confounders comprised basic demographic factors, lifestyle factors, various medical conditions and comorbidities. A conditional/unconditional logistic regression method was applied. Results: Compared with statin use for <6 months, adjusted odds ratios (aORs) with 95% confidence intervals (CIs) for 6-12 months of statin use and ≥12 months of statin use were 1.03 (0.92-1.15) and 1.61 (1.35-1.93) after adjustment for confounders, respectively (P = 0.664 and P < 0.001). In analyses according to statin solubility, only the association between lipophilic statin use for ≥12 months and PD maintained statistical significance, with an aOR of 1.64 (95% CI = 1.34-2.01, P < 0.001). These relations were consistent in subgroup analyses by covariates. Conclusions: Statin use for more than 12 months was associated with a higher probability of PD in the Korean population with hyperlipidemia. This probability was significant for lipophilic statins but not hydrophilic statins. abstract_id: PUBMED:30284172 Quality Improvement and Personalization for Statins: the QUIPS Quality Improvement Randomized Trial of Veterans' Primary Care Statin Use. Background: Implementation of new practice guidelines for statin use was very poor. Objective: To test a multi-component quality improvement intervention to encourage use of new guidelines for statin use. Design: Cluster-randomized, usual-care controlled trial. Participants: The study population was primary care visits for patients who were recommended statins by the 2013 guidelines, but were not receiving them. We excluded patients who were over 75 years old, or had an ICD9 or ICD10 code for end-stage renal disease, muscle pain, pregnancy, or in vitro fertilization in the 2 years prior to the study visit. Interventions: A novel quality improvement intervention consisting of a personalized decision support tool, an educational program, a performance measure, and an audit and feedback system. Randomization was at the level of the primary care team. Main Measures: Our primary outcome was prescription of a medium- or high-strength statin. We studied how receiving the intervention changed care during the quality improvement intervention compared to before it and if that change continued after the intervention. Key Results: Among 3787 visits to 43 primary care providers, being in the intervention arm tripled the odds of patients being prescribed an appropriate statin (OR 3.0, 95% CI 1.8-4.9), though the effect resolved after the personalized decision support ended (OR 1.7, 95% CI 0.99-2.77). Conclusions: A simple, personalized quality improvement intervention is promising for enabling the adoption of new guidelines. ClinicalTrials.gov Identifier: NCT02820870. abstract_id: PUBMED:32879704 Impact of cardiologist intervention on guideline-directed use of statin therapy.
Background: Statins have an important and well-established role in the prevention of atherosclerotic cardiovascular disease (ASCVD). However, several studies have reported widespread underuse of statins in various practice settings and populations. Review of relevant literature reveals opportunities for improvement in the implementation of guideline-directed statin therapy (GDST). Aim: To examine the impact of cardiologist intervention on the use of GDST in the ambulatory setting. Methods: Patients with at least one encounter at the adult Internal Medicine Clinic (IMC) and/or Cardiology Clinic (CC), who had an available serum cholesterol test performed, were evaluated. The 2 comparison groups were defined as: (1) Patients only seen by IMC; and (2) Patients seen by both IMC and CC. Patients were excluded if variables needed for calculation of ASCVD risk scores were lacking, and if demographic information lacked guideline-directed treatment recommendations. Data were analyzed using Student's t-tests or χ², as appropriate. Analysis of Variance was used to compare rates of adherence to GDST. Results: A total of 268 patients met the inclusion criteria for this study; 211 in the IMC group and 57 in the IMC-CC group. Overall, 56% of patients were female, mean age 56 years (± 10.65, SD), 22% Black or African American, 56% Hispanic/Latino, 14% had clinical ASCVD, 13% current smokers, 66% diabetic and 63% hypertensive. Statin use was observed in 55% (n = 147/268) of the entire patient cohort. In the IMC-CC group, 73.6% (n = 42/57) of patients were prescribed statin therapy compared to 50.7% (n = 107/211) of patients in the IMC group (P = 0.002). In terms of appropriate statin use based on guidelines, there was no statistical difference between groups [IMC-CC group 61.4% (n = 35/57) vs IMC group, 55.5% (n = 117/211), P = 0.421]. Patients in the IMC-CC group were older, had more cardiac risk factors and had higher proportions of non-white patients compared to the IMC group (P < 0.02, all). Conclusion: Although overall use of GDST was suboptimal, there was no statistical difference in appropriate statin use based on guidelines between groups managed by general internists alone or co-managed with a cardiologist. These findings highlight the need to design and implement strategies to improve adherence rates to GDST across all specialties. abstract_id: PUBMED:28694929 Statin use and cognitive function in middle-aged adults with type 1 diabetes. Aim: To test associations between statin use and cognitive impairment in adults with childhood-onset type 1 diabetes (T1D). Methods: In 2010-13, n = 108 middle-aged participants from the ongoing observational Pittsburgh Epidemiology of Diabetes Complications Study underwent neurocognitive assessment (mean age and T1D duration of 49 and 41 years, respectively). All were diagnosed with childhood-onset (i.e., prior to age 18) T1D between 1950 and 1980 and were seen within one year of diagnosis at Children's Hospital of Pittsburgh. Self-reported statin use (yes/no and if yes, name of statin) was collected biennially from parent study baseline (1986-1988) to time of neurocognitive testing. Logistic regression models tested associations between statin use groups and cognitive impairment (defined as having two or more cognitive test scores 1.5SD or worse than published norms) while linear regression models tested associations between statin use groups and cognitive domain z-scores (domains: Verbal IQ, memory, executive function, psychomotor speed, and visuo-construction).
All models controlled for education and age. To address confounding by indication, models were repeated using a propensity score for statin use. Results: Of the 108 participants, 51 reported never using statins. Median duration of statin use among the 57 ever users was 6 years. These 57 ever statin users were split to create two groups (≤ or > median years of statin use): 1-6 years (n = 25), and 7-12 years (n = 32). Compared with never users, using statins 1-6 years tripled the odds of cognitive impairment (OR = 3.16; 95% CI: 0.93-10.72; P = 0.06) and using statins 7-12 years almost quintupled the odds of cognitive impairment (OR = 4.84; 95% CI: 1.63-14.44; P = 0.005). Compared with never users, using statins 1-6 or 7-12 years was related to worse performance in the memory domain (β = -0.52; P = 0.003, and -0.39; P = 0.014, respectively). Adjusting for coronary artery disease, low density lipoprotein cholesterol, and Apo E4 status did not substantially alter results, and none of these covariates were significantly related to cognitive outcomes (all P > 0.05). Propensity score analyses support that associations between poor cognitive outcomes and statin use were not due merely to confounding by indication. Conclusion: Statin use was associated with cognitive impairment, particularly affecting memory, in these middle-aged adults with childhood-onset T1D, who, at this age, should not yet manifest age-related memory deficits. abstract_id: PUBMED:31328681 Inpatient statin use and stroke recurrence in patients with or without diabetes mellitus.
Answer: The retrospective observational study by PUBMED:23540819 found a positive linear association between documented statin use and the percentage of patients with controlled low-density lipoprotein (LDL) cholesterol levels in outpatient community practices undergoing quality improvement efforts in diabetes care. Specifically, practices with a 5% larger statin use had an expected 1.9% larger percentage of patients with LDL <130 mg/dl and an expected 1.7% larger percentage of patients with LDL <100 mg/dl. This suggests that statin use could potentially serve as a surrogate for LDL-cholesterol control in the context of primary care practices focused on improving diabetes care. However, the study concludes that additional research is necessary to validate statin use as a process measure or surrogate for LDL-cholesterol control.
Instruction: Can very high level of D-dimer exclusively predict the presence of thromboembolic diseases? Abstracts: abstract_id: PUBMED:21463843 Can very high level of D-dimer exclusively predict the presence of thromboembolic diseases? Background: D-dimer quantitative test is mainly used to rule out the presence of thromboembolic diseases (TEDs). Whether very high D-dimer (100 times above the cutoff point) can exclusively indicate the presence of TED should be known. Methods: D-dimer was detected by a quantitative immunoturbidimetric assay. The normal value is 0.2-0.7 mg/L fibrinogen equivalent units (FEUs). During the year of 2009, 1,053 D-dimer tests were performed. We analyzed the results of these patients to find out the causes of very high D-dimer. Results: The mean value of D-dimer in the 1,053 tests was 8.56 mg/L FEU, ranging from &lt;0.2 mg/L to 563.2 mg/L FEU. Of them, 28 samples from 21 patients had very high D-dimer value: &gt;50 mg/L FEU. Of the 21 patients, 9 (43%) had TED, 1 had suspected TED, but not proved by computed tomographic (CT) angiogram, 3 had massive gastrointestinal or other site bleeding, 3 patients had cardiac arrest with samples taken immediately after recovery from cardiopulmonary resuscitation (CPR), 2 had sepsis with disseminated intravascular coagulation (DIC), 1 had postpartum hemolysis, elevated liver enzymes, low platelets (HELLP) syndrome with acute pulmonary edema and renal failure, 1 had multiple traumatic injury, and 1 received thrombolytic therapy. Conclusion: Although TED was the most frequently seen disorder in patients with very high D-dimer value, very high D-dimer was not necessary exclusively the marker of TED. Other disorders such as massive bleeding, status post CPR, sepsis with DIC, multiple traumatic injuries, hyperfibrinolysis and HELLP syndrome can also have very high D-dimer. abstract_id: PUBMED:32913864 Clinical laboratory investigation of a patient with an extremely high D-dimer level: A case report. Background: D-dimer, a soluble degradation product of cross-linked fibrin, is commonly used as an important marker for the diagnosis of disseminated intravascular coagulation and differential diagnosis of thrombosis. Herein, we present a geriatric case with an unusually elevated D-dimer level. Case Summary: An 82-year-old woman, admitted to the ward with a diagnosis of chronic heart failure, was noted to have a remarkably elevated D-dimer level, beyond the qualified range (&gt; 100 mg/L), utilizing the Innovating D-dimer for Sysmex CS-5100 System™. However, no evidence, including clinical symptoms, radiographic evidence of thromboembolic disease, and parallel fibrinogen degradation product values, suggested that this patient was at high risk of thrombopenia. To confirm the discrepancy, a series of approaches including sample dilution, re-analysis via alternative methods, and sample treatment with blockage of specific heterophilic antibodies were performed. A remarkable disappearance of the elevated D-dimer values was observed in the samples after they were subjected to these approaches (4.49, 9.42, 9.06, and 12.58 mg/L, respectively). This confirmed the presence of heterophilic antibodies in this case. In addition, a reduction in cardiac output due to the presence of cardiac failure could also be responsible for the existence of a hypercoagulable state in this case. 
Conclusion: The presence of heterophilic antibodies should be considered when an elevated D-dimer value is not in conformity with the clinical evidence, and viral infection should be considered as a possible cause when heterophilic antibody interference exists. abstract_id: PUBMED:29226714 Analysis of D-dimer laboratory findings and clinical evaluation D-dimer is a product of the enzymatic degradation of the fibrinogen-fibrin molecule, and its existence is demonstrable in circulation. The test-based limits may be considered as normal values. It was first thought to be a product of coagulation, then a product of lysis. High-concentration D-dimer in blood detected in thromboembolic diseases is considered to be of diagnostic value. In cases where thromboembolism was ruled out despite elevated titres but heparin (LMWH) or CLOPIDOGREL was given as a cautionary measure, we found that D-dimer values remained elevated. This finding means that in vivo coagulation is not a precondition to D-dimer formation. Analysis of such cases uncovers liver or kidney disease in the background, but old age may also be a factor. Often elevated ferritin levels were observed 'in parallel' with elevated D-dimer values. These findings lead us to presume an enzymatic degradation process of 'elderly' protein molecules, which is universally applicable. Orv Hetil. 2017; 158(50): 1971-1976. abstract_id: PUBMED:37622508 EVALUATION OF D-DIMER LEVEL AS A BIOMARKER OF DISEASE SEVERITY AND MORTALITY IN PATIENTS WITH COVID-19. Objective: The aim: To examine risk factors and evaluate the use of D-dimer as a biomarker of disease severity and mortality in patients with COVID-19. Patients And Methods: Data from a large NYU Langone Health system were analyzed to examine the prevalence of elevated D-dimer levels at first detection and the trend. A retrospective cohort study of 2,377 patients (NYU Langone Health) with severe COVID-19. We also conducted a retrospective study based on the mortality database of 247 COVID-19 patients at the Chernivtsi Regional Clinical Hospital. Results: Patients with elevated baseline D-dimer were more likely to have critical illness than patients with normal D-dimer (43.9% vs. 18.5%). The frequency of adverse events increased with increasing D-dimer levels. Individuals with D-dimer >2000 ng/mL had the highest risk of critical illness (66.0%). Conclusion: Thus, the level of D-dimer can be considered an important prognostic factor in COVID-19, as its level is elevated in the vast majority of patients with COVID-19 and correlates with a severe course and high mortality. abstract_id: PUBMED:29393288 Some features controlling the blood D-dimer level after resection of malignant brain glioma A high blood D-dimer level is often diagnosed in patients with malignant brain glioma (MBG), with 24% of these patients being detected with deep vein thrombosis of the leg and/or pulmonary embolism (PE). The cause of an elevated blood D-dimer level in most other cases remains unclear. The purpose of this study is to identify the features associated with an increased blood D-dimer level in patients after MBG resection, which may be used to improve the efficacy of adjuvant radiation therapy (ART). Results: The study included 50 patients. Three to four weeks after resection of malignant brain glioma (MBG), the blood D-dimer level was determined in patients immediately before the onset of ART.
An increase in the blood D-dimer level by more than 0.25 μg/mL was detected in 78% of patients. More often, a high D-dimer level was detected in patients aged 60 years or more. In the same age group, the increase in the D-dimer level was significantly larger (p<0.05) than in younger patients (1.2 and 0.6 μg/mL, respectively). The degree of brain tumor malignancy did not affect the rate and value of an increase in the blood D-dimer level. Conclusion: Our findings indicate that an increased blood D-dimer level in patients without symptoms of venous thrombosis after craniotomy for MBG is more pronounced in patients aged 60 years and over. The degree of glioma malignancy has no significant effect on this indicator. abstract_id: PUBMED:36098192 D-Dimer Level Among COVID-19 Patients as Biological Mediator for Hyper Coagulation State. Background and Objective: Viremia due to SARS-CoV-2 leads to a variety of biochemical changes in the human body, which play a crucial role in the activation of the coagulation cascade, causing thrombotic complications and coagulopathies. The study aimed to ascertain the D-dimer level as a biological mediator in COVID-19 patients in Khartoum state and compare the results to the control group. Materials and Methods: A cross-sectional study was conducted during the period of August to December, 2021, including 50 healthy controls and 50 COVID-19 patients; blood samples were collected from the study groups for measurement of D-dimer level using an I Chroma device. Statistical analysis was conducted using SPSS version 21. Results: This study revealed a significantly increased D-dimer level among COVID-19 patients compared with the control group (2000-10000 vs. up to 500 ng/mL, respectively). Conclusion: Viremia induced by COVID-19 infection can cause a high D-dimer level, which can lead to thrombotic events or a bleeding tendency. abstract_id: PUBMED:31550800 Characteristics and clinical significance of serum D-dimer level before and after delivery in women with advanced maternal age Objective: To explore the characteristics of serum D-dimer level before and after delivery in women with advanced maternal age, and to assess the value of these characteristics for the diagnosis of venous thromboembolic disease (VTE). Methods: The subjects were 785 puerperae with advanced maternal age (AMA) who delivered in Women's Hospital, Zhejiang University School of Medicine from 1 Jan 2017 to 31 Dec 2017, and 327 puerperae with appropriate maternal age as controls were randomly selected from puerperae who gave birth in the same period. Their serum D-dimer levels before and after delivery were evaluated, and AMAs with high-level postnatal serum D-dimer were specially observed for the prognosis. Besides, puerperae complicated with VTE were retrieved from the medical database during 2014-2018, and their clinical characteristics and dynamic variation of serum D-dimer levels were analyzed.
Result: No significant difference in D-dimer levels, either before or after delivery, was observed between AMAs and controls (antenatal: 1.64 (1.19, 2.29) mg/L vs 1.53 (1.04, 2.23) mg/L; postnatal: 2.70 (1.71, 2.97) mg/L vs 2.63 (1.17, 4.13) mg/L; P > 0.05 for both). None of the AMAs with high-level serum D-dimer after delivery were complicated with VTE, and most of their serum D-dimer levels decreased sharply within four postnatal days (the average decrease was 9.2 (7.69, 12.74) mg/L, and in 96.2% of these puerperae the decrease was more than 50%). Eight puerperae complicated with VTE were found in the database from 2014 to 2018, of whom five were AMAs. All eight puerperae received a B-ultrasound examination because of lower-limb discomfort or abnormal variation in serum D-dimer levels (a slow decrease or an increasing trend), which established the diagnosis of VTE; the diagnosis or symptoms all emerged on the 3rd to 5th day after caesarean section. Conclusions: Advanced maternal age has little influence on serum D-dimer levels before or after delivery among pregnant women. It is not a single measurement of serum D-dimer, but intensive monitoring of clinical symptoms together with the dynamic change in serum D-dimer levels, that helps the early diagnosis of VTE. abstract_id: PUBMED:34501339 Association between Early Absolute Neutrophil Count and Level of D-Dimer among Patients with COVID-19 Infection in Central Taiwan. Thromboembolism is a critical event in patients with coronavirus disease (COVID)-19 infection and highly associated with neutrophil extracellular traps. D-dimer has been found to be an essential thromboembolism-associated biomarker; however, the association between absolute neutrophil count (ANC) and level of D-dimer in patients with COVID-19 infection remains unclear. In this study, we enrolled consecutive patients with COVID-19 admitted to Taichung Veterans General Hospital (TCVGH), a referral center in central Taiwan with 20 airborne infection isolation rooms. Spearman correlation was used to determine the association between ANC and level of D-dimer in distinct time periods. A total of 28 consecutive patients with COVID-19 infection were enrolled, and 32.1% (9/28) of them required mechanical ventilation. Patients requiring mechanical ventilation had a higher ANC (8225 vs. 3427/µL, p < 0.01) and levels of D-dimer (6.0 vs. 0.6 mg/L, p < 0.01) compared with those without mechanical ventilation. Notably, we identified five patients with image-proven thromboembolic events during the hospital course, with the numbers of patients with pulmonary embolism, venous thrombosis and acute ischemic stroke being 2, 1, and 2, respectively. We found that ANC within 4 days correlated with the level of D-dimer to a moderate level (r = 0.71, p < 0.05), and the association between ANC and D-dimer no longer existed after day 5. In conclusion, we found highly prevalent thromboembolic events among patients with severe COVID-19 infection in central Taiwan and identified the association between early ANC and D-dimer. More studies are warranted to elucidate the underlying mechanism. abstract_id: PUBMED:33110857 Comparison of the D-dimer concentration in pregnant women with or without pulmonary thromboembolism. Introduction And Objectives: Pulmonary thromboembolism is the third most common vascular disease after coronary heart disease and stroke and is approximately 10 times more common in pregnant women compared with the nonpregnant population at the same age.
The purpose of the current work is to evaluate the D-dimer plasma concentration in pregnant women with or without pulmonary thromboembolism. Methods: The present study was a cross-sectional study in which 100 pregnant women with suspected pulmonary embolism referred to Imam Khomeini Hospital in Ahwaz in 1398 were studied. After imaging confirmed or ruled out pulmonary embolism, a D-dimer sample was taken simultaneously; the D-dimer level in each trimester was then compared between cases with and without pulmonary embolism. The SPSS software version 22 was used for data analysis. Results: The results showed that 12 patients in the study population had pulmonary embolism and 88 patients did not have pulmonary embolism. In patients with pulmonary embolism confirmed by CT angiography, there was no significant relationship between increasing gestational age and mean D-dimer level (P = 0.41). But there was a significant relationship between gestational age and mean D-dimer level in the group with no pulmonary embolism (P = 0.0001). There was no significant relationship between maternal age and mean D-dimer level in patients with pulmonary embolism (P = 0.376) or without pulmonary embolism (P = 0.1). Also, there was no significant relationship with the number of pregnancies in the groups with and without pulmonary embolism (P = 0.456, P = 0.392). Conclusion: Concomitant use of D-dimer and Wells' criteria can help us to diagnose or rule out pulmonary thromboembolism and minimize the risk of pregnant women being exposed to X-rays; given the biological variability of D-dimer in every woman during a normal pregnancy, repeated D-dimer measurements in the evaluation of thromboembolism during pregnancy have no clinical application. abstract_id: PUBMED:15992948 D-dimer level influences thromboembolic events in patients with atrial fibrillation. Background: Elevated coagulative molecular markers could reflect the prothrombotic state in the cardiovascular system of patients with non-valvular atrial fibrillation (NVAF). A prospective, cooperative study was conducted to determine whether levels of coagulative markers alone or in combination with clinical risk factors could predict subsequent thromboembolic events in patients with NVAF. Methods: Coagulative markers of prothrombin fragment 1+2, D-dimer, platelet factor 4, and beta-thromboglobulin were determined at enrollment in the prospective study. Results: Of 509 patients with NVAF (mean age, 66.6 ± 10.3 years), 263 patients were treated with warfarin (mean international normalized ratio, 1.86), and 163 patients with antiplatelet drugs. During an average follow-up period of 2.0 years, 31 thromboembolic events occurred. Event-free survival was significantly better in patients with a D-dimer level < 150 ng/ml than in those with a D-dimer level ≥ 150 ng/ml. Other coagulative markers, however, did not predict thromboembolic events. Age (≥75 years), cardiomyopathies, and prior stroke or transient ischemic attack were independent clinical risk factors for thromboembolism. Thromboembolic risk in patients without the clinical risk factors was quite low (0.7%/year) when D-dimer was < 150 ng/ml, but not low (3.8%/year) when D-dimer was ≥ 150 ng/ml. It was >5%/year in patients with the risk factors regardless of D-dimer levels. This was also true when analyses were confined to patients treated with warfarin.
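The stratification just described (a 150 ng/ml D-dimer cutoff crossed with clinical risk factors) amounts to a small lookup. A sketch using the annual event rates quoted above; this is an illustration of the published stratification, not a validated clinical tool.

```python
# Annual thromboembolic risk strata from the NVAF abstract above (PUBMED:15992948).
# Risk factors per the abstract: age >= 75 years, cardiomyopathy, prior stroke/TIA.
def annual_te_risk(d_dimer_ng_ml: float, has_risk_factor: bool) -> str:
    if has_risk_factor:
        return ">5%/year regardless of D-dimer"
    return "0.7%/year" if d_dimer_ng_ml < 150 else "3.8%/year"

print(annual_te_risk(120, has_risk_factor=False))   # low D-dimer, no risk factor
print(annual_te_risk(200, has_risk_factor=False))   # high D-dimer, no risk factor
print(annual_te_risk(120, has_risk_factor=True))    # risk factor dominates
```

The point the abstract makes is visible in the structure: D-dimer refines risk only in the absence of clinical risk factors.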
Conclusions: D-dimer level in combination with clinical risk factors could effectively predict subsequent thromboembolic events in patients with NVAF even when treated with warfarin. Answer: No, a very high level of D-dimer cannot exclusively predict the presence of thromboembolic diseases (TEDs). While a study found that TED was the most frequently seen disorder in patients with very high D-dimer values, it also identified other conditions that can result in very high D-dimer levels. These include massive bleeding, status post cardiopulmonary resuscitation (CPR), sepsis with disseminated intravascular coagulation (DIC), multiple traumatic injuries, hyperfibrinolysis, and HELLP syndrome (PUBMED:21463843). Additionally, the presence of heterophilic antibodies should be considered when an elevated D-dimer value does not align with clinical evidence, as they can interfere with the test results (PUBMED:32913864). Elevated D-dimer levels have also been observed in patients with liver or kidney disease, old age, and elevated ferritin levels, suggesting that in vivo coagulation is not a precondition for D-dimer formation (PUBMED:29226714). Therefore, while D-dimer is a valuable marker for the diagnosis of thromboembolic conditions, it is not specific and can be elevated in a variety of other clinical scenarios.
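Several studies in this record report rank correlations, most directly PUBMED:34501339 (early ANC vs D-dimer, r = 0.71). A minimal re-creation of that analysis on synthetic values, assuming SciPy:

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic 28-patient cohort in the spirit of PUBMED:34501339; the coupling
# between ANC and D-dimer is invented so the example has a visible association.
rng = np.random.default_rng(2)
anc = rng.normal(6000, 2500, 28).clip(500)              # cells/uL
ddimer = (0.0008 * anc + rng.normal(0, 2, 28)).clip(0.2)  # mg/L

rho, p = spearmanr(anc, ddimer)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

Spearman's method is a natural choice here because D-dimer distributions are heavily skewed, and rank correlation is insensitive to that skew.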
Instruction: Ovarian cancer in elderly patients: a difference in treatment based on age? Abstracts: abstract_id: PUBMED:30640699 Treatment decision-making in elderly women with ovarian cancer: an age-based comparison. Objective: To investigate treatment choices and outcomes in women with ovarian cancer, comparing elderly (≥75 years) and younger patients (<75 years). Methods: A single-center retrospective analysis of patients diagnosed with ovarian cancer between 2010 and 2015. The initial treatment plan and course of treatment were extracted from medical files. Results: Of 128 included patients, 34% were aged ≥75 years. The initial treatment plan consisted of the combination of cytoreductive surgery and platinum-based doublet chemotherapy (ie, standard treatment) in only 10% of the elderly patients with an indication for this treatment. 5% of these patients completed this treatment without adaptations (compared with 85% and 48%, respectively, in younger patients). 38% of the elderly patients with an indication for cytoreductive surgery and chemotherapy received best supportive care only. Patient preference was an important reason to withhold standard treatment. Surgery- and chemotherapy-related complications and hospital admissions did not differ between groups. Median survival was lower in the elderly (p=0.002) and in patients receiving best supportive care (p<0.001). Conclusions: Elderly patients were less frequently treated in accordance with the treatment guideline. To select those older patients who may benefit from (adapted) treatment is challenging. Future studies should evaluate determinants associated with treatment completion to improve outcomes in this vulnerable population. abstract_id: PUBMED:28763363 Ovarian Cancer in Elderly Patients: Patterns of Care and Treatment Outcomes According to Age and Modified Frailty Index. Objective: The present study assessed the predictive value of age and Modified Frailty Index (mFI) on the management of primary epithelial ovarian cancer (EOC) patients aged 70 years or older (elderly). Methods: A retrospective multicenter study selected elderly EOC patients treated between 2006 and 2014. Treatments were analyzed according to the following age group categories: (1) 70 to 75 years versus (2) older than 75 years, and mFI of less than 4 (low frailty) versus greater than or equal to 4 (high frailty). Results: Seventy-eight patients were identified (40 in age group 1 and 38 in age group 2). The mFI was greater than or equal to 4 in 23 women. Median age of low frailty and high frailty was not significantly different (75.6 vs 75.3). Comorbidities were equally distributed according to age, whereas diabetes, hypertension, obesity, and chronic renal failure were more frequent in the high-frailty group. Performance status was different only according to mFI. Twenty percent of age group 1 versus 55.3% of age group 2 underwent no surgery or only an explorative surgical approach (P = 0.003), whereas surgical approaches were similar in the 2 frailty groups. The rate of postoperative complications was higher in high-frailty patients compared with low-frailty patients (23.5% vs 4.3%; P = 0.03). Chemotherapy was administered to all the patients, a monotherapy regimen to 50% of them. No differences in toxicity were registered, except for more hospital admissions in the high-frailty cohort. Median survival time was in favor of younger patients (98 versus 30 months) and less frail patients (56 vs 27 months).
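The median survival figures above (98 vs 30 months by age band, 56 vs 27 by frailty) are Kaplan-Meier estimates. A sketch of how such medians are computed, assuming the lifelines package and synthetic, fully observed survival times:

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Synthetic survival times (months) shaped loosely like the comparison above;
# real analyses would also carry censoring indicators for surviving patients.
rng = np.random.default_rng(3)
t_young = rng.exponential(98, 40)            # age group 70-75 years
t_old   = rng.exponential(30, 38)            # age group >75 years

kmf = KaplanMeierFitter()
kmf.fit(t_young, event_observed=np.ones_like(t_young), label="70-75 y")
print("median survival (younger):", round(kmf.median_survival_time_, 1))
kmf.fit(t_old, event_observed=np.ones_like(t_old), label=">75 y")
print("median survival (older):", round(kmf.median_survival_time_, 1))
```

With censoring (patients alive at last follow-up), the event_observed vector would contain zeros for those patients, which is exactly the censoring the Kaplan-Meier abstract below (PUBMED:22843035) describes.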
Conclusions: Elderly EOC patients can receive adequate treatment, but patients who are older than 75 years can be undertreated, if not adequately selected. The pretreatment assessment of frailty through mFI could be suggested in the surgical and medical management. abstract_id: PUBMED:27047797 The Application and Outcome of Standard of Care Treatment in Elderly Women with Ovarian Cancer: A Literature Review over the Last 10 Years. The rising number and increasing longevity of the elderly population calls for improvements and potentially a more personalized approach to the treatment of cancer in this group. Elderly patients frequently present with a number of comorbidities, complicating surgery and chemotherapy tolerability. In the case of ovarian cancer, elderly women present with more advanced disease, making the issue of providing adequate treatment without significant morbidity critical. Most studies support the application of standard of care treatment to elderly women with ovarian cancer, yet it seems to be offered less frequently in the elderly. The objective of this review is to examine the application and outcome of standard of care treatment in elderly women with ovarian cancer. The aim is to ultimately improve the approach to treatment in this group. abstract_id: PUBMED:22843035 Ovarian cancer in elderly patients: a difference in treatment based on age? Purpose: The aim of this study was to examine if treatment strategies differ by age in the elderly population with ovarian cancer in daily clinical practice. Methods: A retrospective analysis of elderly patients with ovarian cancer who were referred to our institution between January 2007 and August 2010 was done. A univariate analysis for overall survival was estimated according to the Kaplan-Meier method, censoring surviving patients at the time of last follow-up. Results: We evaluated 32 elderly patients: 17 "young-old" patients (65-74 years old), 14 "old-old" patients (75-84 years old) and 1 "oldest-old" patient (≥85 years old). At last follow-up, 20 patients (62.5%) were alive and 12 patients (37.5%) were deceased. Median follow-up time was 18.52 months. Median overall survival was 19.05 months. Median age was 73.50 years. In the subgroup of "young-old" patients, there were less "high malignant potential" (64.3 vs. 70.0%) and grade 3 ovarian cancers (84.6 vs. 90.0%), less advanced stages (III-IV: 64.7 vs. 86.7%), higher number of optimal surgical procedures (50.0 vs. 30.0%) and more frequent use of chemotherapy (82.4 vs. 66.7%). Single agent carboplatin was administered in 81.8 vs. 77.8% of "young-old" and "old-oldest old" patients, and average number of lines was 2 vs. 1. Other characteristics were similar in the two subgroups ("young-old" vs. "old-oldest old" patients). By the univariate analysis, there was no statistically significant difference in overall survival (p=0.393) between the two subgroups, with only a positive trend for young-old patients. Conclusions: In old- and oldest-old patients, the characteristics of disease are worse and optimal treatment strategy is less frequently applied. The management of patients by a multidisciplinary team is needed and it could better individualize and apply the optimal treatment approach. abstract_id: PUBMED:8482566 Treatment of advanced ovarian carcinoma in the elderly. This study retrospectively analyzes the treatment of advanced ovarian cancer (Stages III and IV) in elderly patients (≥ 65) compared to that in younger patients (< 65).
The purpose of this study was to identify possible treatment bias toward the elderly and to statistically analyze the nature of these differences. Seventy patients were evaluated, of which 29 were identified as elderly and 41 as young. All patients were treated with platinum-based chemotherapy. Chi-square, log rank, Kaplan-Meier, and Cox model analyses were performed for multiple variables including age, grade of tumor, adequacy of surgery, and dose intensity. The elderly significantly differed from the young in the following analyses: median length of hospitalization, 20 days vs 11 days (P < 0.001); optimum surgery, 79.3% vs 97.5% (P = 0.02); initial chemotherapeutic dose reduction, 15.4% vs 0% (P = 0.02); median survival compared to age, 19.2 months vs 36.7 months (P < 0.03). When survival analysis was performed comparing 17 elderly patients and 40 younger patients who had optimum surgery and optimum initial chemotherapy, the median survival remained essentially unchanged, 22.0 months vs 36.7 months. There were differences in treatment intensity between young and old; however, the indications generally were valid, and when analyzed by the Kaplan-Meier and Cox model, these differences became insignificant. It was concluded that even when elderly patients can undergo aggressive surgical and chemotherapeutic management, survival remained significantly decreased for the aged compared to younger patients. Physician bias was not a major factor accounting for the poorer survival observed in elderly patients. Age was the most significant variable related to survival and could not be accounted for by differences in adequacy of surgery or dose intensity. abstract_id: PUBMED:26411955 Management of Ovarian Cancer in Elderly. As a result of increasing life expectancy, the cancer incidence in the older population is on the rise. Ovarian cancer (OC) is predominantly a disease of elderly women. More than half of all OC occur in women older than 65 years. The incidence of the disease increases with advancing age, peaking during the 7th decade of life, and remains elevated until the age of 80 years. With the changing demographic scenario, the percentage of elderly patients is increasing and gynaecologic oncologists need to focus more on these patients and their specific needs. Due to their higher risk of morbidity and mortality compared to younger patients, elderly patients with advanced ovarian cancer are challenging to treat and are often treated less radically. Their outcome is impaired despite no consistent prognostic effect of age itself. To offer optimal radical management of the elderly women with ovarian cancer and to avoid suboptimal treatment, biological age and functional status need to be considered before individualized treatment plans are defined. Pretreatment assessment can be achieved by using different assessment tools. Patients can tolerate surgery and chemotherapy, as long as they are individually assessed for their medical, psychological, and functional capabilities before therapeutic intervention is initiated. Prospective trials involving elderly women with ovarian cancer are the need of the day to offer justified, evidence-based optimal treatment for those who will benefit from the treatment. abstract_id: PUBMED:26610814 Under-treatment of elderly patients with ovarian cancer: a population based study. Background: Ovarian cancer is the fourth most common cancer among women in France, and mainly affects the elderly.
The primary objective of this study was to compare treatment of ovarian cancer according to age. Methods: All patients with invasive cancer (n=1151) diagnosed between 1997 and 2011 in the Herault Department of southern France were included. Demographic data (age, area of residence), cancer characteristics (stage, histology, grade) and treatment modality (type, period and location of treatment) were analysed. Univariate and multivariate logistic regression was used to compare treatment by age. Results: Ovarian cancer was less frequently treated in elderly compared to younger patients, regardless of the type of treatment. This difference was more pronounced for chemotherapy, and was maximal for surgery followed by chemotherapy (odds ratio (OR) for surgery for patients aged >70 vs those aged <70 years = 0.47 [0.24-0.91]; OR for chemotherapy, age >70 vs <70 = 0.30 [0.16-0.55]; and OR for surgery plus chemotherapy, age >70 vs <70 = 0.14 [0.08-0.28]). This effect of age was independent of other variables, including stage and grade. The probability of receiving standard treatment, in accordance with recommendations, was reduced by 50% in elderly patients compared to their younger counterparts. Overall and net survival of elderly patients with standard treatment was similar to that of younger patients treated outside standard treatment. Conclusions: Elderly women with ovarian cancer were therapeutically disadvantaged compared to younger women. Further studies including comorbidities are necessary to refine these results and to improve therapeutic management of elderly patients with ovarian cancer. abstract_id: PUBMED:16000216 Treatment patterns by decade of life in elderly women (≥70 years of age) with ovarian cancer. Objective: Elderly patients are less likely to receive surgery and platinum-based combination chemotherapy than younger patients. We evaluated multi-institutional management of ovarian cancer in the elderly. Methods: Charts of women with ovarian, primary peritoneal or fallopian tube cancer from 1/1996-6/2004, age ≥70 years were reviewed. Age, stage, medical co-morbidities, surgery, chemotherapy, treatment modification, toxicity and survival were analyzed. Chi-square, logistic regression and survival analysis were used. Results: Of 131 patients, 90 were ages 70-79 (group 1 = G1) and 41 were >80 years of age (group 2 = G2). Surgery was performed in 80 patients in G1; 25 patients in G2 (P = 0.001). Among patients who underwent surgery, optimal debulking and post-operative complications did not differ between groups. Ninety-five percent of patients received platinum-based therapy and 83% received combination platinum/paclitaxel in G1, compared to 90% and 41%, respectively, in G2 (P < 0.001). Of those receiving platinum therapy, 36% in G1 and 41% in G2 required dose reductions or termination of therapy. Forty percent of G1 and 50% of G2 required a delay of therapy; the majority occurring in patients receiving combination therapy. Hematological toxicity increased with use of combination therapy, but not with advancing age or Charlson score. Successful debulking surgery significantly impacted survival, and when controlling for this factor, age was not a significant variable. Conclusion: The extreme elderly had a decreased likelihood of receiving surgery and combination chemotherapy despite equivalent co-morbidities. In this analysis, optimal surgical cytoreduction had the greatest impact on survival.
abstract_id: PUBMED:15820566 Treatment of cancer and hematological malignancy in elderly people (Part II) Purpose: Fifty percent of cancers arise in people older than 65 years. Most clinical trials of cancer treatment are limited to patients younger than 65 years. We review the literature describing the particularities of cancer treatment in elderly patients. Current Knowledge And Key Points: Therapeutic decisions should be based on an estimation of the patient's life expectancy, and risks and benefits should be weighed up accordingly. Geriatric oncology combines a geriatric evaluation of the patient with knowledge of clinical trials in elderly patients. Future Prospects And Projects: We present in this issue the principles of geriatric evaluation and the results of recent clinical trials in elderly cancer patients. abstract_id: PUBMED:32414065 Management and Survival of Elderly and Very Elderly Patients with Ovarian Cancer: An Age-Stratified Study of 1123 Women from the FRANCOGYN Group. Elderly women with ovarian cancer are often undertreated due to a perception of frailty. We aimed to evaluate the management of young, elderly and very elderly patients and its impact on survival in a retrospective multicenter study of women with ovarian cancer between 2007 and 2015. We included 979 women: 615 women (62.8%) <65 years, 225 (22.6%) 65-74 years, and 139 (14.2%) ≥75 years. Women in the 65-74 years age group were more likely to have serous ovarian cancer (p = 0.048). Patients >65 years more often had FIGO stage >IIa disease: 76% for <65 years, 84% for 65-74 years and 80% for ≥75 years (p = 0.033). Women ≥75 years had fewer standard procedures (40% (34/84) vs. 59% (104/177) for 65-74 years and 72% (384/530) for <65 years; p < 0.001). Only 9% (13/139) of women ≥75 years had an Aletti score >8 compared with 16% and 22% for the other groups (p < 0.001). More residual disease was found in the two older groups (30% each) than the younger group (20%) (p < 0.05). Women ≥75 years had fewer neoadjuvant/adjuvant cycles than the young and elderly women: 23% of those ≥75 years received <6 cycles vs. 10% (p = 0.003). Univariate analysis for 3-year overall survival showed that age >65 years, FIGO III (HR = 3.702, 95%CI: 2.30-5.95) and IV (HR = 6.318, 95%CI: 3.70-10.77) (p < 0.001), residual disease (HR = 3.226, 95%CI: 2.51-4.15; p < 0.001) and lymph node metastasis (HR = 2.81, 95%CI: 1.91-4.12; p < 0.001) were associated with lower OS. Women >65 years are more likely to have incomplete surgery and more residual disease despite more advanced ovarian cancer. These elements are prognostic factors for women's survival regardless of age. Specific trials in the elderly would produce evidence-based medicine and guidelines for ovarian cancer management in this population. Answer: Yes, there is a difference in treatment based on age for elderly patients with ovarian cancer. Studies have shown that elderly patients, particularly those older than 75 years, are less likely to receive standard treatment, which includes cytoreductive surgery and platinum-based doublet chemotherapy, compared to younger patients. This difference in treatment is more pronounced for chemotherapy and is maximal for surgery followed by chemotherapy (PUBMED:26411955, PUBMED:26610814). Elderly patients with ovarian cancer are often undertreated due to perceptions of frailty and the presence of comorbidities, which complicates surgery and chemotherapy tolerability (PUBMED:30640699, PUBMED:27047797).
The treatment choices and outcomes for elderly women with ovarian cancer indicate that they are less frequently treated in accordance with treatment guidelines (PUBMED:30640699). Additionally, elderly patients are more likely to receive best supportive care only, and patient preference is an important reason to withhold standard treatment (PUBMED:30640699). The use of the Modified Frailty Index (mFI) has been suggested to help predict the management of elderly patients and could be used in the surgical and medical management to better select patients who can tolerate the treatment (PUBMED:28763363). Despite the presence of equivalent comorbidities, very elderly patients are less likely to receive surgery and combination chemotherapy (PUBMED:16000216). Furthermore, elderly patients with ovarian cancer present with more advanced disease and have a lower median survival compared to younger patients (PUBMED:27047797, PUBMED:22843035). The management of ovarian cancer in elderly patients requires a more personalized approach, taking into consideration their biological age and functional status before defining individualized treatment plans (PUBMED:26411955). In summary, elderly patients with ovarian cancer are often treated differently than younger patients, with a tendency towards less aggressive treatment strategies and a lower likelihood of receiving standard care, which impacts their overall survival outcomes.
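Several abstracts above express the age effect as odds ratios with 95% confidence intervals, for example OR 0.47 [0.24-0.91] for surgery in patients aged >70 versus <70 (PUBMED:26610814). As a minimal sketch of how such figures are derived, the Python below computes an odds ratio and its Woolf (log-normal) confidence interval from a 2x2 table; the counts are invented for illustration and are not the study's data.

import math

# Hypothetical 2x2 table (illustrative counts only, not from the study):
#                received surgery   no surgery
# age > 70             a = 30          b = 70
# age <= 70            c = 80          d = 60
a, b, c, d = 30, 70, 80, 60

odds_ratio = (a * d) / (b * c)                 # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR), Woolf method
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")
# -> OR = 0.32 (95% CI 0.19-0.55): older age associated with lower odds of surgery

An interval entirely below 1, as in the cited study, indicates that the age effect is unlikely to be due to chance under the model's assumptions.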
Instruction: Can high-attenuation renal cysts be differentiated from renal cell carcinoma at unenhanced CT? Abstracts: abstract_id: PUBMED:17456870 Can high-attenuation renal cysts be differentiated from renal cell carcinoma at unenhanced CT? Purpose: To retrospectively determine if renal cell carcinoma can be differentiated from high-attenuation renal cysts at unenhanced computed tomography (CT) based on Hounsfield unit measurements and heterogeneity. Materials And Methods: The Human Investigation Committee at our institution approved this study with waiver of informed consent. This study was compliant with the HIPAA. Fifty-four pathologically proved renal cell carcinomas in 54 patients (36 men and 18 women; average age, 53 years; range, 23-90 years) and 56 high-attenuation renal cysts in 51 patients (30 men and 21 women; average age, 63 years; range, 28-86 years) were retrospectively evaluated at unenhanced CT. Two independent readers reviewed randomized unenhanced CT images and obtained Hounsfield unit readings of each mass. A subjective determination of lesion heterogeneity was also performed by using a four-point scale (1: homogeneous, 2: mildly heterogeneous, 3: moderately heterogeneous, 4: markedly heterogeneous). Statistical analysis was performed by using Bland-Altman regression tree, classification and regression tree, and Shapiro-Wilk normality test. Results: The average attenuation of cysts for reader 1 was 53.4 HU (range, 23-113 HU) and for reader 2 was 53.8 HU (range, 21-108 HU). The average attenuation of neoplasms for reader 1 was 34.7 HU (range, 21-60 HU) and for reader 2 was 38.4 HU (range, 22-60 HU). For cyst heterogeneity, a score of 1 was given in 55 of 56 (98%) cysts for reader 1 and in 53 of 56 (95%) cysts for reader 2. For neoplasm heterogeneity, a score of 1 was given in 35 of 54 (65%) neoplasms for reader 1 and in 36 of 54 (67%) for reader 2. Given the distribution of cyst and tumor attenuation values and lesion heterogeneity, a homogeneous mass measuring 70 HU or greater at unenhanced CT has a greater than 99.9% chance of representing a high-attenuation renal cyst. Conclusion: The findings from this study may help differentiate high-attenuation renal cysts from renal cell carcinomas at unenhanced CT and may suggest the next appropriate imaging study for definitive characterization. abstract_id: PUBMED:31386573 Utility of CT Texture Analysis in Differentiating Low-Attenuation Renal Cell Carcinoma From Cysts: A Bi-Institutional Retrospective Study. OBJECTIVE. The purpose of this study was to evaluate the utility of CT texture analysis (CTTA) in differentiating low-attenuation renal cell carcinoma (RCC) from renal cysts on unenhanced CT. MATERIALS AND METHODS. Ninety-four patients with low-attenuation RCC on unenhanced CT were compared with a cohort of 192 patients with benign renal cysts. CT characteristics (size and minimum, maximum, and mean attenuation) and CTTA features were recorded using an ROI approximately two-thirds the size of the mass. Masses were subjectively assessed by two expert genitourinary readers and two novice readers using a 5-point Likert scale (1 = definite cyst, 5 = definite renal cell carcinoma). Results of first-order CTTA and subjective evaluation were compared using ROC analysis. RESULTS. The group of 94 patients with low-attenuation RCC included 62 men and 32 women (mean age, 58.0 years). On unenhanced CT, the RCC were larger than 10 mm and of a median size of 50 mm with less than or equal to 20 HU (mean attenuation, 16 ± 4 HU). 
Of the RCC cohort, 83 were clear cell subtype. The cohort of 192 patients included 134 men and 58 women (mean age, 64.7 years) with benign renal cysts greater than 10 mm and a median size of 27 mm and less than or equal to 20 HU (mean attenuation, 9 ± 6 HU). The mean follow-up time was 6.2 years. Mean entropy in the low-attenuation RCC group (4.1 ± 0.7) was significantly higher than in the cyst group (2.8 ± 1.3, p < 0.0001). Entropy showed an ROC AUC of 0.89, with sensitivity of 84% and specificity of 80% at threshold 3.9. The AUC was better than subjective evaluation by novice readers (AUC, 0.77) and comparable to subjective evaluation by two expert readers (AUC, 0.90). A model combining the three best texture features (unfiltered mean gray-level attenuation, coarse entropy, and kurtosis) showed an improved AUC of 0.92. CONCLUSION. High entropy revealed with CTTA may be used to differentiate low-attenuation RCC from cysts at unenhanced CT; this technique performs as well as expert readers. abstract_id: PUBMED:28389785 Renal cell carcinoma attenuation values on unenhanced CT: importance of multiple, small region-of-interest measurements. Objective: Since it has been suggested that benign renal cysts can be diagnosed at unenhanced CT on the basis of homogeneity and attenuations of 20 HU or less, we determined the prevalence of renal cell carcinomas (RCCs) with these characteristics using two different methods of measuring attenuation. Materials And Methods: After IRB approval, two radiologists obtained unenhanced attenuation values of 104 RCCs (mean size 5.6 cm) using a single, large region of interest (ROI), two-thirds the size of the mass. They then determined whether the masses appeared heterogeneous. Of RCCs measuring 20 HU or less, those which appeared homogeneous were re-measured with multiple (6 or more), small (0.6 cm² or smaller) ROIs dispersed throughout the lesion. Masses with attenuations of 20 HU or less were compared to masses greater than 20 HU for any differences in demographic data. Results: Of 104 RCCs, 24 had HU values less than 20 using a large ROI. Of these, 21 appeared heterogeneous and 3 appeared homogeneous. Using multiple small ROIs, these three RCCs revealed maximum attenuation values above 20 HU (Range: 26-32 HU). A greater portion of RCCs measuring 20 HU or less using a large ROI were clear cell subtype. There were no other differences. Conclusions: Renal cell carcinoma can measure 20 HU or less at unenhanced CT when a single large ROI is used. While most appear heterogeneous, some may appear homogeneous, but will likely reveal attenuations greater than 20 HU when multiple, small ROIs are used. This knowledge may prevent some RCCs from being misdiagnosed as cysts on unenhanced CT.
This retrospective study included 110 patients (mean age, 64.3 ± 11.8 years; 46 women, 64 men) who underwent renal-mass protocol dlDECT between July 2018 and February 2022. TUE, VUE, and nephrographic phase image sets were reconstructed. Lesions were diagnosed as solid masses by histopathology or MRI. Lesions were diagnosed as cysts by composite criteria reflecting findings from MRI, ultrasound, and the TUE and nephrographic phase images of the dlDECT examinations. One radiologist measured lesions' attenuation on all dlDECT image sets. Lesion characterization was compared between use of VUE and TUE images, including when considering enhancement of 20 HU or greater to indicate presence of a solid mass. RESULTS. The analysis included 219 lesions (33 solid masses; 186 cysts [132 simple, 20 septate, 34 hyperattenuating]). TUE and VUE attenuation were significantly different for solid masses (33.4 ± 7.1 HU vs 35.4 ± 8.6 HU, p = .002), simple cysts (10.8 ± 5.6 HU vs 7.1 ± 8.1 HU, p < .001), and hyperattenuating cysts (56.3 ± 21.0 HU vs 47.6 ± 16.3 HU, p < .001), but not septate cysts (13.6 ± 8.1 HU vs 14.0 ± 6.8 HU, p = .79). Frequency of enhancement 20 HU or greater when using TUE and VUE images was 90.9% and 90.9% in solid masses, 0.0% and 9.1% in simple cysts, 15.0% and 10.0% in septate cysts, and 11.8% and 38.2% in hyperattenuating cysts. All solid lesions were concordant in terms of enhancement 20 HU or greater when using TUE and VUE images. Twelve simple cysts and nine hyperattenuating cysts showed enhancement of 20 HU or greater when using VUE but not TUE images. CONCLUSION. Use of VUE images reliably detected enhancement in solid masses. However, VUE images underestimated attenuation of simple and hyperattenuating cysts, leading to false-positive findings of enhancement by such lesions. CLINICAL IMPACT. The findings do not support replacement of TUE acquisitions with VUE images when characterizing renal lesions by dlDECT. abstract_id: PUBMED:30240301 Prevalence of Low-Attenuation Homogeneous Papillary Renal Cell Carcinoma Mimicking Renal Cysts on CT. Objective: The objective of our study was to determine the attenuation range of homogeneous papillary renal cell carcinomas (RCCs) on contrast-enhanced CT. Materials And Methods: This retrospective study was performed at two institutions from January 1, 2007, to January 1, 2017. Multiphasic CT studies with and without IV contrast material of 114 patients with pathologically proven papillary RCCs were independently reviewed by two sets of two abdominal radiologists. Seventy-two cases were excluded because of subjective lesion heterogeneity, leaving 42 homogeneous RCCs. Three ROIs were placed on all lesions for all CT phases, and the mean attenuations were calculated. Results: Mean lesion size was 2.8 cm (range, 1.2-11.0 cm). The attenuation range for each CT phase was as follows: unenhanced, 14.7-50.7 HU; corticomedullary, 32.2-99.5 HU; portal venous, 40.8-95.1 HU; nephrographic, 17.9-90.8 HU; and excretory, 18.0-73.0 HU. Two of 114 (1.8%; 95% CI, 0.2-6.5%) RCCs were homogeneous and less than 30 HU on the portal venous or nephrographic phase. One of these RCCs was a solid hypoenhancing mass, and the other was a homogeneous cystic RCC. Of the cases with an unenhanced phase, three of 107 (2.8%; 95% CI, 0.6-8.8%) were both homogeneous and were less than 20 HU in attenuation.
Conclusion: Papillary RCCs are rarely both subjectively homogeneous and less than 20 HU at unenhanced CT and less than 30 HU at portal venous or nephrographic phase CT. abstract_id: PUBMED:18806171 Comparison of contrast-enhanced sonography with unenhanced sonography and contrast-enhanced CT in the diagnosis of malignancy in complex cystic renal masses. Objective: The objective of our study was to evaluate contrast-enhanced sonography in the diagnosis of malignancy in complex cystic renal masses. Materials And Methods: We analyzed a series of 40 cystic renal masses (diameter, 2-8 cm) with a complex pattern at contrast-enhanced CT in 40 consecutive subjects (18 men, 22 women; mean age ± SD, 62 ± 11 years). Each renal mass was scanned using sonography without contrast material and after IV injection of sulfur hexafluoride-filled microbubbles during the arterial (15-40 seconds) and venous (40-120 seconds from injection) phases. Two radiologists in consensus assessed onsite the enhancement patterns in the peripheral wall and intracystic septa and the evidence of solid endocystic components. Three blinded readers with 2, 6, and 10 years of experience in renal imaging performed a retrospective off-site interpretation of unenhanced sonography, contrast-enhanced sonography, and CT images and made a benign or malignant diagnosis according to reference diagnostic criteria for contrast-enhanced sonography and to the Bosniak classification for CT. Results: Final diagnoses comprised two multilocular cystic nephromas, two inflammatory and seven hemorrhagic cysts, and eight uncomplicated benign cysts and 21 cystic renal cell carcinomas. The overall diagnostic accuracy of contrast-enhanced sonography was better than unenhanced sonography and CT (contrast-enhanced sonography vs unenhanced sonography vs CT: reader 1, 83% vs 30% vs 75%; reader 2, 83% vs 30% vs 63%; reader 3, 80% vs 30% vs 70%; p < 0.05, McNemar test). Conclusion: Contrast-enhanced sonography was found to be better than unenhanced sonography and CT in the diagnosis of malignancy in complex cystic renal masses. abstract_id: PUBMED:12819331 Distinction of renal cell carcinomas from high-attenuation renal cysts at portal venous phase contrast-enhanced CT. Purpose: To determine if renal cell carcinomas can be distinguished from high-attenuation renal cysts on portal venous phase contrast material-enhanced computed tomographic (CT) scans. Materials And Methods: Fifty-seven renal cell carcinomas and 37 high-attenuation (>20 HU) renal cysts that were at least 1 cm in diameter were retrospectively identified in 90 patients who underwent portal venous phase contrast-enhanced CT. Two independent readers recorded the CT number and degree of internal heterogeneity (uniform or mildly, moderately, or markedly heterogeneous) of all lesions. Logistic regression analysis with adjustment for the two readers was used to examine the association between clinical and CT parameters and the final diagnosis. Stepwise logistic regression analysis was used to identify independent distinguishing variables, which were then incorporated in a classification and regression tree analysis to construct the most efficient classification algorithm. Results: Renal cell carcinomas were of significantly greater size (5.10 cm vs 2.84 cm, P <.001), mean attenuation (101.2 HU vs 55.3 HU, P <.001), and internal heterogeneity (P <.001) than high-attenuation renal cysts.
Stepwise logistic regression showed attenuation and internal heterogeneity were both independent variables associated with final diagnosis, but lesion size was not an independent distinguishing variable after adjustment for internal heterogeneity. Classification and regression tree analysis demonstrated that an attenuation greater than 70 HU or the presence of moderate or marked internal heterogeneity was an accurate sign of renal cell carcinoma, with a sensitivity of 91% (52 of 57) for both readers and a specificity of 92% (34 of 37) for reader 1 and 84% (31 of 37) for reader 2. Conclusion: On portal venous phase contrast-enhanced CT scans, attenuation greater than 70 HU or moderate or marked internal heterogeneity favor a diagnosis of renal cell carcinoma over a diagnosis of high-attenuation renal cyst. abstract_id: PUBMED:28328257 Differentiating Renal Neoplasms From Simple Cysts on Contrast-Enhanced CT on the Basis of Attenuation and Homogeneity. Objective: The purpose of this study is to compare the attenuation and homogeneity of renal neoplasms with those of cysts on contrast-enhanced CT. Materials And Methods: A total of 129 renal neoplasms and 24 simple cysts were evaluated. Two readers determined whether each mass was qualitatively heterogeneous or homogeneous. Mean, minimum, and maximum attenuation values were measured. Statistical analysis was performed. Results: A total of 116 heterogeneous renal cell carcinomas (RCCs) (99 clear cell, four papillary, four oncocytic, seven chromophobe, and two unclassified RCCs), 13 homogeneous RCCs (10 papillary, two oncocytic, and one chromophobe RCC), and 24 cysts (all of which were homogeneous) were evaluated. All homogeneous RCCs had mean attenuation values of more than 42 HU, whereas renal cysts had mean attenuation values of up to 30 HU (p < 0.001). Two readers qualitatively and identically categorized all RCCs as homogeneous or heterogeneous (κ = 1.0; p < 0.001). Conclusion: Homogeneous simple renal cysts can have mean attenuation values of up to 30 HU, as determined by contrast-enhanced CT, whereas homogeneous RCCs have mean attenuation values as low as 42 HU, with no overlap occurring between the two groups. These data suggest that further evaluation of a homogeneous renal mass with a mean attenuation value of 30 HU or less on a contrast-enhanced CT scan likely is unwarranted. abstract_id: PUBMED:3966136 High-density renal cysts in autosomal dominant polycystic kidney disease demonstrated by CT. Unenhanced abdominal CT scans of 35 patients with autosomal dominant polycystic kidney disease (ADPKD) showed multiple high-density (58-84 HU) renal cysts in 42.9% of patients, occasional high-density cysts in 25.7%, and no high-density cysts in 31.4%. These high-density cysts were usually subcapsular and were more frequent in patients with markedly enlarged kidneys and flank pain at the time of CT. Several were found to contain altered blood on pathological analysis. Follow-up CT often showed a reduction in cyst densities, although some cysts developed mural calcification and calcification of their contents. High-density cysts are probably produced by cyst hemorrhage. This may occur randomly as part of the natural history of the disease or may result from minor trauma to the enlarged kidneys. Renal carcinomas occur rarely in ADPKD and may occasionally be hyperdense.
However, high-density cysts may usually be distinguished from carcinomas on CT by their smooth contours, sharp interfaces with renal parenchyma, homogeneity, and lack of contrast enhancement. abstract_id: PUBMED:8079852 Small (≤ 3 cm) renal masses: correlation of spiral CT features and pathologic findings. Objective: We describe our 3-year experience using spiral CT in the evaluation of small renal masses in order to determine the usefulness of this technique for classifying the lesions and to correlate specific CT features with pathologic findings. Materials And Methods: We retrospectively analyzed spiral CT scans and results of pathologic examinations of surgically extirpated small (≤ 3 cm) renal masses in 35 patients. The masses included 27 renal cell carcinomas, two transitional cell carcinomas, one leiomyoma, one angiomyolipoma, and four benign cysts. Several imaging features, including attenuation, pattern of contrast enhancement, presence and type of calcification, cyst wall, and septation, were correlated with pathologic findings. Results: Most renal cell carcinomas had a solid growth pattern (n = 19), had attenuation values on unenhanced scans of 20 HU or greater (n = 26), and had attenuation values that increased by at least 10 HU with contrast enhancement (n = 26). Only three renal cell carcinomas were mostly cystic on pathologic examination. Heterogeneous enhancement correlated with the presence of acellular regions (p = .02). Of 12 cystic masses, spiral CT showed the absence of a thick or nodular fibrous capsule in seven of nine masses (specificity, 0.78) and the absence of several (or nodular) septations in six of seven masses (specificity, 0.86) but was not as sensitive in detecting these features. Conclusion: Spiral CT can show many of the key imaging features of small renal masses used to distinguish between benign and malignant lesions. However, despite the theoretical benefits of volumetric CT, some lesions remain indeterminate and require surgical removal for diagnosis. Answer: Yes, high-attenuation renal cysts can be differentiated from renal cell carcinoma (RCC) at unenhanced CT. A study found that a homogeneous mass measuring 70 HU or greater at unenhanced CT has a greater than 99.9% chance of representing a high-attenuation renal cyst, which may help differentiate these cysts from RCCs (PUBMED:17456870). Additionally, another study indicated that renal cell carcinoma can measure 20 HU or less at unenhanced CT when a single large region of interest (ROI) is used. However, most RCCs appear heterogeneous, and those that appear homogeneous are likely to reveal attenuations greater than 20 HU when multiple, small ROIs are used, which may prevent some RCCs from being misdiagnosed as cysts on unenhanced CT (PUBMED:28389785). Furthermore, CT texture analysis (CTTA) has been shown to be useful in differentiating low-attenuation RCC from cysts on unenhanced CT, with high entropy revealed with CTTA being a distinguishing feature (PUBMED:31386573). Another study found that homogeneous simple renal cysts can have mean attenuation values of up to 30 HU on contrast-enhanced CT, whereas homogeneous RCCs have mean attenuation values as low as 42 HU, with no overlap occurring between the two groups (PUBMED:28328257).
However, it is important to note that some studies suggest caution when using virtual unenhanced (VUE) images from dual-layer dual-energy CT (dlDECT) to characterize renal lesions, as they may underestimate the attenuation of simple and hyperattenuating cysts, leading to false-positive findings of enhancement (PUBMED:35441533). In summary, while high-attenuation renal cysts can be differentiated from RCC at unenhanced CT using various methods, including Hounsfield unit measurements, lesion heterogeneity, and CT texture analysis, careful interpretation and additional imaging studies may be required for definitive characterization.
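One of the quantitative tools cited above, CT texture analysis (PUBMED:31386573), rests on first-order entropy of the ROI's gray-level histogram: near-uniform cyst fluid yields low entropy, while heterogeneous tumor tissue yields high entropy. The Python sketch below shows one plausible way such a value could be computed and compared against the reported 3.9 threshold; the fixed HU binning and the synthetic ROI values are assumptions, since the abstract does not describe the exact CTTA pipeline.

import numpy as np

def first_order_entropy(roi_hu, bins=64, hu_range=(-30, 70)):
    # Shannon entropy (bits) of the ROI's gray-level histogram; fixed bin
    # edges are assumed so narrow and broad HU distributions differ.
    counts, _ = np.histogram(roi_hu, bins=bins, range=hu_range)
    p = counts / counts.sum()
    p = p[p > 0]                    # drop empty bins (log2(0) is undefined)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
cyst_like = rng.normal(9, 3, 2000)     # narrow HU spread, synthetic "cyst"
tumor_like = rng.normal(16, 12, 2000)  # broad HU spread, synthetic "RCC"
for name, roi in (("cyst-like", cyst_like), ("tumor-like", tumor_like)):
    e = first_order_entropy(roi)
    print(f"{name}: entropy = {e:.2f} ->", "suspicious for RCC" if e > 3.9 else "cyst-like")

On this synthetic data the narrow distribution scores well below 3.9 and the broad one well above it, illustrating why entropy separates homogeneous cysts from heterogeneous low-attenuation RCCs.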
Instruction: Can Glypican3 be diagnostic for early hepatocellular carcinoma among Egyptian patients? Abstracts: abstract_id: PUBMED:24460300 Can Glypican3 be diagnostic for early hepatocellular carcinoma among Egyptian patients? Background: Because of the high prevalence of hepatocellular carcinoma (HCC) in Egypt, new markers with better diagnostic performance than alpha-feto protein (AFP) are needed to help in early diagnosis. The aim of this work was to compare the clinical utility of both serum and mRNA glypican3 (GPC3) as probable diagnostic markers for HCC among Egyptian patients. Materials And Methods: A total of 60 subjects, including 40 with HCC, 10 with cirrhosis, and 10 normal controls, were analyzed for serum GPC3 (sGPC3) by ELISA. GPC-3 mRNA from circulating peripheral blood mononuclear cells was amplified by RT-PCR. Both markers were compared to some prognostic factors of HCC, and sensitivity of both techniques was compared. Results: Serum glypican-3 and AFP were significantly higher in the HCC group compared to cirrhotic and normal controls (p<0.001). Sensitivity and specificity were 95% each for sGlypican-3, 82.5% and 85% for AFP, 100% and 90% for Glypican3 mRNA, and 80% and 95% for the combination of sGPC3 and AFP, respectively. Conclusion: Both serum GPC-3 and GPC-3 mRNA are promising diagnostic markers for early detection of HCC in Egyptian patients. RT-PCR proved to be more sensitive (100%) than ELISA (95%) in detecting glypican3. abstract_id: PUBMED:37795234 GPC3 gene expression and allelic discrimination of FZD7 gene in Egyptian patients with hepatocellular carcinoma. Background: Hepatocellular carcinoma (HCC) is the second leading cause of cancer-related deaths worldwide, and especially in Egypt. Early diagnosis of HCC greatly improves the survival and prognosis of patients. The low sensitivity and specificity of alpha-fetoprotein (AFP) have led to the demand for novel biomarkers of HCC. The aim of the present study was to evaluate the validity of frizzled-7 (FZD7) and glypican-3 (GPC3) gene expression as potential biomarkers for HCC early diagnosis, and to investigate the association between FZD7 rs2280509 polymorphism and HCC risk. Materials And Methods: Quantification of FZD7 and GPC3 gene expression by real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR) assay, and genotyping FZD7 (rs2280509 SNP) gene polymorphism using RT-PCR. Results: The current results revealed that FZD7 gene expression had a greater area under the curve (AUC) for identifying HCC than GPC3 gene expression and AFP levels. The combination of the three markers as a panel showed a better diagnostic performance with a greater AUC than any of the single markers alone (p < 0.05). The FZD7 rs2280509 polymorphism (CT) was found to be significantly associated with an increased risk of HCC. The CT genotype and T allele were significantly more prevalent in the HCC group compared to either the cirrhosis (p = 0.03) or control groups (p = 0.0009 and 0.002, respectively). Conclusion: FZD7 and GPC3 gene expressions have a complementary role in early HCC detection, with a greater diagnostic sensitivity and accuracy than AFP. In addition, FZD7 rs2280509 polymorphism is significantly associated with an increased risk of HCC in the Egyptian population.
After an Egyptian fruit bat (Rousettus aegyptiacus) in a zoo became emaciated and died, a necropsy revealed multiple nodules on the liver and lung surfaces. Microscopy revealed that the liver nodules consisted of neoplastic hepatocytes and showed metastasis in the lung lobes. Most of the neoplastic cells in the liver and lung showed positive labeling for HepPar-1, cytokeratin 19, glypican-3, and Ki-67. Hepatocellular degeneration and necrosis were diffuse in the liver parenchyma. Berlin blue staining revealed large amounts of iron in normal and neoplastic cells. Based on these pieces of evidence, this case was diagnosed as hepatocellular carcinoma with hemochromatosis. This is believed to be the first report of hepatocellular carcinoma in an Egyptian fruit bat that has been immunophenotypically examined in detail by pathological examination. abstract_id: PUBMED:29391759 Multi-parameter gene expression profiling of peripheral blood for early detection of hepatocellular carcinoma. Aim: In our previous study, we have built a nine-gene (GPC3, HGF, ANXA1, FOS, SPAG9, HSPA1B, CXCR4, PFN1, and CALR) expression detection system based on the GeXP system. Based on peripheral blood and GeXP, we aimed to analyze the results of gene expression by different multi-parameter analysis methods and build a diagnostic model to classify hepatocellular carcinoma (HCC) patients and healthy people. Methods: Logistic regression analysis, discriminant analysis, classification tree analysis, and artificial neural network were used for the multi-parameter gene expression analysis method. One hundred and three patients with early HCC and 54 age-matched healthy normal controls were used to build a diagnostic model. Fifty-two patients with early HCC and 34 healthy people were used for validation. The area under the curve, sensitivity, and specificity were used as diagnostic indicators. Results: Artificial neural network of the total nine genes had the best diagnostic value, and the AUC, sensitivity, and specificity were 0.943, 98%, and 85%, respectively. At last, 52 HCC patients and 34 healthy normal controls were used for validation. The sensitivity and specificity were 96% and 86%, respectively. Conclusion: Multi-parameter analysis methods may increase the diagnostic value compared to single factor analysis, and they may become a trend in clinical diagnosis in the future. abstract_id: PUBMED:24841158 Comparing prothrombin induced by vitamin K absence-II (PIVKA-II) with the oncofetal proteins glypican-3, Alpha feto protein and carcinoembryonic antigen in diagnosing hepatocellular carcinoma among Egyptian patients. Background: Hepatocellular carcinoma (HCC) is usually asymptomatic in the early stage and does not show elevated alpha-feto protein (AFP). AFP shows 60-80% sensitivity in diagnosing HCC. Glypican3 (GPC-3) is an oncofetal protein that is only detected in HCC cells but not in benign liver tissues, while carcinoembryonic antigen (CEA) is expressed in various neoplasms including HCC, although it is not specific for HCC. Prothrombin induced by vitamin K absence-II (PIVKA-II) is an abnormal prothrombin protein that is increased in the serum of HCC patients. It has higher sensitivity and specificity compared to AFP. The aim of this study is to compare the clinical utility of PIVKA-II with GPC-3, AFP and CEA in diagnosing HCC. Patients And Methods: This study included 40 patients with HCC, 10 patients with cirrhosis as a benign control group, and 10 apparently healthy volunteers as normal controls.
Serum samples were subjected to routine laboratory investigations, measurement of CEA and AFP using the MEIA technique (Axsym), and measurement of glypican3 and PIVKA-II using the ELISA technique, in the sera of all patients and controls. Results: All markers showed the highest results in the HCC group. Higher concentrations of PIVKA-II were detected in patients with splenomegaly, and in tumors larger than 3 cm. Combination of Glypican-3 and PIVKA-II showed the highest sensitivity, while GPC-3 alone and combination of GPC-3 and AFP showed the highest specificity to differentiate HCC from liver cirrhosis and normal controls. GPC-3, PIVKA-II, and combination of both showed the highest sensitivity, while GPC-3 alone showed the highest specificity to differentiate HCC from liver cirrhosis. Conclusion: Glypican-3 is the only oncofetal antigen that showed comparably high diagnostic accuracy to PIVKA-II in diagnosing HCC among Egyptian patients. abstract_id: PUBMED:24369324 New genetic markers for diagnosis of hepatitis C related hepatocellular carcinoma in Egyptian patients. Background And Aim: Early detection of hepatocellular carcinoma (HCC) enhances effective and curative management. New genetic markers with distinct diagnostic ability are required. Aim: to determine the expression of GPC3, PEG10, SERPINI1, MK and QP-C in the peripheral blood of HCC patients. Methods: 74 HCV patients were recruited and divided into three groups: chronic hepatitis (I), liver cirrhosis (II) and HCC (III). Demographics, laboratory and imaging data were collected. Child score and metastatic work-up were completed. The expression of the five candidate genes in the peripheral blood was performed by qRT-PCR assay. Results: Groups were gender matched; age in group I was significantly lower than in groups II and III (37.7 vs 50.4 and 55.6, p value <0.005). Child score (groups II and III, A/B/C): 7/5/6 and 20/6/3. AFP was significantly higher in group III than I and II (204 vs 3.9 and 6.9, p < 0.01). In the HCC group, 69% of the lesions were < 5 cm and had 1-2 nodules; 14% had metastases. GPC3, PEG10, SERPINI1 and MK mRNA were significantly higher in the HCC group compared to the other groups while QP-C mRNA was higher in the chronic hepatitis C group compared to other groups. The gene expression values in HCC patients were independent of the tumor size, AFP levels or extrahepatic metastasis. Combined measurement of the five gene markers showed 100% sensitivity and 33% specificity, 48% PPV and 100% NPV. Conclusion: GPC3, PEG10, SERPINI1 and MK are genetic markers that can represent a useful tool for detection of HCC. abstract_id: PUBMED:22820138 Application of HBx-induced anti-URGs as early warning biomarker of cirrhosis and HCC. Background: Hepatitis B virus (HBV) carriers are at high risk for the development of hepatocellular carcinoma (HCC), but there are no reliable markers that will identify such high-risk patients. HBV up-regulates the expression of selected genes (URGs) in the liver during chronic infection. These aberrantly expressed proteins trigger corresponding antibodies (anti-URGs) that appear prior to the detection of HCC. This study was undertaken to see if the anti-URGs could be used as early warning biomarkers of HBV-induced liver cirrhosis and HCC. Methods: In a cross-sectional study, a total of 625 serum samples from HBV-infected patients and uninfected controls were tested for the anti-URGs using specific ELISAs.
Results: The number and specificity of anti-URGs correlated with the severity of liver disease. Anti-URGs were predominantly present among patients with HBV-associated HCC (55.2%) and cirrhosis (60.7%), and at a lower frequency among patients with chronic hepatitis (35.8%), and at still lower frequencies in most asymptomatic carriers (12.3%) with normal ALT, among patients with chronic hepatitis C (38.5%) and blood donors (0.9%). These anti-URGs were rarely detected in sera from those with tumors other than HCC, except among HBV-infected patients with cholangiocarcinoma and in some patients with drug-induced hepatitis. The presence of 3 or more anti-URGs could precede the diagnosis of cirrhosis or HCC by 11.8 months on average, and HBV hepatitis patients with 3 or more anti-URGs had a much higher risk (5/20 vs 0/30) of developing cirrhosis and HCC than patients with fewer anti-URGs. As an early warning biomarker, 3 or more anti-URGs served as the threshold to separate cirrhosis and HCC from other conditions with a moderate sensitivity (58.3%) and specificity (80.0%), which was better than other biomarkers (AFP, AFP-L3, GPC3 and GP73), and this would improve to 70.3% when combined with another biomarker. Conclusions: The results of this clinical validation study suggest that the anti-URGs might have diagnostic/prognostic utility among patients at high risk for the development of cirrhosis and HCC. abstract_id: PUBMED:24186850 Evaluation of circulating Transforming growth factor-beta1, Glypican-3 and Golgi protein-73 mRNAs expression as predictive markers for hepatocellular carcinoma in Egyptian patients. Hepatocellular carcinoma (HCC) incidence is fast-growing, especially in countries where viral hepatitis is highly prevalent. Its poor prognosis has driven the research toward the discovery of sensitive markers for early detection. We investigated the usefulness of serum Transforming growth factor-beta1 (TGF-β1), Glypican-3 (GPC3), and Golgi protein-73 (GP73) mRNAs as early biomarkers in Egyptian HCC patients chronically infected with hepatitis C virus (HCV) in comparison with serum alpha-fetoprotein (AFP). Using semi-quantitative RT-PCR and densitometry analysis, circulating TGF-β1, GPC3, and GP73 mRNAs expressions were estimated in 15 healthy adults, 15 chronic HCV (CHC) patients and 25 HCC patients. Serum GP73 expression percentage in the HCC group was significantly higher than controls (100 vs. 40 %, P ≤ 0.001) and when compared to elevated serum AFP levels (100 vs. 36 %, P ≤ 0.001). TGF-β1 and GP73 expression means were also higher in HCC patients than controls and CHC patients (P < 0.05). GPC3 expression showed higher frequency in CHC patients compared to the HCC group (80 vs. 28 %, P = 0.0016). According to the study cutoffs, serum TGF-β1 and GP73 mRNAs showed 60 and 96 % sensitivities for HCC diagnosis with 100 and 95 % specificities, respectively. Furthermore, elevated GP73 mRNA expression levels in early HCC were significantly increased compared to those of TGF-β1 mRNA and to high serum AFP (92.3 vs. 53.8 and 23.1 %; P = 0.03 and 0.0004, respectively). In conclusion, circulating TGF-β1 and GP73 mRNAs could be useful biomarkers for HCV-induced HCC diagnosis. Moreover, serum GP73 mRNA is more sensitive for early cancer detection than AFP and TGF-β1 mRNA. However, these results need further validation studies.
Background: Hepatocellular carcinoma (HCC) is characterized by a multi-cause, multi-stage and multi-focus process of tumor progression. Its prognosis is poor and early diagnosis is of utmost importance. This study was undertaken to investigate the dynamic expression of oncofetal antigen glypican-3 (GPC-3) and GPC-3 mRNA in hepatocarcinogenesis and to explore their early diagnostic value for HCC. Methods: A hepatoma model was induced in male Sprague-Dawley rats with 0.05% 2-fluorenylacetamide and confirmed by hematoxylin and eosin staining and gamma-glutamyltransferase (GGT) expression. Total RNA was purified and transcribed into cDNA by reverse transcription. Fragments of the GPC-3 gene were amplified by nested RT-PCR, and confirmed by sequencing. GPC-3 was analyzed by immunohistochemistry, Western blotting or ELISA. Results: Positive GPC-3 expression showed as brown granule-like staining localized in the cytoplasm. Histological examination of hepatocytes revealed three morphological stages of granule-like degeneration, atypical hyperplasia (precancerous), and cancer formation, with a progressive increase of liver total RNA and GGT expression. The incidence of liver GPC-3 mRNA and GPC-3, and serum GPC-3 was 100%, 100% and 77.8% in the HCC group, 100%, 100%, and 66.7% in the precancerous group, 83.3%, 83.3%, and 38.9% in the degeneration group, and no expression in the liver or blood of the control group, respectively. There was a positive correlation between liver GPC-3 mRNA and total RNA level (r=0.475, P<0.05) or liver GPC-3 (r=1.0, P<0.001) or serum GPC-3 (r=0.994, P<0.001). Conclusion: Abnormal oncofetal antigen GPC-3 and GPC-3 mRNA expression in hepatocarcinogenesis may be promising molecular markers for early diagnosis of HCC. abstract_id: PUBMED:20101765 Present and future possibilities for early diagnosis of hepatocellular carcinoma. Hepatocellular carcinoma (HCC) represents the fifth most common cancer in the world, and the third most frequent oncological cause of death. The incidence of HCC is on the increase. HCC typically develops in patients with chronic liver diseases, and cirrhosis, usually with viral etiology, is the strongest predisposing factor. Nowadays HCC diagnosis is a multistage process including clinical, laboratory, imaging and pathological examinations. The prognosis of HCC is mostly poor, because of detection at an advanced, non-resectable stage. Potentially curative treatment (surgery) is limited and really possible only for cases with small HCC malignancies. For this reason, more effective surveillance strategies should be used to screen for early occurrence of HCC targeted to the population at risk. So far, the generally accepted serological marker is alpha-fetoprotein (AFP). Its diagnostic accuracy is unsatisfactory and questionable because of low sensitivity, therefore there is a strong demand by clinicians for new HCC-specific biomarkers. In this review, we will focus on other biomarkers that seem to improve HCC diagnosis, such as AFP-L3, des-gamma-carboxyprothrombin, alpha-l-fucosidase, gamma-glutamyl transferase, glypican-3, squamous cell carcinoma antigen, a new generation of immunoglobulin M-immunocomplexes, and very promising gene-expression profiling. Answer: Yes, Glypican3 (GPC3) can be diagnostic for early hepatocellular carcinoma (HCC) among Egyptian patients. Studies have shown that both serum GPC3 and GPC3 mRNA are promising diagnostic markers for the early detection of HCC in this population.
In one study, serum glypican-3 and GPC3 mRNA from circulating peripheral blood mononuclear cells were analyzed and found to have high sensitivity and specificity for HCC, with GPC3 mRNA demonstrating 100% sensitivity and 90% specificity (PUBMED:24460300). Another study confirmed the utility of GPC3 gene expression as a potential biomarker for early HCC diagnosis, showing that it has a complementary role with frizzled-7 (FZD7) gene expression and provides greater diagnostic sensitivity and accuracy than alpha-fetoprotein (AFP) (PUBMED:37795234). Furthermore, research has indicated that GPC3 is an oncofetal protein that is only detected in HCC cells and not in benign liver tissues, making it a specific marker for HCC. It has been compared favorably with other markers such as prothrombin induced by vitamin K absence-II (PIVKA-II), AFP, and carcinoembryonic antigen (CEA) in diagnosing HCC among Egyptian patients (PUBMED:24841158). Additionally, the expression of GPC3, along with other genes, has been used to build diagnostic models with high sensitivity and specificity for classifying HCC patients and healthy individuals (PUBMED:29391759). In summary, the evidence suggests that GPC3 is a valuable biomarker for the early diagnosis of HCC among Egyptian patients, with several studies demonstrating its effectiveness in distinguishing HCC from non-cancerous conditions (PUBMED:24460300, PUBMED:37795234, PUBMED:24841158, PUBMED:29391759).
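The GPC3 mRNA results above come from RT-PCR and qRT-PCR assays (PUBMED:24460300, PUBMED:37795234), although the abstracts do not state how expression was quantified. One standard approach for relative quantification is the Livak 2^-ΔΔCt method, sketched below in Python with invented cycle-threshold (Ct) values; the GAPDH reference gene and all numbers are hypothetical, not taken from the studies.

# Livak 2^(-ddCt) relative quantification -- a common, assumed method;
# the cited studies do not specify theirs. All Ct values are invented.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: GPC3 vs. a GAPDH reference in an HCC sample and a control.
print(fold_change(24.0, 18.0, 28.5, 18.2))  # ~19.7-fold higher GPC3 in the HCC sample

Because Ct is a cycle count on a log2 scale, a difference of one cycle corresponds to a twofold difference in starting template, which is what the 2^-ΔΔCt transform captures.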
Instruction: Do holes in long-lasting insecticidal nets compromise their efficacy against pyrethroid resistant Anopheles gambiae and Culex quinquefasciatus? Abstracts: abstract_id: PUBMED:26310788 Do holes in long-lasting insecticidal nets compromise their efficacy against pyrethroid resistant Anopheles gambiae and Culex quinquefasciatus? Results from a release-recapture study in experimental huts. Background: Resistance of malaria vectors to pyrethroids threatens the effectiveness of long-lasting insecticidal nets (LLINs) as a tool for malaria control. Recent experimental hut and observational studies in Benin show that pyrethroid resistance reduces the insecticidal effect and personal protection of LLINs especially when they become torn. The World Health Organization has proposed a threshold for when nets are "too torn" at 1,000 cm² for rectangular holes and 790 cm² for round holes. This study examines whether there is a threshold above which LLINs no longer reduce malaria transmission. Methods: Intact and artificially-holed LLINs under three months old and untreated nets were tested by releasing mosquitoes from a susceptible Anopheles gambiae colony, a pyrethroid-resistant An. gambiae population and a resistant Culex quinquefasciatus population in closed experimental huts in Southern Benin, West Africa. The efficacy of LLINs and untreated nets was evaluated in terms of protection against blood feeding, insecticidal effect and potential effect on malaria transmission. Results: Personal protection by both LLINs and untreated nets decreased exponentially with increasing holed surface area, without evidence for a specific threshold beyond which LLINs could be considered as ineffective. The insecticidal effect of LLINs was lower in resistant mosquitoes than in susceptible mosquitoes, but holed surface area had little or no impact on the insecticidal effect of LLINs. LLINs with 22,500 cm² holed surface area and target insecticide content provided a personal protection of 0.60 (95% CI 0.44-0.73) and a low insecticidal effect of 0.20 (95% CI 0.12-0.30) against resistant An. gambiae. Nevertheless, mathematical models suggested that if 80% of the population uses such nets, they could still prevent 94% (95% CI 89-97%) of transmission by pyrethroid-resistant An. gambiae. Conclusions: Even though personal protection by LLINs against feeding mosquitoes is strongly reduced by holes, the insecticidal effect of LLINs is independent of the holed surface area, but strongly dependent on insecticide resistance. Badly torn nets that still contain insecticide have potential to reduce malaria transmission. The relationship between LLIN integrity and efficacy needs to be understood in order to guide LLIN distribution policy. abstract_id: PUBMED:29997072 Susceptibility of Anopheles gambiae sl and Culex quinquefasciatus to diverse insecticides in the city of N'Djamena. Mosquito resistance to insecticides remains a major concern for vector control programs based on chemical methods. Hence, a thorough knowledge of vector susceptibility to products recommended for public health is required for effective vector control programs. In this study, we assessed the susceptibility of wild Anopheles gambiae sl and Culex quinquefasciatus populations from N'Djamena (Chad) to four insecticide classes recommended for public health. The study took place from July to October 2014.
Following WHO guidelines, bioassays were performed on two- to four-day-old, non-engorged female mosquitoes of both species, collected in the field. The insecticides used were deltamethrin (0.05%), permethrin (1%), DDT (4%), bendiocarb (0.01%), and malathion (5%). Anopheles gambiae s.l. showed full susceptibility to bendiocarb and malathion but was resistant to pyrethroids; mortality rates were 2% for deltamethrin, 2% for permethrin, and 0% for DDT. Culex quinquefasciatus was susceptible to malathion but resistant to the other insecticides, with mortality rates of 22% for deltamethrin, 29% for permethrin, 23% for DDT, and 45% for bendiocarb. The resistance of Anopheles gambiae s.l. and Culex quinquefasciatus may affect the population's adherence to the use of insecticide-treated mosquito nets. Malaria control programs are advised to extend insecticide resistance monitoring to the main culicid species that bite human populations. abstract_id: PUBMED:35284898 Field evaluation of Veeralin, an alpha-cypermethrin + PBO long-lasting insecticidal net, against natural populations of Anopheles funestus in experimental huts in Muheza, Tanzania. The success of long-lasting insecticidal nets (LLIN) as the primary method for preventing malaria is threatened by pyrethroid resistance in Anopheles vectors. New-generation long-lasting nets incorporating the synergist piperonyl butoxide (PBO) with a pyrethroid are designed to control insecticide-resistant mosquitoes. The efficacy of Veeralin® PBO LLINs was evaluated in experimental huts against wild free-flying pyrethroid-resistant Anopheles funestus (s.l.). Mosquito mortality, blood-feeding inhibition and personal protection were compared between untreated nets, standard LLINs and PBO/pyrethroid combination nets. Blood-feeding rates recorded with 20-times washed Veeralin were not significantly different from those with 20-times washed PermaNet 3.0 LLIN, a WHO Pre-Qualification Team (PQT) approved PBO/pyrethroid LLIN. This provides evidence that the Veeralin LLIN provides similar blood-feeding inhibition to the standard approved LLIN and thus meets WHO PQT criteria for blood-feeding. Results show significantly higher mortality for Veeralin PBO LLINs against pyrethroid-resistant Anopheles funestus (s.l.) compared to DuraNet, a WHO PQT approved standard pyrethroid-only LLIN, both when unwashed and washed 20 times. The improved efficacy over a standard pyrethroid-only LLIN can be attributed to the effect of PBO in the Veeralin LLIN, hence meeting the Vector Control Advisory Group (VCAG) criteria for a resistance-breaking LLIN. abstract_id: PUBMED:37367368 Ovary Dissection Is a Sensitive Measure of Sterility in Anopheles gambiae Exposed to the Insect Growth Regulator Pyriproxyfen. Pyriproxyfen (PPF) is an insect growth regulator used in the co-treatment of long-lasting insecticidal nets for its ability to sterilize female mosquitoes. To evaluate the efficacy of PPF-treated nets on mosquito reproductivity, most studies observe oviposition (egg-laying) rates in the laboratory. This technique has several technical disadvantages. Our study assessed whether ovary dissection could serve as an effective proxy for evaluating sterility in Anopheles gambiae mosquitoes. Blood-fed females were exposed to untreated or PPF-treated nets in cylinder assays and followed over several days to observe oviposition rates or egg development by dissection.
For identifying PPF-exposed mosquitoes, both techniques demonstrated high sensitivity (oviposition: 99.1%; dissection: 100.0%), but for identifying non-exposed mosquitoes, specificity was significantly higher in the dissection group (52.5% vs. 18.9%). To assess whether dissection could be applied to nets treated with a pyrethroid or co-treated with a pyrethroid and PPF in tunnel tests, a blinded investigator performed dissections to predict the PPF exposure status across different treatment groups. The exposure status of dissected females was predicted with >90% accuracy. We report that dissection is a sensitive technique to assess sterility in female Anopheles gambiae mosquitoes and can be used as a predictor of PPF exposure. abstract_id: PUBMED:19228349 Control of pyrethroid-resistant Anopheles gambiae and Culex quinquefasciatus mosquitoes with chlorfenapyr in Benin. Objective: To compare the efficacy of chlorfenapyr applied on mosquito nets and as an indoor residual spray against populations of Anopheles gambiae and Culex quinquefasciatus in an area of Benin that shows problematic levels of pyrethroid resistance. Method: Eight-week trial conducted in experimental huts. Results: Indoor residual spraying killed 82.9% of An. gambiae overall (mean mortality: 79.5%) compared to 53.5% overall (mean mortality: 61.7%) in the hut containing the lower-dosed ITN. Analysis of data on a fortnightly basis showed high levels of mosquito mortality and blood-feeding inhibition during the first few weeks after treatment. Control of C. quinquefasciatus by the IRS and ITN interventions showed a similar trend to that of An. gambiae, and though the average level of mortality was lower, it was still much higher than with pyrethroid treatments against this population. Chlorfenapyr's reputation for being rather slow acting was evident, particularly at lower dosages. The treatments showed no evidence of excito-repellent activity in this trial. Conclusion: Chlorfenapyr has the potential to control pyrethroid-resistant populations of An. gambiae. There is a need to develop long-lasting formulations of chlorfenapyr to prolong its residual life on nets and sprayed surfaces. On nets it could be combined with a contact-irritant pyrethroid to give improved protection against mosquito biting while killing pyrethroid-resistant mosquitoes that come into contact with the net. abstract_id: PUBMED:28095897 Efficacy of PermaNet® 3.0 and PermaNet® 2.0 nets against laboratory-reared and wild Anopheles gambiae sensu lato populations in northern Tanzania. Background: Mosquitoes have developed resistance against pyrethroids, the only class of insecticides approved for use on long-lasting insecticidal nets (LLINs). The present study sought to evaluate the efficacy of the pyrethroid-synergist PermaNet® 3.0 LLIN versus the pyrethroid-only PermaNet® 2.0 LLIN, in an East African hut design in Lower Moshi, northern Tanzania. In this setting, resistance to pyrethroid insecticides has been identified in Anopheles gambiae mosquitoes. Methods: Standard World Health Organization bioefficacy evaluations were conducted in both laboratory and experimental huts. Experimental hut evaluations were conducted in an area with a population of highly pyrethroid-resistant An. arabiensis mosquitoes. All nets used were subjected to cone bioassays and then to experimental hut trials.
Mosquito mortality, blood-feeding inhibition and personal protection rate were compared between untreated nets, unwashed LLINs and LLINs that were washed 20 times. Results: Both washed and unwashed PermaNet® 2.0 and PermaNet® 3.0 LLINs had knockdown and mortality rates of 100% against a susceptible strain of An. gambiae sensu stricto. The adjusted mortality rate of the wild mosquito population was higher after use of the unwashed PermaNet® 3.0 and PermaNet® 2.0 nets than after use of the washed nets. Conclusions: Given the increasing incidence of pyrethroid resistance in An. gambiae mosquitoes in Tanzania, we recommend that consideration be given to distributing PermaNet® 3.0 in areas with pyrethroid-resistant malaria vectors within the framework of a national insecticide-resistance management plan. abstract_id: PUBMED:28212636 Experimental hut evaluation of a novel long-lasting non-pyrethroid durable wall lining for control of pyrethroid-resistant Anopheles gambiae and Anopheles funestus in Tanzania. Background: A novel, insecticide-treated, durable wall lining (ITWL), which mimics indoor residual spraying (IRS), has been developed to provide prolonged vector control when fixed to the inner walls of houses. PermaNet® ITWL is a polypropylene material containing non-pyrethroids (abamectin and fenpyroximate) which migrate gradually to the surface. Methods: An experimental hut trial was conducted in an area of pyrethroid-resistant Anopheles gambiae s.l. and Anopheles funestus s.s. to compare the efficacy of non-pyrethroid ITWL, long-lasting insecticidal nets (LLIN) (Interceptor®), pyrethroid ITWL (ZeroVector®), and non-pyrethroid ITWL + LLIN. Results: The non-pyrethroid ITWL produced relatively low levels of mortality, between 40% and 50% for An. funestus and An. gambiae, across all treatments. Against An. funestus, the non-pyrethroid ITWL used without an LLIN produced 47% mortality, but this level of mortality was not significantly different from that of the LLIN alone (29%, P = 0.306) or ITWL + LLIN (35%, P = 0.385). Mortality levels for An. gambiae were similar to those for An. funestus with non-pyrethroid ITWL, producing 43% mortality compared with 26% for the LLIN. Exiting rates from ITWL huts were similar to the control and highest when the LLIN was present. An attempt to restrict mosquito access by covering the eave gap with ITWL (one eave open vs four open) had no effect on numbers entering. The LLIN provided personal protection when added to the ITWL, with only 30% blood-fed compared with 69% and 56% (P = 0.001) for ITWL alone. Cone bioassays on ITWL with 30 min exposure after the trial produced mortality of >90% using field An. gambiae. Conclusions: Despite high mortality in bioassays, the hut trial produced only limited mortality, which was attributed to pyrethroid resistance against the pyrethroid ITWL and low efficacy of the non-pyrethroid ITWL. Hut ceilings were left uncovered and may have served as a potential untreated refuge. By analogy to IRS campaigns, which also do not routinely treat ceilings, high community coverage with ITWL may still reduce malaria transmission. Restriction of eave gaps by 75% proved an inadequate barrier to mosquito entry. The findings represent the first 2 months after installation and do not necessarily predict long-term efficacy. abstract_id: PUBMED:30068334 Field efficacy of a new deltamethrin long-lasting insecticidal net (LifeNet©) against wild pyrethroid-resistant Anopheles gambiae in Benin.
Background: Malaria vector control is mostly based on long-lasting insecticidal nets (LLINs). To date, all LLINs fully recommended by the World Health Organization Pesticide Evaluation Scheme (WHOPES) are made of polyester or polyethylene. In this context, a new LLIN named LifeNet©, made of polypropylene fiber, has been developed. According to the manufacturer, LifeNet© is made of soft filament and has greater mechanical strength, superior insecticide wash resistance with a short insecticide regeneration time, a better flammability profile and a better environmental profile compared with polyester or polyethylene nets. Methods: Through a WHOPES-supervised trial, the efficacy of LifeNet© was evaluated in Benin in experimental huts against free-flying wild mosquitoes. Results: LifeNet© has equal or better performance in terms of wash resistance, exophily, blood-feeding inhibition and mortality compared with conventionally treated nets (CTN) treated with deltamethrin at 25 mg/m² and washed to just before exhaustion. Conclusions: The efficacy of LifeNet© observed in this trial indicates that this net fulfills the WHOPES requirements for long-lasting technology in Phase II. Through a Phase III trial currently ongoing in Southern Benin, the durability and the acceptability of this long-lasting insecticidal mosquito net will be assessed under community conditions. abstract_id: PUBMED:11297094 Combined pyrethroid and carbamate 'two-in-one' treated mosquito nets: field efficacy against pyrethroid-resistant Anopheles gambiae and Culex quinquefasciatus. A new approach is proposed in the treatment of mosquito nets, using a 'two-in-one' combination of pyrethroid and non-pyrethroid insecticides applied to different parts of bednets. The objectives are mainly to overcome certain limitations of pyrethroid-impregnated bednets currently recommended for malaria control purposes. Apart from developing alternatives to pyrethroid dependency, we sought to counteract pyrethroid irritant effects on mosquitoes (excito-repellency) and resistance to pyrethroids. The idea takes advantage of the presumed host-seeking behaviour of mosquitoes confronted by a net draped over a bed, whereby the mosquito may explore the net from the top downwards. Thus, nets could be more effective if treated on the upper part with a residual non-irritant insecticide (carbamate or organophosphate) and with a pyrethroid on the lower part. Sequential exposure to different insecticides with distinct modes of action is equivalent to the use of a mixture as a potential method of managing insecticide resistance. We also intended to improve the control of nuisance mosquitoes, especially Culex quinquefasciatus Say (Diptera: Culicidae) that often survive pyrethroids, in order to encourage public compliance with use of insecticide-treated nets (ITNs). Polyester bednets were pretreated with residual pyrethroid (bifenthrin 50 mg/m² or deltamethrin 25 mg/m²) on the lower half and with carbamate (carbosulfan 300 mg/m²) on the upper half to minimize contact with net users. Unreplicated examples of these 'two-in-one' treated nets were field-tested against wild mosquitoes, in comparison with an untreated net and bednets treated with each insecticide alone, including PermaNet, a wash-resistant formulation of deltamethrin 50 mg/m².
Overnight tests involved volunteers sleeping under the experimental bednets in verandah-trap huts at Yaokofikro, near Bouaké in Côte d'Ivoire, where the main malaria vector Anopheles gambiae Giles, as well as Culex quinquefasciatus Say, are highly resistant to pyrethroids. Efficacy of these ITNs was assessed in the huts by four entomological criteria: deterrency and induced exophily (effects on hut entry and exit), blood-feeding and mortality rates (immediate and delayed). Overall, the best impact was achieved by the bednet treated with carbosulfan alone, followed by 'two-in-one' treatments with carbosulfan plus pyrethroid. Blood-feeding rates were 13% for An. gambiae and 17% for Cx. quinquefasciatus in huts with untreated nets, but only 3% with carbosulfan ITNs, 7-11% with the combined ITN treatment, and 6-8% for An. gambiae and 12-14% for Cx. quinquefasciatus with pyrethroid alone. Mosquitoes that entered the huts were killed sooner by nets with the combined treatment than by pyrethroid alone. Mortality rates in response to ITNs with carbosulfan (alone or combined with pyrethroid) were significantly greater for Cx. quinquefasciatus, but not for An. gambiae, compared to ITNs with only pyrethroid. About 20% of sleepers reported potential side-effects (headache and/or sneezing) from use of the ITN treated with carbosulfan alone. Further development of this new 'two-in-one' ITN concept requires a range of investigations (choice of effective products, cost-benefit analysis, safety, etc.) leading to factory production of wash-resistant insecticidal nets treated with complementary insecticides. abstract_id: PUBMED:22087506 Combining indoor residual spraying with chlorfenapyr and long-lasting insecticidal bed nets for improved control of pyrethroid-resistant Anopheles gambiae: an experimental hut trial in Benin. Background: Neither indoor residual spraying (IRS) nor long-lasting insecticidal nets (LLINs) are able to fully interrupt transmission in holoendemic Africa as single interventions. Combining IRS and LLINs presents an opportunity for improved control and management of pyrethroid resistance through the simultaneous presentation of unrelated insecticides. Method: Chlorfenapyr IRS and a pyrethroid-impregnated polyester LLIN (WHO approved) were tested separately and together in experimental huts in southern Benin against pyrethroid-resistant Anopheles gambiae and Culex quinquefasciatus. The bed nets were deliberately holed with either six or 80 holes to examine the effect of increasing wear and tear on protectiveness. Anopheles gambiae were genotyped for the kdr gene to assess the combination's potential to prevent the selection of pyrethroid resistance. Results: The frequency of kdr was 84%. The overall mortality rates of An. gambiae were 37% and 49% with the six-hole and 80-hole LLINs, respectively, and reached 57% with chlorfenapyr IRS. Overall mortality rates were significantly higher with the combination treatments (82-83%) than with the LLIN or IRS individual treatments. Blood feeding (mosquito biting) rates were lowest with the 6-hole LLIN (12%), intermediate with the 80-hole LLIN (32%) and highest with untreated nets (56% with the 6-hole and 54% with the 80-hole nets). Blood feeding (biting) rates and repellency of mosquitoes with the combination of LLIN and chlorfenapyr IRS showed significant improvement compared to the IRS treatment but did not differ from the LLIN treatments, indicating that the LLINs were the primary agents of personal protection.
The combination killed significantly higher proportions of Cx. quinquefasciatus (51%, 41%) than the LLIN (15%, 13%) or IRS (32%) treatments. Conclusion: The chlorfenapyr IRS component was largely responsible for controlling pyrethroid-resistant mosquitoes, and the LLIN component was largely responsible for blood-feeding inhibition and personal protection. Together, the combination shows potential to provide additional levels of transmission control and personal protection against pyrethroid-resistant mosquitoes, thereby justifying the additional resources required. Chlorfenapyr has potential to manage pyrethroid resistance in the context of an expanding LLIN/IRS strategy. Answer: The efficacy of long-lasting insecticidal nets (LLINs) against pyrethroid-resistant Anopheles gambiae and Culex quinquefasciatus can be compromised when the nets have holes. A study examining the impact of holes in LLINs found that personal protection against blood-feeding by both LLINs and untreated nets decreased exponentially with increasing holed surface area. However, there was no evidence for a specific threshold beyond which LLINs could be considered ineffective. The insecticidal effect of LLINs was lower in resistant mosquitoes than in susceptible ones, but the holed surface area had little or no impact on the insecticidal effect of LLINs. Even LLINs with a large holed surface area (22,500 cm²) and containing the target insecticide content could still prevent a significant portion of malaria transmission if used by a high percentage of the population (PUBMED:26310788). Additionally, the presence of pyrethroid resistance in mosquito populations can affect the population's adherence to the use of insecticide-treated mosquito nets. Anopheles gambiae s.l. showed resistance to pyrethroids, with very low mortality rates when exposed to deltamethrin, permethrin, and DDT. Culex quinquefasciatus also demonstrated resistance to these insecticides, with mortality rates below 50% (PUBMED:29997072). To address the issue of pyrethroid resistance, new-generation LLINs incorporating the synergist piperonyl butoxide (PBO) with a pyrethroid have been developed. These nets have shown significantly higher mortality against pyrethroid-resistant Anopheles funestus compared to standard pyrethroid-only LLINs, both when unwashed and washed 20 times (PUBMED:35284898). In conclusion, while holes in LLINs do reduce the personal protection they give against blood-feeding, their insecticidal effect is largely independent of hole size, and modelling suggests that even badly torn nets can still curb transmission when community usage is high. The development of LLINs with additional synergists such as PBO can further improve efficacy against pyrethroid-resistant mosquito populations.
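As an aside on the exponential decline reported in PUBMED:26310788, the relationship can be sketched numerically. The Python fragment below is a minimal illustration, not the authors' fitted model: the functional form p(A) = p0 * exp(-k*A) and the rate constant are assumptions calibrated to the single reported point (protection 0.60 at 22,500 cm² of holes):

import math

p0 = 1.0                      # assumed protection of an intact LLIN
k = -math.log(0.60) / 22500   # calibrate so that p(22500) = 0.60

def personal_protection(holed_area_cm2):
    # Exponential decay of personal protection with holed surface area
    return p0 * math.exp(-k * holed_area_cm2)

for area in (0, 790, 1000, 22500):
    print(area, round(personal_protection(area), 2))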
Instruction: Fetal sex and preterm birth: are males at greater risk? Abstracts: abstract_id: PUBMED:36222951 Critical analysis of risk factors for intrapartum fetal death. Purpose: Intrapartum fetal death is devastating for both parents and medical caregivers. The purpose of this study was to evaluate the risk factors for intrapartum fetal death, which often influence management in subsequent pregnancies. Methods: This population-based cohort study included all singleton deliveries between 1991 and 2016 at Soroka University Medical Center. Trends over the years, risk factors for intrapartum fetal death and the effect of day of the week were examined. A generalized estimation equation (GEE) model was used to control for confounders. Results: During the study period 344,536 deliveries were recorded, of which 251 (0.1%) suffered intrapartum fetal death. Rates did not change significantly over the years (p = 0.130, using the chi-square test for trends). Preterm delivery, placental abruption, uterine rupture, shoulder dystocia, congenital malformations, severe preeclampsia and fetal malpresentation were significantly more common in pregnancies complicated by intrapartum death. Using a GEE model, shoulder dystocia (aOR 23.8, 95% CI 9.9-57.3, p < 0.005), uterine rupture (aOR 19.0, 95% CI 7.0-51.4, p < 0.05), preterm delivery (aOR 11.9, 95% CI 8.6-16.5, p < 0.001), placental abruption (aOR 6.2, 95% CI 4.1-9.4, p < 0.001), fetal malpresentation (aOR 3.6, 95% CI 2.6-4.9, p < 0.005) and congenital malformations (aOR 2.6, 95% CI 1.9-3.5, p < 0.05) were all independent risk factors for intrapartum fetal death. Weekend deliveries were not at increased risk of intrapartum fetal death. Conclusion: Several risk factors were identified for intrapartum fetal death, with shoulder dystocia demonstrating the highest risk. Unfortunately, most factors are neither preventable nor predictable, and interventions to decrease the rate of intrapartum fetal death remain elusive. abstract_id: PUBMED:29065730 Fetal hydrops and the risk of severe preeclampsia. Objective: To assess the incidence and severity of preeclampsia in pregnancies complicated by fetal hydrops. Methods: We performed a retrospective cohort study of singleton gestations from 2005 to 2008 in California. The primary predictor was fetal hydrops and the primary outcome was preeclampsia. Selected adverse maternal and neonatal events were assessed as secondary outcomes. Potential confounders examined included fetal anomalies, polyhydramnios, race/ethnicity, nulliparity, chronic hypertension, and gestational or pregestational diabetes mellitus. Results: We identified 337 pregnancies complicated by fetal hydrops; 70.0% had a concomitant fetal anomaly and 39.8% had polyhydramnios. Compared to the general population, hydrops was associated with an increased risk for severe preeclampsia (5.26 versus 0.91%, p < .001) but not mild preeclampsia (2.86 versus 2.02%, p = .29). In multivariable analysis, fetal hydrops remained an independent risk factor for severe preeclampsia (adjusted odds ratio [aOR] 3.13, 95% CI 1.91-5.14). Hydrops was also associated with increased rates of eclampsia, acute renal failure, pulmonary edema, postpartum hemorrhage, blood transfusion, preterm birth, and neonatal death. Conclusions: We find that fetal hydrops is an independent risk factor for severe preeclampsia.
In light of serious concerns for maternal and neonatal health, heightened surveillance for signs and symptoms of severe preeclampsia is warranted in all pregnancies complicated by fetal hydrops. abstract_id: PUBMED:16175748 Comparison of fetal behavior in low- and high-risk pregnancies. Meta-analyses were conducted on archival data of human fetal behavior to identify differential behavior among high-risk fetuses in pregnancies complicated by threatened preterm delivery, maternal hypertension or diabetes compared with low-risk fetuses in uneventful pregnancies, delivering as healthy, full-term infants. Data for a total of 493 fetuses (260 high risk, 233 low risk) from 23 weeks' gestation to term who participated in a study using a standardized protocol including observations of spontaneous and auditory-induced behavior were retrieved from our laboratory database. There were no differences in spontaneous behaviors when scored using clinical criteria for the nonstress test and biophysical profile; however, there were differences in the magnitude of the behaviors measured in the tests. Developmental differences were observed between those threatening to deliver early and the fetuses of hypertensive and diabetic mothers. The latter two groups differed little from one another but differed from low-risk fetuses in their response to auditory stimulation. We concluded that differences in behavior among high-risk groups suggest that atypical fetal behaviors may represent adaptation to condition-specific insult rather than a generalized response to insult per se. The finding that high-risk fetuses showed atypical responses to auditory stimuli indicates a need to examine the relation between fetal auditory function and later language acquisition. abstract_id: PUBMED:23545793 Deviance in fetal growth and risk of autism spectrum disorder. Objective: Understanding the relationship between fetal growth and autism spectrum disorder (ASD) is likely to advance the search for genetic and nongenetic causes of ASD. The authors explored the associations between fetal growth, gestational age, and ASD with and without comorbid intellectual disability in a Scandinavian study population. Method: The authors conducted a matched nested case-control study within the Stockholm Youth Cohort that included all children ages 0-17 who resided in Stockholm County from 2001 to 2007 (N=589,114). The authors identified 4,283 children with ASD: 1,755 with intellectual disability and 2,528 without, and they selected 36,588 age- and sex-matched comparison subjects. ASD case subjects were ascertained from unique identifiers assigned to all Swedish residents and linkage with official registers covering all pathways of assessment or care of ASD in Stockholm County. The authors calculated z scores of deviance in fetal growth from a reference curve using records from the national Swedish Medical Birth Registry, which included ultrasound dating of gestational age as well as birth weight. Crude and adjusted odds ratios for ASD, ASD with intellectual disability, and ASD without intellectual disability were the main outcome measures. Results: ASD risk increased with fetal growth 1.50 standard deviations below and >2.00 standard deviations above the mean for gestational age; the greatest risk was for fetal growth that was less than 2.00 standard deviations below the mean (adjusted odds ratio=1.70; 95% CI=1.44-2.01) or greater than 2.00 standard deviations above the mean (adjusted odds ratio=1.50; 95% CI=1.27-1.77).
The same overall pattern was observed for ASD with and without intellectual disabilities. However, poor fetal growth (i.e., growth below the mean) was more strongly associated with ASD with intellectual disabilities than without. Regardless of fetal growth, preterm birth increased ASD risk. Conclusions: Deviance in fetal growth at either distributional extreme may be a significant antecedent to the development of ASD through genetic and/or nongenetic mechanisms. abstract_id: PUBMED:12411022 Clinical value of remote fetal monitoring network in high-risk pregnancy. Objective: To investigate the clinical value of a remote fetal monitoring network in high-risk pregnancy. Methods: The non-stress test (NST) was carried out in 116 high-risk gravidas via a remote fetal monitoring network (study group). One hundred high-risk pregnant women, serving as the control group, were monitored by daily fetal movement counting and regular NST check-ups in hospital. Results of fetal monitoring, neonatal prognosis and cesarean section rates were compared between the study and control groups. Results: NST was applied more frequently in the study group (9.2 +/- 4.9 times) than in the control group (2.6 +/- 1.3 times) (P < 0.001), but the costs were similar. The incidences of neonatal asphyxia (10.3%) and preterm birth (12.6%) in the study group were lower than those in the control group (20.8% and 24.5%) (P < 0.05). In the study group, there was a significant difference in the rate of poor neonatal prognosis between the normal and abnormal NST groups (64.8% and 17.7%) (P < 0.05). There was no significant difference in the rates of cesarean section (75.9% and 75.0%, respectively) or of poor neonatal prognosis after cesarean section with abnormal NST (71.7% and 60.6%, respectively) between the study and control groups (P > 0.05). Conclusions: A remote fetal monitoring network can be used to improve the quality of fetal monitoring, especially in primary hospitals. It decreased the rates of neonatal asphyxia and preterm delivery without increasing the cesarean section rate, and it offers a reliable and economical method of self-monitoring for high-risk pregnant women. abstract_id: PUBMED:27370790 Fetal Growth and the Risk of Spontaneous Preterm Birth in a Prospective Cohort Study of Nulliparous Women. Previous studies have suggested an association between fetal growth restriction and the risk of spontaneous preterm birth (sPTB). However, addressing this association is methodologically challenging. We conducted a prospective cohort study of nulliparous women with a singleton pregnancy in Cambridge, United Kingdom (2008-2012). Ultrasonic fetal biometry was performed at 20 weeks of gestation as per routine clinical care. Participants also had blinded research ultrasonography performed at approximately 28 weeks. Biometric measurements were expressed as gestational-age-adjusted z scores. Fetal growth velocity was quantified by change in z score between 20 weeks and 28 weeks. Risk of sPTB, defined as delivery at ≥28 weeks and <37 weeks associated with labor in the absence of induction, was analyzed using cause-specific Cox regression. Of 3,892 women, 98 (2.5%) had sPTB. When compared with the other decile groups, the lowest decile of growth velocity of the fetal femur between 20 and 28 weeks was associated with increased risk of sPTB (hazard ratio = 2.37, 95% confidence interval: 1.43, 3.93; P < 0.001). Adjustment for maternal characteristics had no material effect (hazard ratio = 2.50, 95% confidence interval: 1.50, 4.14; P < 0.001).
There were no significant associations between other fetal measurements and risk of sPTB. To conclude, slow growth velocity of the fetal femur is associated with an increased risk of sPTB. abstract_id: PUBMED:12384267 The risk of fetal loss associated with Warfarin anticoagulation. Objectives: To examine if fetal risks associated with Warfarin anticoagulation during pregnancy may have been overestimated at the time the drug was contraindicated during pregnancy. Methods: Seven case series with the same therapeutic objective for Warfarin anticoagulation published after 1980 were identified. The frequencies of fetal complications were calculated and compared with those of the 1980 compilation. Results: The frequencies of embryopathy, stillbirths, and neonatal deaths were similar to the 1980 database, but higher with respect to spontaneous abortions (24.1 vs. 8.6%) and premature deliveries (13.9 vs. 4.6%), and lower regarding live births (73.3 vs. 83.7%). Conclusions: Fetal risks associated with Warfarin anticoagulation during pregnancy have not been overestimated. Warfarin should not be given in cases where other anticoagulants do not increase the risk for the expectant mother. abstract_id: PUBMED:9386023 Antenatal corticosteroid therapy and fetal behaviour: a randomised study of the effects of betamethasone and dexamethasone. Objectives: To compare the effects of maternal betamethasone and dexamethasone administration on fetal behaviour and fetal heart rate variation. Design: A prospective randomised study of 60 women at increased risk of preterm delivery. Outcome Measures: Fetal heart rate and its variation, the incidence of fetal body and breathing movements. Setting: Obstetric unit, University Hospital, Utrecht, The Netherlands. Results: Following maternal betamethasone administration (day 2), fetal heart rate variation was reduced by 19% and fetal body and breathing movements by 49% and 85%, respectively. After dexamethasone treatment there was a significant increase in short-term fetal heart rate variation (24%) on day 1. All values returned to baseline on day 4, indicating that no fetal deterioration had occurred during the course of the study period. Conclusion: Betamethasone, with probably the greater beneficial effect, has more effects on fetal behaviour and fetal heart rate variation than dexamethasone. When assessing fetal condition, due account needs to be taken of these effects. abstract_id: PUBMED:33359152 Joint Associations of Maternal-Fetal APOL1 Genotypes and Maternal Country of Origin With Preeclampsia Risk. Rationale & Objectives: Preeclampsia, which disproportionately affects Black women, is a leading cause of preterm delivery and risk for future hypertension and chronic kidney disease (CKD). Apolipoprotein L1 (APOL1) kidney risk alleles, common among Black individuals, contribute substantially to CKD disparities. Given the strong link between preeclampsia and CKD, we investigated whether maternal and fetal APOL1 risk alleles can jointly influence preeclampsia risk, and explored potential modifiers of the association between APOL1 and preeclampsia. Study Design: Nested case-control study. Setting & Participants: 426 Black mother-infant pairs (275 African Americans and 151 Haitians) from the Boston Birth Cohort. Exposure: Maternal and fetal APOL1 risk alleles. Outcomes: Preeclampsia.
Analytical Approach: Logistic regression models with adjustment for demographic characteristics were applied to analyze associations between fetal and maternal APOL1 risk alleles and risk of preeclampsia and to investigate the effects of modification by maternal country of origin. Results: Fetal APOL1 risk alleles tended to be associated with an increased risk of preeclampsia, which was not statistically significant in the total genotyped population. However, this association was modified by maternal country of origin (P<0.05 for interaction tests): fetal APOL1 risk alleles were significantly associated with an increased risk of preeclampsia among African Americans under recessive (odds ratio [OR], 3.6 [95% CI, 1.3-9.7]; P=0.01) and additive (OR, 1.7 [95% CI, 1.1-2.6]; P=0.01) genetic models but not in Haitian Americans. Also, maternal-fetal genotype discordance at the APOL1 locus was associated with a 2.6-fold higher risk of preeclampsia (P<0.001) in African Americans. Limitations: Limited sample size in stratified analyses; self-reported maternal country of origin; pre-pregnancy estimated glomerular filtration rate (eGFR) and proteinuria data in mothers were not collected; unmeasured confounding social and/or environmental factors; no replication study. Conclusions: This study supports the hypothesis that fetal APOL1 kidney risk alleles are associated with increased risk for preeclampsia in a recessive mode of inheritance in African Americans and suggests that maternal-fetal genotype discordance is also associated with this risk. These conclusions underscore the need to better understand maternal-fetal interactions and their genetic and environmental factors as contributors to ethnic disparities in preeclampsia. abstract_id: PUBMED:11584485 Risk factors for antepartum fetal death. Objective: To determine the demographic, maternal, pregnancy-related and fetal risk factors for antepartum fetal death (APFD). Study Design: From our perinatal database between the years 1990 and 1997, 68,870 singleton birth files were analyzed. Fetuses weighing <1,000 g at birth and those with structural malformations and/or known chromosomal anomalies were excluded from the study. In order to determine independent factors contributing to APFD, a multiple logistic regression model was constructed. Results: During the study period there were 246 cases of APFD (3.6 per 1,000 births). The following obstetric factors significantly correlated with APFD in a multiple logistic regression model: preterm deliveries, small size for gestational age (SGA), multiparity (>5 deliveries), oligohydramnios, placental abruption, umbilical cord complications (cord around the neck and true knot of cord), pathologic presentations (nonvertex) and meconium-stained amniotic fluid. APFD was not significantly associated with advanced maternal age. Conclusion: APFD was significantly associated with several risk factors. Placental and umbilical cord pathologies might be the direct cause of death. Grand multiparity, oligohydramnios, meconium-stained amniotic fluid, pathologic presentations and suspected SGA should be carefully evaluated during pregnancy in order to decrease the incidence of APFD. Answer: The provided abstracts do not directly address the question of whether fetal sex is a risk factor for preterm birth.
None of the abstracts mention fetal sex as a variable in their studies of risk factors for various fetal and pregnancy outcomes, including intrapartum fetal death (PUBMED:36222951), severe preeclampsia (PUBMED:29065730), fetal behavior in high-risk pregnancies (PUBMED:16175748), autism spectrum disorder (PUBMED:23545793), remote fetal monitoring in high-risk pregnancy (PUBMED:12411022), spontaneous preterm birth (PUBMED:27370790), fetal loss associated with Warfarin anticoagulation (PUBMED:12384267), effects of antenatal corticosteroid therapy on fetal behavior (PUBMED:9386023), maternal-fetal APOL1 genotypes and preeclampsia risk (PUBMED:33359152), and antepartum fetal death (PUBMED:11584485). Therefore, based on the information provided in these abstracts, it is not possible to determine whether males are at greater risk of preterm birth compared to females. Additional research that specifically investigates the relationship between fetal sex and the risk of preterm birth would be required to answer this question.
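A side note for readers: the adjusted odds ratios (aOR) quoted throughout these abstracts build on the crude odds ratio, which can be computed from a 2x2 exposure/outcome table as in the hedged Python sketch below (Woolf method for the confidence interval; the counts are invented, and real aORs additionally adjust for confounders via regression):

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a = exposed cases, b = exposed non-cases,
    # c = unexposed cases, d = unexposed non-cases
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(a=30, b=70, c=10, d=90))  # hypothetical: ~(3.86, 1.77, 8.42)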
Instruction: Do subclinical epileptiform discharges really affect short-term memory in children? Abstracts: abstract_id: PUBMED:16610599 Do subclinical epileptiform discharges really affect short-term memory in children? Objective: Deficits in concentration, specific developmental disorders, and behaviour problems often impair the educational abilities of children and adolescents with epilepsy, even if the subjects are seizure-free. The impact of subclinical epileptiform discharges that persist despite adequate antiepileptic treatment is not yet understood. Some studies suggest that these lead to simultaneous transitory cognitive impairment, thereby affecting short-term memory functions. This study examines the impact of subclinical discharges on memory functions. Method: 40 seizure-free children (10.3 +/- 3.5 years) with subclinical epileptiform discharges were examined by means of computerized EEG-coupled tests focussing on visuo-spatial and verbal short-term memory, in order to assess the temporal relation between discharges and test performance. Results: No significant differences in cognitive performance were detected in phases with and without epileptiform discharges; neither discharges >1.5 s nor multiple discharges within a test impaired performance. Moreover, performance was independent of the localisation of discharges and of the time of their occurrence during the test. Conclusions: The present study showed no association between subclinical epileptiform discharges and error rates as indicators of transient cognitive impairment in visuo-spatial and verbal short-term memory. Hence, other mechanisms have to be taken into account to explain the known cognitive deficits in patients with epilepsy. abstract_id: PUBMED:37196736 Detection of subclinical epileptiform discharges in Alzheimer's disease using long-term outpatient EEG monitoring. Background: In patients with Alzheimer's disease (AD) without clinical seizures, up to half have epileptiform discharges on long-term in-patient electroencephalography (EEG) recordings. Long-term in-patient monitoring is obtrusive and expensive compared with outpatient monitoring. No studies have so far investigated whether long-term outpatient EEG monitoring is able to identify epileptiform discharges in AD. Our aim is to investigate whether epileptiform discharges as measured with ear-EEG are more common in patients with AD compared to healthy elderly controls (HC). Methods: In this longitudinal observational study, 24 patients with mild to moderate AD and 15 age-matched HC were included in the analysis. Patients with AD underwent up to three ear-EEG recordings, each lasting up to two days, within 6 months. Results: The first recording was defined as the baseline recording. At baseline, epileptiform discharges were detected in 75.0% of patients with AD and in 46.7% of HC (p-value = 0.073). The spike frequency (spikes or sharp waves/24 h) was significantly higher in patients with AD as compared to HC, with a risk ratio of 2.90 (CI: 1.77-5.01, p < 0.001). Most patients with AD (91.7%) showed epileptiform discharges when combining all ear-EEG recordings. Conclusions: Long-term ear-EEG monitoring detects epileptiform discharges in most patients with AD, with a three-fold increased spike frequency compared to HC, which most likely originates from the temporal lobes. Since most patients showed epileptiform discharges with multiple recordings, elevated spike frequency should be considered a marker of hyperexcitability in AD.
abstract_id: PUBMED:33889945 Spatiotemporal dynamics between interictal epileptiform discharges and ripples during associative memory processing. We describe the spatiotemporal course of cortical high-gamma activity, hippocampal ripple activity and interictal epileptiform discharges during an associative memory task in 15 epilepsy patients undergoing invasive EEG. Successful encoding trials manifested significantly greater high-gamma activity in hippocampus and frontal regions. Successful cued recall trials manifested sustained high-gamma activity in hippocampus compared to failed responses. Hippocampal ripple rates were greater during successful encoding and retrieval trials. Interictal epileptiform discharges during encoding were associated with 15% decreased odds of remembering in hippocampus (95% confidence interval 6-23%). Hippocampal interictal epileptiform discharges during retrieval predicted 25% decreased odds of remembering (15-33%). Odds of remembering were reduced by 25-52% if interictal epileptiform discharges occurred during the 500-2000 ms window of encoding or by 41% during retrieval. During encoding and retrieval, hippocampal interictal epileptiform discharges were followed by a transient decrease in ripple rate. We hypothesize that interictal epileptiform discharges impair associative memory in a regionally and temporally specific manner by decreasing physiological hippocampal ripples necessary for effective encoding and recall. Because dynamic memory impairment arises from pathological interictal epileptiform discharge events competing with physiological ripples, interictal epileptiform discharges represent a promising therapeutic target for memory remediation in patients with epilepsy. abstract_id: PUBMED:36108521 Epileptiform EEG discharges during sevoflurane anesthesia in children: A meta-analysis. Objective: To investigate the overall incidence and associated factors of epileptiform discharges in children during sevoflurane anesthesia. Methods: Our group systematically searched PubMed, the Cochrane Library (CENTRAL) and EMBASE for relevant trials from their inception until September 2020. The primary endpoint was the incidence of epileptiform discharges during sevoflurane induction. The secondary endpoints were the incidence of different types of epileptiform discharges, factors associated with these epileptiform events, and other adverse events such as seizure-like movements. Results: After screening of 713 records, eleven studies involving 448 participants were included in the final analysis. Meta-analysis indicated that the overall incidence of epileptiform EEG discharges was 38.1% (95% confidence interval [CI], 19.1%-59.2%) during sevoflurane anesthesia in children. Subgroup analysis showed that the incidence of these EEG patterns was lower when participants were induced using a low initial concentration of sevoflurane, compared with a high initial concentration (1.7%, 95% CI, 0.0% to 8.4% versus 47.7%, 95% CI, 25.5% to 70.3%, P < 0.05). A longer exposure (>3 min) to high-concentration sevoflurane during induction showed a higher rate of epileptiform discharges than a shorter exposure (≤3 min) (48.4%, 95% CI, 20.1% to 77.3% versus 5.7%, 95% CI, 0.00% to 23.5%; P < 0.05).
No significant difference in the incidence of epileptiform discharges was observed in subgroup analyses of the addition of nitrous oxide (69.2%, 95% CI, 34.0% to 95.7% versus 41.3%, 95% CI, 15.6% to 69.7%, P > 0.05) and the type of EEG monitoring (26.9%, 95% CI, 3.8% to 60.7% versus 53.1%, 95% CI, 25.4% to 79.8%, P > 0.05). Conclusions: The incidence of epileptiform EEG events in children during sevoflurane anesthesia varied from 19.1% to 59.2%. The low-initial-concentration technique and a shorter exposure time to high-concentration sevoflurane may be associated with a decreased incidence of these epileptiform discharges in EEG. Significance: Epileptiform EEG discharges during sevoflurane anesthesia in children should arouse clinicians' attention. The use of the low-initial-concentration technique and a shorter exposure time to high-concentration sevoflurane may be associated with a lower occurrence of these paradoxical events. abstract_id: PUBMED:3504400 Interactions of epileptiform EEG discharges and cognition. Interactions of subclinical epileptiform EEG discharges and cognitive functioning were studied in 91 patients with epilepsy, under video monitoring, to exclude overt clinical ictal phenomena during the investigations. A short-term memory test was presented as an engaging television game, in two versions, using spatial or verbal material. Transitory cognitive impairment (TCI) was demonstrated during EEG discharges, either generalised or focal, in half the patients. Right-sided discharges were associated with impairment of the spatial task and left-sided with errors on the verbal version. TCI was demonstrable only when the discharge occurred during presentation of the material to be recalled. Increasing task difficulty, up to the patients' limit of performance, was associated with increasing susceptibility to TCI. Sixteen patients showed a significant effect of task on discharge rate, but in some this increased during the stimulus and in others while responding. There are thus complex interactions of epileptiform EEG activity on cognitive function and vice versa. abstract_id: PUBMED:34486107 Features of intracranial interictal epileptiform discharges associated with memory encoding. Objective: Interictal epileptiform discharges (IEDs) were shown to be associated with cognitive impairment in persons with epilepsy. Previous studies indicated that IED rate, location, timing, and spatial relation to the seizure onset zone could predict an IED's impact on memory encoding and retrieval if they occurred in lateral temporal, mesial temporal, or parietal regions. In this study, we explore the influence that other IED properties (e.g., amplitude, duration, white matter classification) have on memory performance. We were specifically interested in investigating the influence that lateral temporal IEDs have on memory encoding. Methods: Two hundred sixty-one subjects with medication-refractory epilepsy undergoing intracranial electroencephalographic monitoring performed multiple sessions of a delayed free-recall task (n = 671). Generalized linear mixed models were utilized to examine the relationship between IED properties and memory performance. Results: We found that increased IED rate, IEDs propagating in white matter, and IEDs localized to the left middle temporal region were associated with poorer memory performance.
For lateral temporal IEDs, we observed a significant interaction between IED white matter categorization and amplitude, where IEDs with an increased amplitude and white matter propagation were associated with reduced memory performance. Additionally, changes in alpha power after an IED showed a significant positive correlation with memory performance. Significance: Our results suggest that IED properties may be useful for predicting the impact an IED has on memory encoding. We provide an essential step toward understanding pathological versus potentially beneficial interictal epileptiform activity. abstract_id: PUBMED:28919764 Levetiracetam is associated with decrease in subclinical epileptiform discharges and improved cognitive functions in pediatric patients with autism spectrum disorder. Objective: Subclinical epileptiform discharges (SEDs) are common in pediatric patients with autism spectrum disorder (ASD), but the effect of antiepileptic drugs on SEDs in ASD remains inconclusive. This physician-blinded, prospective, randomized controlled trial investigated an association between the anticonvulsant drug levetiracetam and SEDs in children with ASD. Methods: A total of 70 children with ASD (4-6 years) and SEDs identified by electroencephalogram were randomly divided into two equal groups to receive either levetiracetam and educational training (treatment group) or educational training only (control). At baseline and after 6 months of treatment, the following scales were used to assess each individual's behavioral and cognitive functions: the Chinese version of the Psychoeducational Profile - third edition (PEP-3), Childhood Autism Rating Scale (CARS), and Autism Behavior Checklist (ABC). A 24-hour electroencephalogram was recorded on admission (baseline) and at follow-up. The degree of satisfaction of each patient was also evaluated. Results: Relative to baseline, at the 6-month follow-up, the PEP-3, CARS, and ABC scores were significantly improved in both the treatment and control groups. At the 6-month follow-up, the PEP-3 scores of the treatment group were significantly higher than those of the control, whereas the CARS and ABC scores were significantly lower, and the rate of electroencephalographic normalization was significantly higher in the treatment group. Conclusion: Levetiracetam appears to be effective for controlling SEDs in pediatric patients with ASD and was also associated with improved behavioral and cognitive functions. abstract_id: PUBMED:29913339 Incidence of epileptiform discharges in children during induction of anaesthesia using Propofol versus Sevoflurane. Objective: In pediatric patients, anaesthesia induction is often performed with intravenous Propofol or Sevoflurane inhalation. Although epileptiform discharges have been observed during inductions with Sevoflurane, their occurrence has not been investigated for i.v. Propofol inductions. The aim of this study is to compare the incidence of epileptiform discharges in children during anaesthesia induction using Propofol versus Sevoflurane. Methods: Prospective, observational cohort study in children aged 0.5-8 years undergoing elective surgery. Children were anaesthetized with either Propofol or Sevoflurane. Bi-frontal electroencephalogram electrodes were placed before the start of anaesthesia. Visual electroencephalogram analysis was performed from the start of anesthetic agent administration until intubation to identify epileptiform patterns, i.e.
delta with spikes; rhythmic polyspikes; periodic, epileptiform discharges; or suppression with spikes. Results: 39 children were anaesthetized with Propofol, and 18 children with Sevoflurane. Epileptiform discharges were seen in 36% of the children in the Propofol group, versus 67% in the Sevoflurane group (p = 0.03). Incidence of the distinct types of epileptiform discharge differed for periodic, epileptiform discharges (Sevoflurane group 39% vs. Propofol group 3%; p < 0.001). A higher concentration of Remifentanil (≥0.15 µg/kg/min) was associated with less frequent epileptiform discharges (Exp 5.8; 95% CI 1.6-21.2; p = 0.008). Conclusions: Propofol i.v. induction of anaesthesia in children triggers epileptiform discharges, albeit to a lesser extent than Sevoflurane does. Significance: Presuming that epileptiform discharges have an impact on postoperative brain function, it is advisable to use Propofol rather than Sevoflurane, and a higher level of Remifentanil, for anaesthesia induction in children. abstract_id: PUBMED:9422252 Transient cognitive impairment during subclinical epileptiform electroencephalographic discharges. Short epileptiform discharges of 10 seconds or less in children with or without epilepsy are not noticed by clinical observation of the child. These so-called subclinical discharges can, however, disturb cognition and influence daily performance at school and at home. Several studies have been performed to show the negative effect of these epileptiform electroencephalographic (EEG) discharges on choice reaction time tests, short-term memory tests (verbal and nonverbal), and on school performance tasks such as reading, writing, and arithmetic. About one-half of children with subclinical discharges will show transient cognitive impairment during these discharges; those with predominantly left-sided discharges are poorer on reading and those with right-sided discharges are poorer on visual spatial tasks. Suppression of the EEG discharges with the antiepileptic drug valproic acid improved cognitive performance in two of six children; failure was because of side effects of the drug or insufficient suppression of the discharges. In individual cases, suppression of the epileptiform EEG discharges can be beneficial. Evaluation of this effect is necessary by standardizing EEG and cognitive performance measures. Furthermore, the phenomenon of transient cognitive impairment (TCI) must be taken into account when evaluating results of psychological tests. abstract_id: PUBMED:37590406 Interictal epileptiform discharges affect memory in an Alzheimer's disease mouse model. Interictal epileptiform discharges (IEDs) are transient abnormal electrophysiological events commonly observed in epilepsy patients but are also present in other neurological diseases, such as Alzheimer's disease (AD). Understanding the role IEDs have on the hippocampal circuit is important for our understanding of the cognitive deficits seen in epilepsy and AD. We characterize and compare the IEDs of human epilepsy patients from microwire hippocampal recordings with those of AD transgenic mice with implanted multilayer hippocampal silicon probes. Both the local field potential features and firing patterns of pyramidal cells and interneurons were similar in the mouse and human. We found that as IEDs emerged from the CA3-1 circuits, they recruited pyramidal cells and silenced interneurons, followed by post-IED suppression.
IEDs suppressed the incidence of physiological sharp-wave ripples, altered their properties, and interfered with the replay of place field sequences in a maze. In addition, IEDs in AD mice inversely correlated with daily memory performance. Together, our work implies that IEDs may present a common and epilepsy-independent phenomenon in neurodegenerative diseases that perturbs hippocampal-cortical communication and interferes with memory. Answer: The impact of subclinical epileptiform discharges on short-term memory in children has been a subject of investigation, with studies yielding varying results. One study found no significant differences in cognitive performance in phases with and without epileptiform discharges, suggesting that subclinical epileptiform discharges do not associate with transient cognitive impairment in visuo-spatial and verbal short-term memory in seizure-free children with these discharges (PUBMED:16610599). However, another study demonstrated that transient cognitive impairment (TCI) occurred during EEG discharges in half of the children with subclinical discharges, with right-sided discharges associated with impairment of spatial tasks and left-sided with errors on verbal tasks (PUBMED:9422252). This indicates that subclinical discharges can indeed disturb cognition and influence daily performance. Furthermore, a study on interactions of epileptiform EEG discharges and cognition in patients with epilepsy under video monitoring showed that TCI was only demonstrable when the discharge occurred during the presentation of material to be recalled, and that increasing task difficulty increased susceptibility to TCI (PUBMED:3504400). This supports the idea that subclinical epileptiform discharges can affect memory functions. In summary, while one study did not find an association between subclinical epileptiform discharges and short-term memory impairment in children (PUBMED:16610599), other research suggests that these discharges can indeed affect cognitive functions, including short-term memory, depending on their timing, location, and the cognitive demands of the task at hand (PUBMED:9422252; PUBMED:3504400; PUBMED:33889945).
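For completeness, the risk ratio with a 95% confidence interval, the statistic behind the spike-frequency comparison of 2.90 (CI 1.77-5.01) in PUBMED:37196736, is computed on the log scale. Below is a hedged Python sketch with invented counts (the published figure came from the study's own data, not these numbers):

import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    # Risk ratio of group A vs group B with a Wald CI on the log scale
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

print(risk_ratio_ci(events_a=18, n_a=24, events_b=7, n_b=15))  # ~(1.61, 0.89, 2.89)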
Instruction: Fasting serum leptin levels in the analysis of body mass index cut-off values: are they useful for overweight screening in children and adolescents? Abstracts: abstract_id: PUBMED:9877255 Fasting serum leptin levels in the analysis of body mass index cut-off values: are they useful for overweight screening in children and adolescents? A school population-based survey in three provinces of central Italy. Objective: Body mass index (BMI) was determined in a population of school students from three provinces of central Italy. Fasting serum leptin concentrations were assayed in a large number of subjects from the same area, to determine their distribution as plotted against the standard deviation score (z-score) of BMI. Design, Subjects And Measurements: Height and weight were recorded from 31170 subjects (16175 male and 14995 female), aged 3-18 y, to construct BMI charts of children and adolescents from central Italy. Percentiles and z-scores were calculated using the LMS method of Cole. Serum leptin concentrations were assayed in 1929 subjects (996 male and 933 female) after overnight fasting. Results: BMI percentiles of central Italy were higher than those from standards of other European and USA populations. When plotted against the z-score of BMI, serum leptin values were distributed according to an exponential curve, showing a steep pattern and a wide distribution as BMI values increased. The hypothesis of the existence of two subgroups, based on a different relation between leptin and BMI, was verified and a separation point between the two subgroups was identified using cluster analysis, discriminant analysis and a novel method developed by our group, hereafter referred to as 'regression clustering'. This method allows identification of the value of the independent variable (z-score of BMI) which can be taken as a separation point. This analysis provided the best results and indicated the following separation points: central Italy standard, z-score = 0.72 (76.4th percentile) for males and z-score = 0.69 (75.5th percentile) for females; French standard (the one suggested for a European population by the European Childhood Obesity Group, ECOG), z-score = 1.46 (92.8th percentile) for males and z-score = 1.96 (97.5th percentile) for females. Similar but variable results were obtained when the same analysis was performed on serum leptin concentration, subdivided according to pubertal development (stage I, stage II-III, stage IV-V). Conclusions: Children and adolescents from central Italy had greater BMI percentiles when compared to other European populations. Fasting serum leptin concentrations showed a distribution pattern related to z-score, thus allowing identification of two different subgroups. The z-scores of BMI identified as separation points indicated a trend in leptin production by adipocytes that could be taken as an indicator of significant increases in fat mass. This study proposes criteria and a statistical approach that could be useful in the identification of BMI cut-off values when screening children and adolescents for overweight. abstract_id: PUBMED:9625360 Serum leptin levels in children and adolescents with insulin-dependent diabetes mellitus in relation to metabolic control and body mass index. The ob protein, termed leptin, is produced by adipocytes and is thought to act as an afferent satiety signal regulating weight through suppressing appetite and stimulating energy expenditure in humans and/or rodents.
Insulin has been found to be a potent stimulator of leptin expression in rodents. It is unclear at present whether this insulin action is a direct or an indirect effect. To investigate whether leptin concentrations in children and adolescents with type 1 diabetes (IDDM) were related to metabolic status, body weight, body mass index and insulin treatment, we have measured leptin concentrations in serum from 13 newly diagnosed IDDM patients before the beginning of insulin treatment (8 girls, 5 boys, aged 4.7-17.5 years) and in 134 patients with IDDM during treatment (64 girls, 70 boys, aged 2.6-20.1 years) using a specific radioimmunoassay. The data from patients with diabetes were compared with normative data that were derived from a large cohort of healthy children and adolescents. Serum from children with newly diagnosed diabetes had significantly lower levels of leptin (mean 1.28 +/- 1.60 ng/ml, range 0.14-6.13 ng/ml) compared with healthy children (n=710) (mean 2.2 ng/ml, range 0.26-14.4 ng/ml) and compared with insulin-treated children and adolescents (mean 5.18 +/- 5.48 ng/ml, range 0.26-29.77 ng/ml) (P < 0.0001) even after adjustment for gender and body mass index (BMI). Serum leptin levels in patients with IDDM were significantly correlated with BMI (r=0.42, P < 0.0001). Multiple regression analysis showed that age and BMI were significantly correlated with leptin levels, while duration of diabetes, mean HbA1c levels, insulin dose and plasma glucose, triglyceride and cholesterol levels were not. Females had higher serum leptin concentrations than males even when adjusted for BMI (P < 0.0001). Surprisingly and most importantly, leptin levels in insulin-treated young adult (Tanner stage 5) patients were significantly higher than values found in the healthy nondiabetic reference population when adjusted for sex, Tanner stage and BMI. These findings suggest that leptin levels in IDDM patients show a similar dependency on adipose tissue and age as in healthy, normal children. The data provide evidence that insulin may be of importance as a regulator of serum leptin levels in vivo, not only in rodents but also in humans. It is hypothesized that the elevated BMI-adjusted leptin levels in adolescents with IDDM could indicate either that these patients may be oversubstituted by the intensified insulin therapy that they are receiving or that their body composition and body fat content may differ from that of healthy adolescents in the sense that they have a relative increase in fat mass. abstract_id: PUBMED:28728946 Serum galectin-1 levels are positively correlated with body fat and negatively with fasting glucose in obese children. Galectin-1, a recently identified peptide, is primarily released from the adipose tissue. Although galectin-1 was shown to have an anti-inflammatory effect, its specific function is not clearly understood. We aimed to evaluate the relationship of serum galectin-1 levels with clinical and laboratory parameters in childhood obesity. A total of 45 obese children (mean age: 12.1 ± 3.1 years) and 35 normal-weight children (mean age: 11.8 ± 2.2 years) were enrolled. Clinical [body mass index (BMI), waist circumference (WC), percentage of body fat and blood pressure] and biochemical [glucose, insulin, lipids, galectin-1, high-sensitive C-reactive protein (hsCRP) and leptin levels] parameters were assessed.
Serum galectin-1, hsCRP and leptin levels were significantly higher in obese children than those in normal-weight children (12.4 vs 10.2 ng/mL, p < 0.001; 3.28 vs 0.63 mg/L, p < 0.001; 8.3 vs 1.2 ng/mL, p < 0.001, respectively). In obese children, galectin-1 levels correlated negatively with fasting glucose (r=-0.346, p=0.020) and positively with fat mass (r=0.326, p=0.026) and WC standard deviation score (SDS) (r=0.451, p=0.002). The multivariate regression analysis demonstrated that serum galectin-1 levels were significantly associated with fasting glucose and WC SDS. This study showed that obese children had significantly higher galectin-1 levels, in proportion to fat mass, than healthy children, which may be interpreted as a compensatory increase in an attempt to improve glucose metabolism. abstract_id: PUBMED:9347406 Leptin serum levels in normal weight and obese children and adolescents: relationship with age, sex, pubertal development, body mass index and insulin. Objective: Leptin, the product of the ob gene, is present in higher concentrations in blood of obese subjects than of lean subjects. There is scarce information on the role of leptin in the pathogenesis of human obesity and little is known about leptin serum levels in obese children. Design, Subjects And Measurements: To evaluate the influences of age, sex, pubertal development and weight excess on serum leptin levels, we have studied 390 obese subjects (OS) and 320 normal weight subjects (NWS) aged 5-16 y. Fasting insulin concentrations were assayed in NWS, and an oral glucose tolerance test was carried out in OS and the total insulin area under the curve (TIA) was calculated. Results: Log-transformed values of leptin serum concentrations appeared to be distributed according to an acceptable Gaussian pattern. As observed in adults, serum leptin concentrations in children and adolescents were also increased (4-5 times) in OS as compared to NWS. In both males and females, subdivided according to pubertal stages, serum leptin varied significantly in stage IV-V as compared to the lower stages, with a reduction in males and an increase in females. On comparing the two sexes, greater serum leptin concentrations were observed in females of both NWS and OS. A significant linear correlation was found in both groups, subdivided according to sex and pubertal stage, between log values of serum leptin and standard deviation scores (SDS) of body mass index (BMI), and log-transformed relative body weight (RBW). Using partial correlation analysis in subjects subdivided according to sex and pubertal stages, log values of serum leptin and fasting insulin values, adjusted by age and SDS of BMI, correlated significantly, with a weaker correlation in males than in females. In OS, the leptin concentrations correlated better with TIA than with fasting insulin. A weight reduction program (WRP) was carried out in 141 OS and significant reductions of serum leptin and fasting insulin were observed, showing a reduction of RBW. There was a correlation between the reduction of RBW and of serum leptin, but not of fasting insulin. No variation was found in non-responsive OS. RBW reduction correlated with leptin, but not with insulin (fasting and TIA), evaluated before the therapeutic program started. Conclusion: As observed in adults, obese children and adolescents have higher serum leptin concentrations. However, several conditions should be taken into account when evaluating leptin concentrations in children.
There are differences, independent of BMI, relative to pubertal stage and sex, females having greater leptin concentrations than males. There is evidence of a possible role for leptin in the effectiveness of a weight reduction program in OS. abstract_id: PUBMED:16180336 Homeostatic model assessment (HOMA) index cut-off values to identify the metabolic syndrome in children. The aim of the study was to establish the best cut-off value for the homeostatic model assessment (HOMA) index in identifying children and adolescents with the metabolic syndrome. The study included 72 non-obese and 68 obese children aged 7 to 16 years. Obesity was defined using the criteria proposed by Cole et al., with waist circumference, systolic blood pressure, diastolic blood pressure and serum values of glucose, uric acid, fasting insulin, leptin, triglycerides and HDL-cholesterol included as metabolic syndrome variables. Children were considered as having the metabolic syndrome when four or more characteristics showed abnormal values. The HOMA index was calculated as the product of the fasting plasma insulin level (microU/mL) and the fasting plasma glucose level (mmol/L), divided by 22.5. HOMA index cut-offs from the 5th to the 95th percentile were used. A receiver operating characteristic (ROC) curve was generated using the different HOMA cut-offs for the screening of the metabolic syndrome. The areas under the ROC curve, 95% confidence intervals, and the point on the ROC curve closest to 1 were calculated. The area under the ROC curve was 0.863 (95% C.I.: 0.797, 0.930). The point closest to 1 corresponds to the 60th percentile of the HOMA index distribution in our sample. The HOMA index value at the 60th percentile was 2.28. Cut-off values corresponding to a range of the HOMA index from the 50th to the 75th percentile showed similar distances to 1. HOMA index values for percentiles 50 to 75 ranged from 2.07 to 2.83. In conclusion, the HOMA index could be a useful tool to detect children and adolescents with the metabolic syndrome. HOMA cut-off values need to be defined in the paediatric population; however, values near 3 seem to be adequate. abstract_id: PUBMED:23272432 Effect of body mass index on serum leptin levels. Background: Leptin is the product of the ob gene, an adipose tissue-derived hormone that plays a key role in the regulation of body fat mass by regulating appetite and metabolism while balancing energy intake and energy expenditure. The objective of the study was to evaluate the possible association between serum leptin levels and body mass index (BMI) by gender in an adult age group. Methods: Two hundred seventy subjects aged 20-50 years were randomly selected from the general population of Abbottabad. After complete evaluation, demographic data were recorded and BMI calculated. The subjects were grouped on the basis of BMI. Non-fasting venous blood samples were drawn to measure serum leptin and serum glucose levels. The data were analysed using SPSS-15. Results: Serum leptin levels and differences between genders were significant in all body mass indices. For the normal BMI group the mean values for leptin were 2.6 +/- 1.5 ng/ml in men, and 17.3 +/- 10.2 ng/ml in women. For Group-2, mean leptin levels were 9.9 +/- 6.8 ng/ml in men, and 34.8 +/- 13.6 ng/ml in women. For Group-3, comprising obese subjects, mean values were 21.3 +/- 14.2 ng/ml in men, and 48.21 +/- 21.2 ng/ml in women (p < 0.001). Conclusion: A progressive increase in serum leptin concentration was observed with an increase in BMI.
A significant difference in leptin concentrations between the genders was found in normal, overweight and obese subjects. abstract_id: PUBMED:27358533 Assessment of left ventricular mass index could predict metabolic syndrome in obese children. Background: Childhood obesity is a major risk factor for cardiovascular diseases in children and adults. Objectives: The purpose of this study was to evaluate the serum leptin level and the cardiac changes in normotensive obese children and to study the relationship between left ventricular mass index (LVMI) and serum leptin with the parameters of metabolic syndrome (MS) in obese children. Methods: This study was conducted in al Jeddani Hospital and Ibn Sina College Hospital in Saudi Arabia in the period from July 2012 to December 2013, and included 82 obese children. Their mean age was 10.2 ± 2.8 years; they were divided into 25 obese children with MS and 57 obese children without MS, and 40 healthy age- and sex-matched children were also included in the study as a control group. All children were subjected to clinical assessment including standing height, body weight, body mass index (BMI), waist circumference (WC), and blood pressure measurements. All children received an echocardiographic examination (2-dimensional, M-mode, Doppler, and tissue Doppler echocardiography) and laboratory assessment of serum leptin level, fasting glucose, fasting insulin, the homeostatic model assessment for insulin resistance (HOMA) index, total cholesterol, triglycerides, and high- and low-density lipoprotein profile. Results: BMI, BMI standard deviation score, WC, fasting glucose, fasting insulin, HOMA index and the serum leptin level were significantly higher in obese children compared to the control group (p < 0.05). The LVMI was increased in the obese compared to the control group (p < 0.001), while left ventricular systolic and diastolic functions did not differ between the obese and control groups (p > 0.05). Both LVMI and the serum leptin level correlated significantly and positively with BMI, WC, fasting glucose, fasting insulin, HOMA, triglycerides, and low-density lipoprotein in all obese children, especially the MS group; however, both correlated significantly and negatively with high-density lipoprotein. Conclusion: Assessment of LVMI on routine echocardiographic examination, together with the serum leptin level, might be a feasible and reliable method for the evaluation of obesity and its related cardiovascular risks during childhood that can predict metabolic syndrome and insulin resistance. abstract_id: PUBMED:9854461 Serum leptin level: possible association with haematopoiesis in adolescents, independent of body mass index and serum insulin. 1. The obese gene product leptin, secreted exclusively from adipocytes, was discovered to serve as a satiety factor and to play an important role in regulating body weight. In adults, the serum leptin level reportedly increases with the degree of obesity. Leptin receptors are expressed in various tissues, and recent in vitro studies suggest a role for leptin in haematopoiesis. 2. The present study was designed to clarify the relationship between serum leptin and body mass index, peripheral blood cell counts, serum cholesterol, high-density lipoprotein-cholesterol, insulin and cortisol levels in 299 Japanese male adolescents aged 15-16 years. 3.
With simple linear correlation, log [serum leptin] showed a strong correlation with body mass index (r = 0.56), log [insulin] (r = 0.36) and leucocyte count (r = 0.22) (P < 0.001 for all). There were also correlations with systolic blood pressure, erythrocyte count, haematocrit and high-density lipoprotein-cholesterol (P < 0.01 for all). Even after adjustment for body mass index and log [insulin], log [leptin] correlated with leucocyte (P = 0.004) and erythrocyte (P = 0.057) counts. Stepwise multiple regression analyses revealed log [leptin] to correlate significantly with body mass index, log [insulin] and the leucocyte count (P < 0.005 for all, r^2 = 0.399). 4. To our knowledge, this is the first clinical study to show the possible association of serum leptin level with blood cell counts, independent of body mass index and serum insulin. We conclude that these data further support a role for leptin in haematopoiesis. abstract_id: PUBMED:37242271 Leptin Concentrations Determine the Association between High-Sensitivity C-Reactive Protein Levels and Body Mass Index in Prepubertal Children. Obesity is associated with the presence of low-grade inflammation even during childhood. The dysregulation in the secretion of adipokines, such as leptin, which occurs in obesity states, could be associated with an increase in inflammatory factors already at an early age. In this cross-sectional study, we aimed to investigate the role of leptin levels in the association between body mass index (BMI) and high-sensitivity C-reactive protein (hs-CRP) in healthy schoolchildren. Leptin and hs-CRP levels were analyzed in two pediatric cohorts comprising 684 prepubertal children and 763 adolescents. hs-CRP concentrations correlated significantly with BMI and leptin levels in prepubertal males and females as well as in adolescents. However, after adjusting for leptin concentration, no significant correlation was observed between hs-CRP and BMI in prepubertal children, while the correlations remained significant in adolescents. The same differences were observed when BMI was analyzed according to hs-CRP tertile after adjusting for leptin; mean BMI was not significantly different between hs-CRP tertiles in prepubertal children but was significantly different in adolescents. In conclusion, the fact that leptin concentrations determine the association of BMI with hs-CRP levels in prepubertal children, but not in adolescents, suggests a role for leptin in low-grade inflammation at early ages, while other factors seem to contribute to hs-CRP levels later in life. abstract_id: PUBMED:18055420 Ghrelin and leptin levels in obese adolescents. Relationship with body fat and insulin resistance. Objective: Ghrelin and leptin levels are influenced by body fat (BF%), pubertal stage and possibly insulin resistance (IR). The aim of our study was: 1) to compare fasting ghrelin and leptin levels between obese and non-obese adolescents, and 2) to investigate possible correlations of these hormones with BF%, as well as IR. Design: Twenty obese insulin-resistant (IR) adolescents, twenty obese non-IR (NIR) and fifteen healthy non-obese, age-matched adolescents were studied. In all participants, height, weight, body mass index (BMI) and BF% were measured. Fasting glucose, insulin, ghrelin and leptin levels were determined. IR was assessed using the HOMA-IR index. Results: BMI, BF%, insulin and HOMA-IR values were positively correlated with leptin and negatively with ghrelin levels.
A negative correlation between circulating leptin and ghrelin levels was found. A suggestive positive correlation between leptin levels and BF%, independent of BMI, was also observed (P=0.075). Ghrelin levels were correlated with insulin levels and HOMA-IR, independent of BMI (P=0.077). Conclusions: Obesity and IR may play an important role in the release of ghrelin as well as in the negative correlation between ghrelin and leptin. Answer: Fasting serum leptin levels have been studied as potential indicators for overweight screening in children and adolescents. The research indicates that serum leptin concentrations are distributed according to an exponential curve as BMI values increase, suggesting a relationship between leptin levels and body fat mass (PUBMED:9877255). This relationship is further supported by findings that show serum leptin levels are significantly correlated with body mass index (BMI) in children and adolescents with insulin-dependent diabetes mellitus (IDDM) (PUBMED:9625360). Additionally, leptin levels are higher in obese subjects compared to normal-weight subjects and vary significantly with pubertal stages, with females generally having greater leptin concentrations than males (PUBMED:9347406). The study on the relationship between serum leptin levels and body mass index (BMI) in children and adolescents from central Italy found that BMI percentiles were higher than those from other European and USA populations. The study proposed criteria and a statistical approach that could be useful in identifying BMI cut-off values when screening for overweight in this demographic (PUBMED:9877255). In conclusion, fasting serum leptin levels appear to be useful in the analysis of BMI cut-off values for overweight screening in children and adolescents. The correlation between leptin levels and BMI, as well as the distribution pattern of leptin in relation to BMI z-scores, supports the potential use of leptin as a biomarker for significant increases in fat mass, which is a key factor in determining overweight status (PUBMED:9877255). However, it is important to consider that leptin levels can also be influenced by factors such as pubertal development, sex, and metabolic conditions like IDDM (PUBMED:9625360; PUBMED:9347406). Therefore, while leptin can be a useful tool, it should be used in conjunction with other assessments for a comprehensive evaluation of overweight and obesity in the pediatric population.
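Two quantities in the abstracts above are simple closed-form computations: PUBMED:16180336 defines the HOMA index as fasting insulin (microU/mL) times fasting glucose (mmol/L) divided by 22.5, and the BMI z-scores in PUBMED:9877255 come from Cole's LMS method. The sketch below implements both, plus the closest-to-(0,1) ROC criterion the HOMA study used to pick its cut-off; the example inputs and the LMS parameters are hypothetical, and the LMS formula shown is the standard one, assumed here to be the variant the authors applied.

    import numpy as np

    def homa_ir(insulin_uU_ml, glucose_mmol_l):
        # HOMA index as defined in PUBMED:16180336.
        return insulin_uU_ml * glucose_mmol_l / 22.5

    def lms_z_score(x, L, M, S):
        # Standard LMS z-score (Cole); L, M, S come from age/sex reference tables.
        return np.log(x / M) / S if L == 0 else ((x / M) ** L - 1) / (L * S)

    def closest_to_ideal_cutoff(fpr, tpr, thresholds):
        # Cut-off minimizing the distance to the perfect corner (0, 1) of the ROC curve.
        return thresholds[int(np.argmin(np.hypot(fpr, 1 - tpr)))]

    # Hypothetical child: fasting insulin 12 microU/mL, fasting glucose 5.0 mmol/L.
    print(round(homa_ir(12, 5.0), 2))   # 2.67, above the 2.28 cut-off reported above
    # Hypothetical L, M, S for BMI at one age/sex point (placeholders, not reference values).
    print(round(lms_z_score(21.0, L=-1.6, M=17.5, S=0.11), 2))

In practice the L, M and S parameters are read from published growth-reference tables for the child's age and sex.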
Instruction: Does radical nephrectomy with immunochemotherapy have any superiority over embolization alone in metastatic renal cell carcinoma? Abstracts: abstract_id: PUBMED:15263794 Does radical nephrectomy with immunochemotherapy have any superiority over embolization alone in metastatic renal cell carcinoma? A preliminary report. Introduction: We evaluated the results and effects of radical nephrectomy followed by immunochemotherapy and of embolization alone on the survival of patients with metastatic renal cell carcinoma. Patients And Methods: The study included 20 patients with histologically confirmed renal cell carcinoma. Ten patients were in the combined therapy group, and the other 10 patients, who were unable to undergo nephrectomy because of poor performance status or unresectable tumor, were in the embolization group. Radical nephrectomy was performed on patients with good performance status (WHO criteria 0-1). Immunochemotherapy (interferon alpha 2a and 5-fluorouracil) was started within 1 month after surgery. A dose of 9 x 10^6 U/day interferon alpha 2a was subcutaneously administered 3 times a week. A dose of 750 mg/m2 5-fluorouracil was administered intravenously over 4 h in the first 5 days of treatment. 5-Fluorouracil therapy was converted to weekly intervals after the first 12 days. Combined therapy was continued for 3 months. Ethanol was used for transarterial embolization. The main renal arteries and parasitic arteries of the tumor were embolized. Results: There were no significant differences in age distribution, sex, affected side, tumor size and T stage between the groups. After completion of the combined therapy, 6 patients showed progression at the first control. Only 1 patient (10%) had stable disease throughout the 10 months after combined therapy. One patient died of myocardial infarction on the 4th day in the embolization group. While progressive disease within the first 3 months was detected in 6 patients, the other 3 patients (30%) had stable disease for 14, 17 and 55 months, respectively. There was no complete response in any group and no patient was alive (all died of renal cell carcinoma) at the time of the analysis of the study data. Whereas the median survival time was 11 months (1-80) (mean +/- SE: 22.2 +/- 9.1) in the combined group, this time was a median of 1 month (1-74) (mean +/- SE: 17.5 +/- 8.6) in the embolization group. There was no statistically significant difference in survival time between the groups (p > 0.05). Conclusion: In this preliminary report, the clinical findings in embolization-group patients were definitively worse than those in the nephrectomy plus immunochemotherapy-group patients. In spite of these differences, combination therapy using radical nephrectomy and immunochemotherapy could not show superiority to embolization alone, especially in terms of survival time. abstract_id: PUBMED:10683065 Comparison of preoperative embolization followed by radical nephrectomy with radical nephrectomy alone for renal cell carcinoma. A series of 474 patients with renal cell carcinoma (RCC), who had radical nephrectomy during a period of 15 years, was studied to assess the prognostic significance of various pathologic parameters (tumor stage [pT], lymph node status, metastasis, tumor grade, venous involvement) and the value of preoperative embolization of the renal artery. There were 20 (4%) pT1, 204 (43%) pT2, 245 (52%) pT3, and 5 (1%) pT4 patients.
All 474 patients underwent nephrectomy, including a group of 118 (25%) patients (24 pT2, 90 pT3, and 4 pT4) who underwent preoperative embolization of the renal artery. To compare treatment outcomes in embolized patients with RCC, a group of 116 (24%) nonembolized patients with RCC was selected. This group was matched for sex, age, stage, tumor size, and tumor grade with the embolized patients (p < 0.01). All important prognostic factors were studied as to their influence on survival by treatment group. The overall 5- and 10-year survival was 62% and 47%, respectively. The 5- and 10-year survival rates were significantly better (p < 0.01) for patients with pT2 than for those with pT3 tumors (79% vs. 50% and 59% vs. 35%, respectively). Involvement of regional lymph nodes (N+) was an important prognostic factor for survival in patients with pT3 tumors. The 5-year survival for pT3 N+ was 39%, compared with 66% in those with pT3N0 (p < 0.01). Preoperative embolization was also an important factor influencing survival. The overall 5- and 10-year survival for the 118 patients embolized before nephrectomy was 62% and 47%, respectively, and it was 35% and 23%, respectively, for the matched group of 116 patients treated with surgery alone (p = 0.01). The most important finding of this study was the apparent importance of preoperative embolization in improving patients' survival. This finding needs to be interpreted with caution and confirmed in a prospective randomized trial. abstract_id: PUBMED:10658462 Laparoscopic radical nephrectomy for renal cell carcinoma--transperitoneal anterior approach. Purpose: To evaluate the usefulness of the transperitoneal anterior approach in laparoscopic radical nephrectomy for patients with renal cell carcinoma. Material: Ten patients (eight males and two females) with small renal cell carcinoma less than 7 cm in diameter were treated with laparoscopic radical nephrectomy between June 1997 and May 1999. Clinical stage was T1aN0M0 in 6 cases and T1bN0M0 in 4 cases. Method: Of the 10 patients, the initial four received preoperative embolization of the renal artery. Under general anesthesia, laparoscopic radical nephrectomy via the transperitoneal anterior approach was performed in all patients. This procedure facilitates early access to the renal vessels, as does open radical nephrectomy. The kidney was dissected en bloc with Gerota's fascia and delivered through the small skin incision without morcellation. Result: Two patients needed conversion to open surgery. The mean operating time of the eight patients, excluding the conversion cases, was 247 minutes and the mean blood loss was 258 ml. There was no major complication associated with the laparoscopic procedure. Conclusion: The transperitoneal anterior approach in laparoscopic radical nephrectomy is a suitable and useful procedure for the treatment of small renal cell carcinoma because it enables us to avoid possible tumor manipulation by early access to the renal vessels, as in open radical nephrectomy. abstract_id: PUBMED:10647648 Complete radical nephrectomy and vena caval thrombectomy during circulatory arrest. Purpose: A modification of radical nephrectomy for renal carcinoma and vena caval tumor thrombectomy for supradiaphragmatic tumor extension under hypothermic circulatory arrest is presented. Materials And Methods: Nephrectomy was performed during the circulatory arrest time in 16 consecutive patients during the last 4 years. Results: Average hypothermic circulatory arrest time increased by 6 minutes.
By eliminating manipulation of the tumor and thrombus before circulatory arrest, potential thrombus fragmentation and embolization were minimized. There were no significant differences in blood loss or complications compared to a prior series of 10 patients undergoing the procedure using conventional techniques. Conclusions: Performance of complete radical nephrectomy along with vena caval embolectomy during circulatory arrest increases the safety of the procedure without significant morbidity. abstract_id: PUBMED:34754596 Percutaneous Lymphatic Maceration and Glue Embolization for High-output Chylous Ascites after Robot-assisted Laparoscopic Nephrectomy and Lymphadenectomy. To propose minimally invasive percutaneous techniques in the management of high-output chylous ascites, a known potential complication of retroperitoneal surgery associated with significant morbidity and mortality. Management has traditionally been based on successful treatment reported in the literature. However, refractory or high-output leaks often prove difficult to treat, and there is little evidence on superior management. We report percutaneous maceration and embolization for the management of a high-volume abdominal chyle leak after robot-assisted laparoscopic (RAL) radical nephrectomy and lymph node dissection for renal cell carcinoma. A 68-year-old male with incidentally found renal cell carcinoma underwent RAL radical nephrectomy with paraaortic lymph node dissection. He initially improved after surgery but developed significant abdominal pain and distension approximately 7 weeks postoperatively. This proved to be chyloperitoneum. Conservative management was initiated, but after continued high-output (>1 L) fluid drainage, we pursued adjunct intervention involving interventional radiology percutaneous procedures. This included lymphatic maceration and glue embolization of leaking lymphatics. The patient tolerated the percutaneous procedures well, with significant improvement in drain output ultimately leading to complete resolution of the ascites without further complication. Similar interventions have previously been reported in the literature for cases of chylothorax with success. However, there is a lack of reports on utilizing this minimally invasive procedure for chyloperitoneum after retroperitoneal urologic surgery. We report our successful experience with percutaneous lymphatic maceration and embolization for high-output chylous ascites after RAL radical nephrectomy with lymphadenectomy. We believe that early initiation of these percutaneous techniques can achieve timely resolution and should be considered in the management of these patients. abstract_id: PUBMED:23925272 Radiofrequency ablation versus radical nephrectomy: clinical outcomes for stage T1b renal cell carcinoma. Purpose: To compare clinical outcomes of radiofrequency (RF) ablation retrospectively with those after radical nephrectomy in patients with stage T1b renal cell carcinoma (RCC). Materials And Methods: This retrospective study was approved by the institutional review board, and the requirement to obtain written informed consent was waived. From June 2002 to March 2012, 60 patients (mean age, 65.2 years; age range, 39-86 years) with a single RCC measuring 4.1-7.0 cm (stage T1b) underwent RF ablation (n = 21) or radical nephrectomy (n = 39). Selective renal artery embolization was performed before RF ablation in eight patients.
The overall, RCC-related, and disease-free survival rates, the percentage decrease in the glomerular filtration rate (GFR), and safety were compared by using the log-rank (survival), paired and Student t (GFR), and Fisher exact (safety) tests. Results: The overall survival rate was significantly lower in the RF ablation group than in the radical nephrectomy group (48% vs 97% at 10 years, respectively; 95% confidence interval [CI]: 12.4%, 76.7% vs 78.2%, 99.5%; P < .009). The RCC-related survival rate (94% [95% CI: 62.6%, 99.1%] with RF ablation vs 100% with radical nephrectomy at 10 years) and the disease-free survival rate (88% [95% CI: 59.2%, 96.9%] with RF ablation vs 84% [95% CI: 60.6%, 94.3%] with radical nephrectomy at 10 years, P = .99) were comparable between the two groups. No treatment-related deaths occurred. Although major complication rates were similar between the two patient groups (8.0% [two of 25 patients] vs 5.1% [two of 39 patients], P = .61), the percentage decrease in the GFR was significantly lower in the RF ablation group than in the radical nephrectomy group at the last follow-up (12.5% ± 23.4 vs 32.3% ± 20.8, respectively; P < .003). Conclusion: RF ablation is a safe procedure for patients at substantial surgical risk for radical nephrectomy, providing comparable RCC-related and disease-free survival and preserving renal function. abstract_id: PUBMED:20081645 The role of embolization in the management of tumour recurrence after radical nephrectomy. This article presents the case of a 56-year-old woman who developed a tumour recurrence following radical nephrectomy. This was invading the descending colon and causing severe gastrointestinal haemorrhage. Angiography revealed neovascularization from the T11 and T12 intercostal arteries, which were successfully embolized percutaneously. abstract_id: PUBMED:21516476 Chronic pulmonary diseases are independent risk factors for complications after radical nephrectomy. Aim: We aimed to identify the prognostic factors, and new parameters such as Charlson's comorbidity index (CCI), that might predict postoperative complication rates in a radical nephrectomy cohort. We also evaluated the correlation of CCI with the Clavien postoperative complication scale (CPCS). Materials And Methods: Perioperative characteristics of 47 patients undergoing radical nephrectomy were recorded. The following items were assessed: preoperative patient characteristics including age, gender, CCI, American Society of Anesthesiologists (ASA) physical status classification system category, renal and hepatic functions, type of nephrectomy incision, operative time, clinical stage and histopathological subtype of the tumor, and preoperative comorbid conditions including diabetes mellitus, hypertension, chronic pulmonary disease, peptic ulcers, and renal and hepatic dysfunction. Postoperative complications were defined as death, wound infection, pneumonia, atelectasis, pulmonary emboli, anemia, sepsis, cardiac arrhythmia, myocardial infarction, and deep vein thrombosis. In addition, postoperative complications were graded according to the CPCS and accepted as those occurring within 30 days. Results: Preoperative chronic pulmonary diseases were found to be significant risk factors for the development of postoperative complications. The age-adjusted odds ratio was 7.112 for chronic pulmonary disease.
The mean CCI in patients who did not develop any postoperative complication was 4.49 ± 1.95, whereas it was 5.75 ± 2.60 for patients who developed postoperative complications (P = 0.138). In Spearman correlation analysis, the CCI value was found to be significantly correlated with CPCS grade (P = 0.011, rho value = 0.366). Conclusion: Presence of chronic pulmonary disease is a strong predictor of postoperative complications after radical nephrectomy. Patients with higher preoperative CCI scores may have higher postoperative CPCS grades. Additional studies are warranted. abstract_id: PUBMED:6887394 Results of radical nephrectomy with extensive lymphadenectomy for renal cell carcinoma. Radical nephrectomy with extensive lymphadenectomy was performed by the same surgeon on 104 consecutive patients with renal cell carcinoma. No significant differences in overall survival owing to age, sex and preoperative embolization were noted. Only extension of the disease proved to be important as a prognostic factor. In the light of the statistical analysis of this variable, the prognosis for patients with renal cell carcinoma has been classified as 1) good--patients with tumors confined to the kidney, 2) intermediate--patients with perirenal fat involvement, lymph node metastases and venous extension, and 3) poor--patients with distant metastases. abstract_id: PUBMED:12050516 Hand assisted laparoscopic radical nephrectomy for renal cell carcinoma with inferior vena caval thrombus. Purpose: To our knowledge we present the initial clinical report of hand assisted laparoscopic radical nephrectomy for renal cell carcinoma with tumor thrombus extending into the inferior vena cava. Materials And Methods: A 76-year-old man was referred to our medical center with a 12.5 x 10 cm. stage T3b right renal tumor extending into the inferior vena cava. The caval thrombus was limited and completely below the level of the hepatic veins. After preoperative renal embolization, via the hand assisted transperitoneal approach the right kidney was completely dissected with the renal hilum. Proximal and distal control of the inferior vena cava was obtained with vessel loops and a single lumbar vein was divided between clips. An endoscopic Satinsky vascular clamp was placed on the inferior vena cava just beyond its juncture with the right renal vein, thereby encompassing the caval thrombus. The inferior vena cava was opened above the Satinsky clamp and a cuff of the inferior vena cava was removed contiguous with the renal vein. The inferior vena cava was repaired with continuous 4-zero vascular polypropylene suture and the Satinsky clamp was then removed. A literature search failed to reveal any similar reports of laparoscopic radical nephrectomy for stage T3b renal cell cancer. Results: Surgery was completed without complication with an estimated 500 cc blood loss. Pathological testing confirmed stage T3b grade 3 renal adenocarcinoma with negative inferior vena caval and soft tissue margins. Conclusions: The introduction of vascular laparoscopic instrumentation and the hand assisted approach enabled us to extend the indications for laparoscopic radical nephrectomy to patients with minimal inferior vena caval involvement. Answer: Based on the provided abstracts, radical nephrectomy followed by immunochemotherapy does not show superiority over embolization alone in terms of survival time for patients with metastatic renal cell carcinoma.
In a preliminary report, there was no statistically significant difference in survival time between patients who underwent radical nephrectomy with immunochemotherapy and those who received embolization alone (PUBMED:15263794). Despite some clinical findings suggesting worse outcomes for patients in the embolization group, the combination therapy could not demonstrate an advantage in survival. However, another study highlighted the potential benefits of preoperative embolization, showing that it was an important factor influencing survival. Patients who underwent preoperative embolization before nephrectomy had significantly better 5- and 10-year survival rates compared to those treated with surgery alone (PUBMED:10683065). This suggests that while radical nephrectomy with immunochemotherapy may not be superior to embolization alone, preoperative embolization itself could be beneficial when combined with radical nephrectomy. It is important to note that the findings from these studies should be interpreted with caution and that more research, such as prospective randomized trials, may be needed to confirm the results and to fully understand the role of these treatments in managing metastatic renal cell carcinoma.
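The survival comparisons running through these abstracts (median survival, 5- and 10-year rates, log-rank tests) all rest on the product-limit estimator. Below is a minimal, dependency-free sketch of that estimator with right-censoring; the follow-up times are hypothetical stand-ins loosely shaped like the 11-month versus 1-month medians reported in PUBMED:15263794, not the study's actual data.

    from collections import Counter

    def kaplan_meier(times, events):
        # Product-limit (Kaplan-Meier) survival curve.
        # times: follow-up in months; events: 1 = death observed, 0 = censored.
        deaths = Counter(t for t, e in zip(times, events) if e)
        censored = Counter(t for t, e in zip(times, events) if not e)
        at_risk, surv, curve = len(times), 1.0, []
        for t in sorted(set(times)):
            d = deaths.get(t, 0)
            if d:                          # the curve only steps down at event times
                surv *= 1 - d / at_risk
                curve.append((t, round(surv, 3)))
            at_risk -= d + censored.get(t, 0)
        return curve

    # Hypothetical follow-up data in months -- illustrative only.
    combined  = kaplan_meier([3, 6, 11, 14, 22, 80], [1, 1, 1, 1, 1, 1])
    embolized = kaplan_meier([1, 1, 1, 2, 17, 74],  [1, 1, 1, 1, 1, 1])
    print(combined)   # [(3, 0.833), (6, 0.667), (11, 0.5), (14, 0.333), (22, 0.167), (80, 0.0)]

With real data, a log-rank test on the two curves would supply the p-value the abstracts quote.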
Instruction: Does Conservative Surgery for Breast Carcinoma Still Require Axillary Lymph Node Evaluation? Abstracts: abstract_id: PUBMED:27840075 Does Conservative Surgery for Breast Carcinoma Still Require Axillary Lymph Node Evaluation? A Retrospective Analysis of 1156 Consecutive Women With Early Breast Cancer. Background: The role of axillary surgery for early breast carcinoma treated with conservative surgery and radiotherapy is currently the subject of considerable investigation. Recent studies have supported the noninferiority of avoiding axillary surgery in terms of overall survival when sentinel lymph node biopsy (SLNB) presents with ≤ 2 positive lymph nodes, thus sparing the patients from complications. There are some ongoing studies investigating the possibility of omitting SLNB. Axillary study seems to be sufficiently replaced by SLNB for staging the disease. Axillary surgery maintains a therapeutic role in the presence of > 2 metastatic lymph nodes at SLNB. Patients And Methods: We performed a retrospective analysis of 1156 patients with early breast cancer to estimate the real incidence of patients with T1 tumors presenting > 2 metastatic lymph nodes. Results: Of the 1156 cases, only 106 (9.2%) had > 2 axillary metastatic lymph nodes. More specifically, 38 (4.3%) of 884 T1 cases, and 6 (2.3%) of 257 T1b cases, had > 2 metastatic lymph nodes. Conclusion: The advantage of axillary surgery seems to be limited only to a specific subgroup of T1 patients who are undergoing conservative surgery plus radiotherapy. The ongoing studies on avoiding SLNB will likely prove the noninferiority of omitting biopsy because these studies are conducted in the whole population of early breast cancers. It is necessary to identify more accurately the subpopulation of patients who may benefit from axillary surgery. abstract_id: PUBMED:36702672 Axillary lymph node dissection: Dead or still alive? Although sentinel lymph node biopsy is now the primary method of axillary staging and is therapeutic for patients with limited nodal disease, axillary lymph node dissection (ALND) is still necessary for staging in groups where sentinel lymph node biopsy has not been proven to be accurate and to maintain local control in those with a heavy axillary tumor burden. Additionally, newer approaches to systemic therapy tailored to risk level sometimes necessitate knowledge of the number of involved axillary nodes, which can only be obtained with ALND. Ongoing trials will address whether there are additional circumstances where radiotherapy can replace ALND. abstract_id: PUBMED:36342647 Conservative axillary surgery is emerging in the surgical management of breast cancer. Axillary lymph node dissection (ALND) has been the standard axillary treatment for breast cancer for a long time. However, ALND is associated with postoperative morbidities, including local sensory dysfunction, reduced shoulder mobility and, most notably, arm lymphedema. Recently, ALND can be avoided not only in clinically node-negative (cN0) patients with negative sentinel lymph nodes (SLNs), but also in patients with fewer than 3 positive SLNs receiving breast radiation, axillary radiation, or a combination of the two. Moreover, SLN biopsy has been adopted for use in clinically node-positive (cN+) patients presenting as cN0 after neoadjuvant chemotherapy (NAC); ALND may be avoided in cN+ patients who convert to SLN-negative following NAC. Patients who undergo SLN biopsy alone have fewer postsurgical morbidities than those who undergo ALND.
Nevertheless, ALND is still required in a select group of patients. A variety of conservative approaches to ALND have been developed to spare arm lymphatics and so minimize arm lymphedema. These conservative procedures seem to decrease the incidence of lymphedema without increasing axillary recurrence. In the era of effective multimodality therapy, full conventional ALND removing all microscopic axillary disease may now be unnecessary in both cN0 patients and cN+ patients. Regardless, emerging procedures for ALND should still be considered investigational approaches, as further studies with longer follow-up are necessary to determine the safety of conservative ALND to spare arm lymphatics. abstract_id: PUBMED:38001613 Conservative Axillary Surgery May Prevent Arm Lymphedema without Increasing Axillary Recurrence in the Surgical Management of Breast Cancer. Axillary lymph node dissection (ALND) has been associated with postoperative morbidities, including arm lymphedema, shoulder dysfunction, and paresthesia. Sentinel lymph node (SLN) biopsy emerged as a method to assess axillary nodal status and possibly obviate the need for ALND in patients with clinically node-negative (cN0) breast cancer. The majority of breast cancer patients are eligible for SLN biopsy only, so ALND can be avoided. However, there are subsets of patients in whom ALND cannot be eliminated. ALND is still needed in patients with three or more positive SLNs or those with gross extranodal or matted nodal disease. Moreover, ALND has conventionally been performed to establish local control in clinically node-positive (cN+) patients with a heavy axillary tumor burden. The sole method to avoid ALND is through neoadjuvant chemotherapy (NAC). Recently, various forms of conservative axillary surgery have been developed in order to minimize arm lymphedema without increasing axillary recurrence. In the era of effective multimodality therapy, conventional ALND may not be necessary in either cN0 or cN+ patients. Further studies with a longer follow-up period are needed to determine the safety of conservative axillary surgery. abstract_id: PUBMED:38190968 Can we avoid axillary lymph node dissection in patients with node positive invasive breast carcinoma? Objectives: The indications and modalities of breast and axillary surgery are undergoing profound change, with the aim of personalizing surgical management while avoiding over-treatment. To update best practices for axillary surgery, four questions were selected by the Senology Commission of the Collège National des Gynécologues et Obstétriciens Français (CNGOF), focusing on, firstly, the definition and evaluation of targeted axillary dissection (TAD) techniques; secondly, the possibility of surgical de-escalation in case of initial lymph node involvement while performing initial surgery; thirdly, in case of surgery following neo-adjuvant systemic therapy (NAST); and fourthly, contra-indications to de-escalation of axillary surgery to allow access to particular adjuvant systemic therapies. Methods: The Senology Commission based its responses primarily on an analysis of the international literature, clinical practice recommendations and national and international guidelines. Results: Firstly, TAD is a technique that combines excision of the clipped metastatic axillary node(s) and the axillary sentinel lymph nodes (ASLNs). The detection rate and sensitivity are increased, but the technique still needs to be standardized and practices better evaluated.
Secondly, TAD represents an alternative to axillary clearance in cases of metastatic involvement of a single node that can be resected. Thirdly, neither TAD nor ASLN alone is recommended in France after NAST outside of clinical trials, although it is used in several countries in cases of complete pathological response in the lymph nodes, and when at least three lymph nodes have been removed. Fourthly, as some adjuvant targeted therapies are indicated in cases of lymph node invasion of more than three lymph nodes, the place of TAD in this context remains to be defined. Conclusion: Axillary surgical de-escalation can limit the morbidity of axillary clearance. Once it has been proved that TAD does not reduce patient survival, it will most probably replace axillary clearance in well-defined indications. This will require prior standardization of the method and its indications and contra-indications, particularly to enable the use of new targeted therapies. abstract_id: PUBMED:38186556 Technetium-99-Guided Axillary Lymph Node Identification: A Case Report of a Novel Technique for Targeted Lymph Node Excision Biopsy for Node Positive Breast Cancer After Neoadjuvant Chemotherapy. Targeted axillary lymph node identification for breast cancer involves localization and removal of previously marked metastatic lymph nodes after the completion of neoadjuvant chemotherapy (NACT), when clinical and radiological complete responses of the axillary nodes are achieved. Traditionally, axillary lymph node dissection is performed for patients with node positive disease, but the high rates of pathological complete response now seen after NACT have ushered in lower-morbidity techniques such as sentinel lymph node excision biopsies, targeted axillary lymph node dissection and targeted axillary lymph node identification (clip node identification) in node positive disease which has converted to clinically/radiologically node negative. The latter two techniques often require the use of expensive seeds and advanced localization techniques. Here we describe the case of a 59-year-old woman who was diagnosed with node positive invasive breast cancer and was sequenced with NACT. We developed a novel technique, in which technetium-99m was injected directly into a previously clipped metastatic axillary lymph node, which was then localized intra-operatively with the Neoprobe gamma detection system and removed. This is a relatively low-cost technique that can be easily introduced in limited-resource health systems where radio-guided sentinel lymph node biopsies are already being performed. abstract_id: PUBMED:33342782 Diagnostic accuracy of sentinel lymph node biopsy in determining the axillary lymph node metastasis. Background: Sentinel lymph node biopsy (SLNB) is accepted as the standard procedure to determine axillary lymph node metastasis in early-stage breast cancer. However, in many cases with sentinel lymph node positivity, the axilla does not contain any tumor cells. As a result, the accuracy of SLNB to predict axillary lymph node metastasis must be evaluated. Patients And Methods: One thousand one hundred and fourteen women operated on for breast cancer were retrospectively examined. Breast cancer patients without axillary metastasis on clinical examination who had undergone SLNB were included in the study. Sentinel lymph node positivity and axillary lymph node positivity were compared. Results: Among the 1114 women operated on for breast cancer, 230 were clinically node negative preoperatively and underwent SLNB.
Eighty-three (36%) of the patients were SLNB positive and underwent axillary dissection. Forty-three (51.8%) of them had tumor-positive axillary lymph nodes and 40 (48.2%) of them had tumor-negative axillary lymph nodes. Interpretation: In 48.2% of the patients, a positive sentinel lymph node did not demonstrate a positive axilla. This finding supports sparing axillary dissection in patients with favorable prognostic factors even if the sentinel lymph node is found to be positive. Conclusion: Axillary lymph node dissection (ALND) may be spared even if there are macrometastatic sentinel lymph nodes in patients with favorable tumor types who will undergo breast-conserving surgery. abstract_id: PUBMED:31662523 Axillary Dissection vs. no Axillary Dissection in Breast Cancer Patients With Positive Sentinel Lymph Node: A Single Institution Experience. Background/aim: Axillary surgery of breast cancer patients is undergoing a paradigm shift, as axillary lymph node dissection's (ALND) usefulness is being questioned in the treatment of patients with tumor-positive sentinel lymph node biopsy (SLNB). The aim of this study was to investigate the overall survival (OS) and relapse-free survival (RFS) of patients with positive SLNB treated with ALND or not. Patients And Methods: We investigated 617 consecutive patients with cN0 operable breast cancer with positive SLNB undergoing mastectomy or conservative surgery. A total of 406 patients underwent ALND and 211 were managed expectantly. Results: No significant difference in OS and RFS was found between the two groups. The incidence of loco-regional recurrence in the SLNB-only group and the ALND group was low and not significant. Conclusion: The type of breast cancer surgery and the omission of ALND do not improve the OS or RFS rate in cases with a metastatic SLN. abstract_id: PUBMED:33673071 Dedicated Axillary MRI-Based Radiomics Analysis for the Prediction of Axillary Lymph Node Metastasis in Breast Cancer. Radiomics features may contribute to increased diagnostic performance of MRI in the prediction of axillary lymph node metastasis. The objective of the study was to predict preoperative axillary lymph node metastasis in breast cancer using clinical models and radiomics models based on T2-weighted (T2W) dedicated axillary MRI features with node-by-node analysis. From August 2012 until October 2014, all women who had undergone dedicated axillary 3.0T T2W MRI, followed by axillary surgery, were retrospectively identified, and available clinical data were collected. All axillary lymph nodes were manually delineated on the T2W MR images, and quantitative radiomics features were extracted from the delineated regions. Data were partitioned patient-wise to train 100 models using different splits for the training and validation cohorts to account for multiple lymph nodes per patient and class imbalance. Features were selected in the training cohorts using recursive feature elimination with repeated 5-fold cross-validation, followed by the development of random forest models. The performance of the models was assessed using the area under the curve (AUC). A total of 75 women (median age, 61 years; interquartile range, 51-68 years) with 511 axillary lymph nodes were included. On final pathology, 36 (7%) of the lymph nodes had metastasis. A total of 105 original radiomics features were extracted from the T2W MR images. Each cohort split resulted in a different number of lymph nodes in the training cohorts and a different set of selected features.
Performance of the 100 clinical and radiomics models showed a wide range of AUC values, of 0.41-0.74 and 0.48-0.89 in the training cohorts, respectively, and 0.30-0.98 and 0.37-0.99 in the validation cohorts, respectively. With these results, it was not possible to obtain a final prediction model. Clinical characteristics and dedicated axillary MRI-based radiomics with node-by-node analysis did not contribute to the prediction of axillary lymph node metastasis in breast cancer based on data where variations in acquisition and reconstruction parameters were not addressed. abstract_id: PUBMED:35864309 Non-invasive predictors of axillary lymph node burden in breast cancer: a single-institution retrospective analysis. Purpose: Axillary staging is an important prognostic factor in breast cancer. Sentinel lymph node biopsy (SNB) is currently used to stage patients who are clinically and radiologically node-negative. Since the establishment that axillary node clearance (ANC) does not improve overall survival in breast-conserving surgery for patients with low-risk biological cancers, axillary management has become increasingly conservative. This study aims to identify and assess the clinical predictive value of variables that could play a role in the quantification of axillary burden, including the accuracy of quantifying abnormal axillary nodes on ultrasound. Methods: A retrospective analysis was conducted of hospital data for female breast cancer patients receiving an ANC at our centre between January 2018 and January 2020. The reference standard for axillary burden was surgical histology following SNB and ANC, allowing categorisation of the patients under 'low axillary burden' (2 or fewer pathological macrometastases) or 'high axillary burden' (> 2). After exploratory univariate analysis, multivariate logistic regression was conducted to determine relationships between the outcome category and candidate predictor variables: patient age at diagnosis, tumour focality, tumour size on ultrasound and number of abnormal lymph nodes on axillary ultrasound. Results: One hundred and thirty-five patients were included in the analysis. Logistic regression showed that the number of abnormal lymph nodes on axillary ultrasound was the strongest predictor of axillary burden and statistically significant (P = 0.044), with a sensitivity of 66.7% and specificity of 86.8% (P = 0.011). Conclusion: Identifying the number of abnormal lymph nodes on preoperative ultrasound can help to quantify axillary nodal burden and identify patients with high axillary burden, and should be documented as standard in axillary ultrasound reports of patients with breast cancer. Answer: Conservative surgery for breast carcinoma has traditionally included axillary lymph node evaluation to stage the disease and guide treatment decisions. However, the necessity of axillary lymph node dissection (ALND) is being reconsidered, especially in cases of early breast cancer treated with conservative surgery and radiotherapy. Recent studies have supported the noninferiority of avoiding axillary surgery when sentinel lymph node biopsy (SLNB) presents with ≤ 2 positive lymph nodes, thus sparing patients from complications such as lymphedema, shoulder dysfunction, and paresthesia (PUBMED:27840075; PUBMED:36342647; PUBMED:38001613). SLNB has emerged as a method to assess axillary nodal status and possibly obviate the need for ALND in patients with clinically node-negative (cN0) breast cancer.
The majority of breast cancer patients are eligible for SLNB only, so ALND can be avoided in many cases. However, ALND is still necessary for staging in groups where SLNB has not been proven to be accurate and to maintain local control in those with a heavy axillary tumor burden (PUBMED:36702672; PUBMED:36342647; PUBMED:38001613). Ongoing trials are investigating whether radiotherapy can replace ALND in additional circumstances. Conservative approaches to ALND have been developed to spare arm lymphatics and minimize arm lymphedema, and these procedures seem to decrease the incidence of lymphedema without increasing axillary recurrence (PUBMED:36342647; PUBMED:38001613). In the context of neoadjuvant chemotherapy (NAC), ALND may be avoided in clinically node-positive (cN+) patients who convert to SLN-negative following NAC. Targeted axillary dissection (TAD) and other techniques are being evaluated as alternatives to axillary clearance in cases of metastatic involvement of a single node that can be resected (PUBMED:38190968; PUBMED:38186556). In conclusion, while conservative surgery for breast carcinoma is moving towards less invasive axillary surgery, axillary lymph node evaluation remains an important component of breast cancer management. The decision to perform ALND or SLNB should be individualized based on the tumor characteristics, patient factors, and the results of ongoing clinical trials. The goal is to personalize surgical management to avoid overtreatment while ensuring adequate disease control (PUBMED:27840075; PUBMED:36702672; PUBMED:36342647; PUBMED:38001613; PUBMED:38190968; PUBMED:38186556).
Instruction: Is routine urinary tract investigation necessary for children with monosymptomatic primary nocturnal enuresis? Abstracts: abstract_id: PUBMED:11597547 Is routine urinary tract investigation necessary for children with monosymptomatic primary nocturnal enuresis? Objectives: To investigate in a prospective study the role of bladder function and to compare the results of urinary tract ultrasonography and urinalysis in children with and without primary nocturnal enuresis because, although this is a common problem in children, the etiology and mechanisms of the disorder have not been elucidated. Methods: The study included 106 children with monosymptomatic primary nocturnal enuresis and a control group of 57 children with no history of voiding dysfunction, aged 5 to 19 years. All children underwent urinalysis, bladder and upper urinary tract ultrasonography, and uroflowmetry. The bladder capacity, bladder wall thickness, and postvoid residual volume were measured using ultrasonography. The findings were compared between the enuresis and control groups according to age: 5 to 9 years, 10 to 14 years, and 15 to 19 years. Results: The mean age was 9.6 +/- 3.1 years in the nocturnal enuresis group and 9.4 +/- 3.3 years in the control group (P = 0.727). The mean number of defecations per week was significantly lower statistically in the enuresis group than in the control group in the age categories of 5 to 9 years and 10 to 14 years (P = 0.038 and P = 0.018, respectively), and the mean number of urinations per day was significantly higher statistically in the enuresis group than in the control group in the age groups of 5 to 9 years and 10 to 14 years (P = 0.002 and P = 0.001, respectively). The bladder capacity, bladder wall thickness, postvoid residual volume, uroflowmetry maximal flow rate, and average flow rate were not significantly different statistically between the children with primary nocturnal enuresis and the control group in the three age brackets. Urinary infection was detected in 2 children (1.88%) in the nocturnal enuresis group and none of the children in the control group (P = 0.547). Upper urinary tract abnormalities detected by ultrasonography were seen in 3 children (2.83%) in the nocturnal enuresis group and 1 child (1.75%) in the control group, revealing no statistical significance (P = 0.671). Conclusions: Our findings show that the ultrasonographic and uroflowmetry findings on bladder function and the upper urinary system and the incidence of urinary infection are similar in children with and without nocturnal enuresis. Obtaining a voiding and elimination diary in conjunction with a good history may be beneficial in children with monosymptomatic primary nocturnal enuresis. In addition, routine urinalysis may be unnecessary in the evaluation of children with monosymptomatic primary nocturnal enuresis after obtaining a careful and complete history of the voiding dysfunction. abstract_id: PUBMED:18924504 The evaluation of urinary tract dysfunction in children with monosymptomatic primary nocturnal enuresis Unlabelled: The reason for our search was various investigations about urinary tract dysfunctions in enuretic children. Aim: The aim of our study was estimation of lover urinary tract function in children with monosymptomatic primary nocturnal enuresis without positive reaction for a long non pharmacological therapy. 
Material And Methods: Fifty-four children underwent urodynamic investigation (uroflowmetry and cystometry) after 9-12 months of behavioral therapy and a short course of pharmacological treatment (desmopressin). Results: Urodynamic disorders were found in 44 of the 54 children evaluated. In 34 children this was an overactive bladder, and in 6 patients detrusor-sphincter dyscoordination was found. Five children had decreased bladder capacity. In addition to non-pharmacological treatment, anticholinergics or baclofen were used depending on the results of the urodynamic tests. A complete response to the treatment (no bedwetting at all) was observed in 34 children (in 9 of them after 3 months of therapy, in 16 after 6 months of therapy, and in 12 after 12 months of therapy). The remaining children had a decreased number of wet nights per month. Conclusion: The pharmacological treatment of urodynamic disorders helps children with monosymptomatic primary nocturnal enuresis to lose this symptom. abstract_id: PUBMED:25936863 Advances in the management of primary monosymptomatic nocturnal enuresis in children. Intermittent incontinence of urine in a sleeping child who has previously been dry for less than 6 months without any other lower urinary tract symptoms is considered to be primary monosymptomatic nocturnal enuresis (PMNE). Although most children outgrow PMNE with age, the psychological impact on the child warrants parental education and patient motivation and treatment. Motivational therapy, alarm therapy and drug therapy are the mainstay of treatment. Motivational and alarm therapy have better success rates than drug therapy alone. Desmopressin is the commonly used first-line drug and is best for short-term relief. Other drugs such as anti-cholinergics, imipramine and sertraline are used in resistant cases. This review focuses on the assessment and treatment of PMNE. abstract_id: PUBMED:33293883 Voiding Disorders in Pediatrician's Practice. Voiding disorders usually result from functional disturbance. However, relevant organic diseases must be excluded prior to diagnosis of functional disorders. Additional tests, such as urinalysis or abdominal ultrasound, are required. Further diagnostics are necessary in the presence of alarm symptoms, such as secondary nocturnal enuresis, weak or intermittent urine flow, systemic symptoms, glucosuria, proteinuria, leukocyturia, erythrocyturia, skin lesions in the lumbar region, and altered sensations in the perineum. Functional micturition disorders were thoroughly described in 2006, and revised in 2015, by the ICCS (International Children's Continence Society) and are divided into storage symptoms (increased and decreased voiding frequency, incontinence, urgency, nocturia), voiding symptoms (hesitancy, straining, weak stream, intermittency, dysuria), and symptoms that cannot be assigned to any of the above groups (voiding postponement, holding maneuvers, feeling of incomplete emptying, urinary retention, post micturition dribble, spraying of the urinary stream). Functional voiding disorders are frequently associated with constipation. Bladder and bowel dysfunction (BBD) is diagnosed when lower urinary tract symptoms are accompanied by problems with defecation. Monosymptomatic enuresis is the most common voiding disorder encountered by pediatricians. It is diagnosed in children older than 5 years without any other lower urinary tract symptoms.
Other types of voiding disorders, such as non-monosymptomatic enuresis, overactive and underactive bladder, voiding postponement, bladder outlet obstruction, stress or giggle incontinence, and urethrovaginal reflux, usually require specialized diagnostics and therapy. Treatment of all types of functional voiding disorders is based on non-pharmacological recommendations (urotherapy), and such education should be implemented by primary care pediatricians. abstract_id: PUBMED:24783173 Lower Urinary Tract Urological Abnormalities and Urodynamic Findings of Physiological Urinary Incontinence Versus Non-mono Symptomatic Nocturnal Enuresis in Children. Background: Although 98% of children attain daytime bladder control by three years of age, urinary incontinence is regarded as physiological up to the fifth year of life. Objectives: This study aimed to assess whether lower urinary tract urological abnormalities and abnormal urodynamic findings are infrequent in children with physiological urinary incontinence in contrast to those with non-monosymptomatic nocturnal enuresis (NMNE). Patients And Methods: During a three-year period (2007-2009), 66 neurologically normal children, including 51 children (34 girls, 17 boys) older than five years of age with NMNE and intermittent daytime incontinence, and 15 children with physiological urinary incontinence (eight girls and seven boys) aged four to five years without any known urological abnormalities, were enrolled in the study. Patients with neurologic deficits or known urological anomalies were excluded from the study. Kidney-bladder ultrasonography, voiding cystourethrography (VCUG), and urodynamic studies were performed to evaluate the anatomy of the urinary tract and bladder function. Results: Urinary tract infection was found in 23 (34.8%) children: 17 (33.3%) patients with NMNE and 6 (40%) with physiological urinary incontinence. Out of 48 patients who underwent VCUG, vesicoureteral reflux (VUR) was found in seven and eight children younger and older than five years of age, respectively. Abnormal urodynamic findings were reported in 5 (62.5%) of eight children younger than five years old, and 14 (63.6%) of 22 patients older than 5 years old. Conclusions: VUR might be more frequent in children with physiological urinary incontinence than in the normal population, and might be as common as in NMNE with intermittent daytime incontinence. abstract_id: PUBMED:33186836 Behavioral problems in children with primary monosymptomatic nocturnal enuresis. Background And Objectives: Primary monosymptomatic nocturnal enuresis (PMNE) is a common distressing condition for children and parents. This study aimed to determine the frequencies, severities and characteristics of behavioral problems in PMNE. Methods: This cross-sectional study included 80 children with PMNE (age: 12.58 ± 1.24 yrs.; boys = 58, girls = 22) and 60 healthy children. Behavioral symptoms were assessed by the Strength and Difficulties Questionnaire (SDQ). Results: This study included 80 children (boys/girls ratio = 2.64:1) with PMNE. They had a mean age of 12.58 ± 1.24 yrs. The majority (70%) had a good response to medical treatment. Compared to controls, children with enuresis had higher frequencies of emotional, conduct and hyperactivity-inattention symptoms and peer relationship and prosocial problems, and higher total (P = 0.001) and subscale scores of the SDQ. There was an overlap of behavioral problems in 52.2% of children with nocturnal enuresis.
Compared to children without behavioral symptoms, children with behavioral symptoms were significantly older at presentation (P = 0.046), regardless of gender, residence, and type of or response to medications. Multiple regression analysis showed that emotional [β = 0.053 (95%CI = 0.037-0.084), P = 0.024] and hyperactivity-inattention symptoms [β = 0.063 (95%CI = 0.028-0.097), P = 0.001] were significantly associated with enuresis independently of other problems. Conclusion: PMNE is associated with a higher risk of behavioral problems, particularly emotional and hyperactivity-inattention symptoms, indicating both externalizing and internalizing problems; hence the importance of early non-pharmacological and/or drug interventions. The comorbid behavioral disorders should be treated separately according to evidence-based recommendations to prevent persistence of enuresis and the development of psychiatric disorders in the future. abstract_id: PUBMED:36852848 Correlation between serum copeptin and urinary aquaporin-2 levels in children with primary monosymptomatic nocturnal enuresis. Objectives: To determine the utility of serum copeptin and urinary aquaporin-2 (AQP-2) levels in diagnosing primary monosymptomatic nocturnal enuresis (PMNE) in children. Methods: This study comprised 58 children (30 males and 28 females), aged 9.7 (±2.9) years, with PMNE. Another 29 children (16 males and 13 females) aged 10.2 (±3.3) years without nocturnal enuresis (NE) were recruited as a control group. History taking, clinical examination, and assessment of serum copeptin (blood) and AQP-2 (urine) levels were performed in all participants. Results: Serum copeptin and potassium, urinary AQP-2, and urine creatinine levels were lower in the PMNE group compared to the control group (p < 0.001 for all). No significant differences in body mass index, urine specific gravity, serum sodium, serum creatinine, or estimated glomerular filtration rate were observed between groups. This study evaluated both serum copeptin and AQP-2 levels in healthy and enuretic children. Conclusions: In this study, serum copeptin (blood) and AQP-2 (urine) levels were significantly lower in enuretic patients compared to healthy controls. Further, the measurement of urinary AQP-2 levels is more practical than that of serum copeptin levels due to lower invasiveness. abstract_id: PUBMED:18204235 Overactive bladder in children should be strictly differentiated from monosymptomatic nocturnal enuresis. Introduction: To elucidate the prevalence of concomitant overactive bladder in children with a chief complaint of primary enuresis, and to evaluate the pathogenetic difference between monosymptomatic and non-monosymptomatic enuresis. Materials And Methods: All patients had evidence of primary enuresis. Neurogenic bladder was excluded. Urinary symptoms were evaluated by taking a history from the affected children and parents, and using a questionnaire and bladder diary. Voiding cystourethrography was performed for non-responders to exclude urinary abnormalities. Results: Eighty-eight patients (9.8 +/- 3.1 years old) were enrolled. Initial assessment demonstrated that 24% had undervalued overactive bladders and 9% had constipation. Voiding cystourethrography was performed in 25%, demonstrating mechanical urethral obstructions in 8% and vesicoureteral reflux in 4.5%. Constipation, a history of urinary tract infections, mechanical obstructions and vesicoureteral reflux were strongly associated with non-monosymptomatic enuresis.
Conclusions: Patients with non-monosymptomatic enuresis had a different clinical background, therapeutic response, and pathogenetic abnormalities. In children, an overactive bladder should be strictly differentiated from monosymptomatic enuresis. abstract_id: PUBMED:34016082 Low compliance contributes to insufficient Desmopressin response of primary monosymptomatic nocturnal enuresis and the role of voiding school. Aims: To evaluate the impact of compliance on the therapeutic effects of Desmopressin, as well as the importance of establishing the voiding school for low-compliance children in primary monosymptomatic enuresis treatment. Methods: Eighty-nine patients with primary monosymptomatic enuresis treated with Desmopressin were observed during 2017-2020 at University Children's Hospital Belgrade, Serbia. The average patient age was 7.7 ± 2.4 years; 65 (73%) were boys and 24 (27%) were girls. After 3 months of Desmopressin treatment, the effect of therapy was evaluated according to compliance. After the treatment, low-compliance patients and their parents were advised to attend a voiding school. Results: A significant decrease in the median enuresis frequency was noticed during the Desmopressin treatment (25.0 (20.0-26.0) vs 10.0 (2.0-17.0) per month, before vs after treatment, respectively) (p < 0.001). Patients with low compliance had a poorer response to Desmopressin (p < 0.001). The median enuresis reduction in the good-compliance group was 92.3% (86.7-95%), while in the low-compliance group it was 28.6% (16.7-43.3%). After attending voiding school, there was a significant increase in compliance (p < 0.001), associated with a median percent decrease in enuresis of 84.0% (75.0-95.5%) (p < 0.001). Conclusion: Compliance considerably influences the beneficial effects of Desmopressin. Patients with poor therapeutic effects should be evaluated for compliance and introduced to voiding school. abstract_id: PUBMED:34373073 Influence of constipation on enuresis. Introduction: Constipation has classically been considered a risk factor for enuresis, although an increasing number of publications report a similar prevalence of constipation in both enuretics and non-enuretics. Objective: To determine the influence of constipation in monosymptomatic and non-monosymptomatic enuresis, and to find out the prevalence of the three disorders, as well as of lower urinary tract dysfunction and bladder-bowel dysfunction, in the population. Material And Method: A cross-sectional observational prevalence study on a representative population sample of 5-9 year-old school boys and girls in Galicia, Spain. A questionnaire on urinary and bowel habits was completed in the schools, which included questions from the Paediatric Lower Urinary Tract Scoring System (PLUTSS) diagnostic questionnaire and grading of the lower urinary tract dysfunctions. Enuresis was diagnosed using the International Children's Continence Society (ICCS) criteria, and if it was also associated with diurnal symptoms, it was classified as non-monosymptomatic enuresis. Constipation was evaluated using the Rome III criteria and the adapted Bristol stool scale. Results: A total of 772 questionnaires were included in the study. The prevalence of constipation was 20% and that of enuresis was 9.1% (62.9% monosymptomatic enuresis and 37.1% non-monosymptomatic), with the prevalence of bladder-bowel dysfunction being 5.2%.
It was observed that constipation had no influence on the presence of monosymptomatic enuresis, but it did have an influence on non-monosymptomatic enuresis and lower urinary tract dysfunctions, where it was a significant risk factor. Conclusions: The differential diagnosis between monosymptomatic and non-monosymptomatic enuresis is essential in the initial evaluation of the patient with enuresis, since the therapeutic management and the characteristics of both disorders are different, with constipation only being a risk factor in cases of non-monosymptomatic enuresis. Answer: Routine urinary tract investigation may not be necessary for children with monosymptomatic primary nocturnal enuresis (PMNE). A study including 106 children with PMNE found that ultrasonographic and uroflowmetry findings on bladder function and the upper urinary system, as well as the incidence of urinary infection, were similar in children with and without nocturnal enuresis. The study concluded that obtaining a voiding and elimination diary, along with a good history, may be beneficial, and routine urinalysis may be unnecessary in the evaluation of children with PMNE after obtaining a careful and complete history of the voiding dysfunction (PUBMED:11597547). However, it is important to differentiate PMNE from other types of enuresis or voiding disorders, as the presence of other symptoms or conditions may necessitate further investigation. For example, non-monosymptomatic enuresis, overactive bladder, and bladder-bowel dysfunction (BBD) usually require specialized diagnostics and therapy (PUBMED:33293883). Additionally, constipation has been identified as a significant risk factor for non-monosymptomatic enuresis and lower urinary tract dysfunctions, but not for PMNE (PUBMED:34373073). In cases where children with PMNE do not respond to prolonged non-pharmacological therapy, urodynamic investigations may reveal underlying dysfunctions such as overactive bladder or detrusor-sphincter discoordination, which can then be treated pharmacologically (PUBMED:18924504). Furthermore, compliance with treatment has been shown to significantly influence the beneficial effects of Desmopressin, a common medication used for PMNE, and attending a voiding school can improve compliance and treatment outcomes (PUBMED:34016082). In summary, while routine urinary tract investigation may not be necessary for all children with PMNE, careful assessment is required to ensure that other symptoms or conditions are not present, which would warrant further investigation and a tailored approach to treatment.
Instruction: Should the skin cancer examination be taught in medical school? Abstracts: abstract_id: PUBMED:12224981 Should the skin cancer examination be taught in medical school? Background: The fact that thin melanomas are associated with a greater than 95% survival rate, while later, more deeply invasive melanomas have a 5-year survival rate of less than 10%, demonstrates the potential personal and public health impact of early detection. The majority of patients with skin lesions are seen by nondermatologists who infrequently counsel patients about skin cancer prevention or perform a complete skin examination as part of routine care. We documented the antecedents of physician practice by evaluating medical students' observation, training, performance, and self-reported skill level for the skin cancer examination and sun protection counseling. Methods: Surveys were administered and completed in classrooms and student workshops in each of the 4 medical school years during the spring of 1996 and 1997. We concentrate our analysis on the graduating fourth-year students. Results: Of the 302 fourth-year students enrolled at Boston University School of Medicine, Boston, Mass, in 1996 and 1997, 223 (74%) completed surveys. Among fourth-year students, 52% rated themselves as unskilled in skin cancer examinations. Twenty-eight percent of fourth-year students had never observed a skin cancer examination, 40% had received no training, and 35% had never practiced the examination. However, fourth-year students reporting at least 1 opportunity to observe, train, or practice an examination were 3 times as likely to report themselves as moderately to very skilled as students without such opportunities. Conclusion: If medical student training rates for the skin cancer examination are equally low elsewhere, as is likely, the present data suggest that even brief additions to the current curriculum, integrated into systems teaching, would augment student exposure and likely boost student skill levels. abstract_id: PUBMED:24898482 Curricular factors associated with medical students' practice of the skin cancer examination: an educational enhancement initiative by the integrated skin exam consortium. Importance: As medical school curricula become progressively integrated, a need exists to optimize education related to the skin cancer examination (SCE) for melanoma, a relevant competency gap that influences secondary prevention efforts. Objectives: To identify curricular factors associated with medical students' confidence, intent, and performance regarding the SCE. Design, Setting, And Participants: Survey-based cross-sectional study from the Integrated Skin Exam Consortium at accredited US medical schools among a volunteer sample of second-year students representing 8 geographically varied public and private institutions. Students were administered a questionnaire to assess characteristics, curricular exposures, and educational and practical experiences related to skin cancer, as well as knowledge of melanoma risk and a detection method. Main Outcomes And Measures: Primary outcomes were confidence in performing the SCE, intent to perform an integrated skin examination, and actual performance of the SCE. Results: Physical diagnosis session and clinical encounter were most predictive of confidence in performance of the SCE (odds ratios [ORs], 15.35 and 11.48, respectively). 
Other curricular factors associated with confidence included instruction time of at least 60 minutes on skin cancer (OR, 6.35), lecture on the SCE (OR, 7.54), knowledge of melanoma risk (OR, 3.71), and at least 1 opportunity to observe the SCE (OR, 2.70). Physical diagnosis session and at least 4 opportunities to observe the SCE were most predictive of intent to perform an integrated skin examination (ORs, 4.84 and 4.72, respectively). Other curricular factors associated with intent included knowledge of melanoma risk (OR, 1.83), clinical encounter (OR, 2.39), and at least 1 opportunity to observe the SCE (OR, 1.95). Clinical encounter, physical diagnosis session, and at least 1 opportunity to observe the SCE were most predictive of performance of the SCE (ORs, 21.67, 15.48, and 9.92, respectively). Other curricular factors associated with performance included instruction time of at least 60 minutes on skin cancer (OR, 2.42) and lecture on the SCE (OR, 5.04). Conclusions And Relevance: To augment the practice of the SCE among medical students, course directors may design an integrated curriculum that includes at least 60 minutes of instruction related to melanoma and the SCE; a description of the integrated skin examination as part of the physical diagnosis course; education on high-risk demographic groups, on anatomic sites specific to men and women, and on the ABCDEs of melanoma; and at least 1 opportunity to observe the SCE. abstract_id: PUBMED:16618862 Skin cancer examination teaching in US medical education. Objective: To determine physician preparation for performing the skin cancer examination (SCE). Design: We evaluated medical students' observation, training, and practice of the SCE; hours spent in a dermatology clinic; and self-reported skill level for the SCE by a self-administered survey. Participants: Graduating students at 7 US medical schools during the 2002-2003 academic year. Main Outcome Measures: Percentages of students reporting SCE skill observation, training, and practice. Results: Of 934 students, 659 (70.6%) completed surveys. Twenty-three percent of students had never observed an SCE, 26.7% had never been trained to perform an SCE, and 43.4% had never examined a patient for skin cancer. Only 28.2% rated themselves as somewhat or very skilled in the SCE. This rate dropped to 19.7% among 553 students who had not completed a dermatology elective. Compared with students without training, students who had been trained at least once in the SCE were 7 times more likely to rate themselves as being somewhat or very skilled in the SCE. Sixty-nine percent of students agreed that insufficient emphasis in their medical training was placed on learning about the SCE. Conclusions: This survey documents the need for more consistent training of medical students in the SCE. Even brief curricular additions would augment students' perceived skill levels and improve practice patterns and competencies of future physicians. More frequent and improved SCEs might result in earlier detection of melanoma and nonmelanoma skin cancers by nondermatologists, with significant public health benefits. abstract_id: PUBMED:36753386 Medical school dermatology education: a scoping review. Dermatological diseases are widespread and have a significant impact on the quality of life of patients; however, access to appropriate care is often limited. Improved early training during medical school represents a potential upstream solution.
This scoping review explores dermatology education during medical school, with a focus on identifying the factors associated with optimizing the preparation of future physicians to provide care for patients with skin disease. A literature search was conducted using online databases (Embase, MEDLINE, CINAHL and Scopus) to identify relevant studies. The Joanna Briggs Institute methodological framework for scoping reviews was used, including quantitative and qualitative data analysis following a grounded theory approach. From 1490 articles identified, 376 articles were included. Most studies were from the USA (46.3%), UK (16.2%), Germany (6.4%) and Canada (5.6%). Only 46.8% were published as original articles, with a relatively large proportion either as letters (29.2%) or abstracts (12.2%). Literature was grouped into three themes: teaching content, delivery and assessment. Core learning objectives were country dependent; however, a common thread was the importance of skin cancer teaching and recognition that diversity and cultural competence need greater fostering. Various methods of delivery and assessment were identified, including computer-aided and online, audiovisual, clinical immersion, didactic, simulation and peer-led approaches. The advantages and disadvantages of each need to be weighed when deciding which is most appropriate for a given learning outcome. The broader teaching-learning ecosystem is influenced by (i) community health needs and medical school resources, and (ii) the student and their ability to learn and perform. Efforts to optimize dermatology education may use this review to further investigate and adapt teaching according to local needs and context. abstract_id: PUBMED:10397480 Medical students' knowledge, attitudes, skills, and practices of cancer prevention and detection. Background: Surveys of U.S. physicians show deficiencies in cancer detection and counseling skills. Thus, there is a compelling need to provide skills teaching during medical school for cancers with preventable mortality and for counseling techniques for smoking prevention and cessation. Methods: In advance of the integration of initiatives for cancer education into the medical school curriculum, the authors conducted a baseline survey of students' knowledge, attitudes, skills, practices, observation, and training (KASPOT) related to cancer education. Eighty-one percent of Boston University School of Medicine students (n = 499) completed surveys. Results: The students reported higher levels of KASPOT for breast and cervical cancers, compared with skin cancer examination or tobacco use cessation or prevention counseling. More than half of third- and fourth-year students reported that too little emphasis was given to cancer control education. Conclusions: It appears that students' practice and skills for detection of the most common cancer (skin cancer), and for cancers with the greatest mortality (tobacco-related cancers) are deficient. Revisions in medical students' curricula should seek to address these shortcomings. abstract_id: PUBMED:26410358 The Skin Cancer Objective Structured Clinical Examination (SCOSCE): A multi-institutional collaboration to develop and validate a clinical skills assessment for melanoma. Background: Assessing medical students on core skills related to melanoma detection is challenging in the absence of a well-developed instrument. 
Objective: We sought to develop an objective structured clinical examination for the detection and evaluation of melanoma among medical students. Methods: This was a prospective cohort analysis of student and objective rater agreement on performance of clinical skills and assessment of differences in performance across 3 schools. Results: Kappa coefficients indicated excellent agreement for 3 of 5 core skills, including commenting on the presence of the moulage (k = 0.87, 95% confidence interval 0.77-0.96), obtaining a history for the moulage (k = 0.84, 95% confidence interval 0.74-0.94), and making a clinical impression (k = 0.80, 95% confidence interval 0.68-0.92). There were no differences in performance across schools with respect to 3 of 5 core skills: commenting on the presence of the moulage (P = .15), initiating a history (P = .53), and managing the suspicious lesion (P value range .07-.17). Overall, 54.2% and 44.7% of students commented on the presence of the moulage and achieved maximum performance of core skills, respectively, with no difference in performance across schools. Limitations: Limitations include the overall sample size of students and schools. Conclusion: The Skin Cancer Objective Structured Clinical Examination represents a potentially important instrument to measure students' performance on the optimal step-by-step evaluation of a melanoma. abstract_id: PUBMED:22281164 Recognition of melanoma: a dermatologic clinical competency in medical student education. Background: Non-dermatologist physicians are well positioned for opportunistic melanoma detection; however, education in the skin cancer examination is limited during medical school and traditionally lecture-based. Simulating melanoma cases provides a means to demonstrate whether proficiency in knowledge and recognition of melanoma images translates into improved clinical skill. Objective: To evaluate medical student recognition of and appropriate response to a prosthetic melanoma placed on a standardized patient (SP) during a simulated clinical encounter. Methods: In this pilot study, prosthetic mimics of melanoma were placed on the backs of SPs unbeknownst to a convenience sample of 59 second-year medical students. The study took place during clinical skills practice sessions with SPs conducted from February to April 2010 at Mount Sinai School of Medicine (New York, NY). SPs presented with non-dermatologic chief complaints typical for an acute office visit. All students had the opportunity to attend a lecture on the clinical signs of melanoma 2 to 4 months earlier, for which pre-test and post-test data were collected. Results: The main outcome was recognition and evaluation of the prosthetic melanoma, as determined by querying the SPs and reviewing the students' examination notes. During the SP encounter, 37 students (63%) asked about the melanoma moulage; of those, 25 (68%) made recommendations for further evaluation. The moulage was documented in 17 examination notes (43%). Thirty-three students (56%) asked about the skin on review of systems, although this did not predict moulage detection. Conclusions: Prosthetic mimics of melanoma are useful tools for assessing skin cancer awareness and detection skills among medical students. abstract_id: PUBMED:22827632 Medical students and sun prevention: knowledge and behaviours in France. Background: High sun exposure is a major risk factor for skin cancer, and physicians are first-line players in a sun-risk prevention campaign.
Objectives: To survey medical students at the end of their general medical education about their knowledge and behaviours concerning natural and artificial sun risk and its prevention. Methods: A 32-question survey was e-mailed to fifth- or sixth-year medical school students or first-year residents attending residency-exam preparatory courses in Paris. Results: Among 1,374 students, 570 (41.7%) completed the questionnaires. General aspects of sun-protection measures were known to >75%, but responses to specific questions, mainly the impact of environmental conditions on sun risk, were frequently erroneous. Although most students declared using sunscreen and avoiding peak hours, 39% never or exceptionally (<1/year) performed skin self-examination. Fewer than one-third wore long sleeves or a cap/hat in the sun, and tanning beds were used by 13.5%, but the regulations governing them remained unknown to 30-68% of students. Conclusion: The sixth year of medical school signals the end of non-specialized training. Students' results and practices were almost comparable to those of the French general population. Medical schools might not be providing adequate sun-protection education programmes, perhaps diminishing the ability of future doctors to educate patients. abstract_id: PUBMED:33264527 The Matter of the Missing Melanoma: A Case Series. Melanoma of the skin is the fifth most common malignancy in the U.S. In 2020, an estimated 6,850 people will die from melanoma of the skin. Further, given the broad distribution in ages affected, melanoma has a tremendous burden on health care costs and years of productive life lost. Every year, melanoma cases may go undetected, or have a significant delay in diagnosis, which may lead to poor outcomes. Here, three cases of melanoma encountered in the author's first clinical year of medical school, each of which had been overlooked, misdiagnosed, or lost to follow-up, are reported. Implementing an integrated skin exam into every physical examination for high-risk patients and introducing preventative healthcare education into communities could help alleviate the burden of delay in diagnoses. abstract_id: PUBMED:26063475 Correspondence between pigmented lesions identified by melanoma patients trained to perform partner-assisted skin self-examination and dermatological examination. Background: Skin self-examination (SSE) training interventions can increase understanding of melanoma early detection criteria and promote SSE. However, there remains a need to evaluate whether intervention participants can apply such early detection skills to accurately identify concerning, or potentially malignant, pigmented lesions during full-body SSE. Methods: We assessed SSE accuracy using data from a randomized controlled trial of an SSE skills training intervention designed to promote partner-assisted SSE among melanoma patients. In the trial, patient-partner pairs were administered the training intervention and performed monthly SSE to identify, evaluate, and track concerning pigmented skin lesions. Patients received a total body skin examination by a dermatologist approximately 4 months postintervention. SSE accuracy was assessed as the correspondence between the specific concerning pigmented lesions identified by 274 study pairs during SSE and those identified during dermatological examination. We also examined whether lesions that were biopsied during the study had been identified during SSE prior to biopsy.
Results: Approximately three in four of the concerning lesions identified by pairs during SSE were also identified during the dermatological exam. Eighty-one biopsies were performed during the study, and pairs had identified 73% of the corresponding lesions during SSE. Of the five melanomas detected, three were identified during SSE. Conclusion: Melanoma patients and partners taught to perform SSE using an evidence-based program achieved a high degree of correspondence with the study dermatologist in identifying concerning lesions. Impact: This study provides novel evidence that supports the accuracy of full-body SSE for patient identification of concerning lesions. Answer: Yes, the skin cancer examination should be taught in medical school. Evidence suggests that early detection of melanoma significantly improves survival rates, with thin melanomas associated with a greater than 95% survival rate compared to less than 10% for more deeply invasive melanomas (PUBMED:12224981). However, many nondermatologists, who often see patients with skin lesions first, do not routinely perform complete skin examinations or counsel patients about skin cancer prevention (PUBMED:12224981). Medical students' self-reported skill levels in skin cancer examinations are low, with many never having observed, received training in, or practiced the examination (PUBMED:12224981). Curricular factors such as physical diagnosis sessions, clinical encounters, and opportunities to observe the skin cancer examination (SCE) are associated with increased confidence and intent to perform the SCE among medical students (PUBMED:24898482). Moreover, students who receive training in the SCE are more likely to rate themselves as skilled in performing the examination (PUBMED:16618862). A scoping review also highlights the importance of skin cancer teaching in medical school and suggests that various methods of delivery and assessment can be adapted according to local needs and context (PUBMED:36753386). Additionally, medical students' knowledge and practices concerning sun prevention and skin cancer are not adequate, indicating a need for improved education in medical schools (PUBMED:22827632).
Instruction: Is extracorporeal shock wave lithotripsy the preferred treatment option for elderly patients with urinary stone? Abstracts: abstract_id: PUBMED:28442896 Comparison of safety and outcomes of shock wave lithotripsy between elderly and non-elderly patients. Background: This study compared the clinical outcomes of extracorporeal shock wave lithotripsy between elderly (aged $65 years) and non-elderly (aged &lt;65 years) patients. Methods: A retrospective review of medical records was performed on 483 (non-elderly: 245, elderly: 238) patients with upper urinary tract stones who underwent shock wave lithotripsy between 2007 and 2015. The demographic data, stone parameters, stone-free rate, retreatment rate, and complication rate were analyzed in both elderly and non-elderly patient groups. Results: There was no significant difference between non-elderly and elderly patients in terms of stone-free rate (46.5% vs 41.1%, P&gt;0.05) regardless of stone site or stone size and overall retreatment rate (41.6% vs 37.0%, P&gt;0.05). Elderly patients had a higher complication rate than non-elderly patients (15.5% vs 23.5%, P=0.026). The most common complication was flank pain. Receiver operating characteristic curves predicted that elderly patients (cutoff value: 65 years of age) had a higher risk of complications and that patients with smaller stones (cutoff value: 0.8 cm) had a higher stone-free rate. Conclusion: This study showed that elderly patients with upper urinary tract stones undergoing shock wave lithotripsy had comparable efficacy for stone-free rates and retreatment rates, but higher complication rates. abstract_id: PUBMED:31179884 The clinical role of X-ray computed tomography to predict the clinical efficiency of extracorporeal shock wave lithotripsy. Aim: To evaluate the clinical efficiency of computed tomography for diagnostics of patients with urolithiasis and the choice of treatment strategy. Material And Methods: The study was carried out at the Urological Clinic of I.M. Sechenov First Moscow State Medical University and included 1044 patients with urinary stones. The ultimate goal of this study was to predict the clinical efficiency of extracorporeal shock wave lithotripsy using a combination of computed tomography and densitometry. Extracorporeal shock wave lithotripsy was performed on "Siemens Lithostar Plus," "Siemens Modularis Uro," and "Dornier Gemini" lithotripters. Statistical analysis of clinical data included evaluation of individual sampling groups and calculation of weighted arithmetic mean ( M). Results: The efficiency of extracorporeal shock wave lithotripsy has been determined primarily using X-ray analysis of the concrement outlines and the structure (homogeneous or heterogeneous) of its central zone. However, in terms of efficiency and repetition rate (the number of fragmentation procedures required for complete clearance) of extracorporeal shock wave lithotripsy, the mean density of the concrement along the whole length of its three-dimensional structure (expressed in Hounsfield units) appeared to be the most reliable and informative predictive index in this study. 
Conclusion: The combination of computed tomography with densitometry in the treatment of patients with urolithiasis allows one (1) to determine the exact localization, size, X-ray structure, and structural density of urinary stones and (2) to predict, on the basis of densitometric data histograms, the clinical efficiency and repetition rate of extracorporeal shock wave lithotripsy with due regard to the X-ray structure of the peripheral and central zones and the mean density (in Hounsfield units) of urinary concrements. abstract_id: PUBMED:37728800 Experience of high polymer gel pad assisted ultrasound monitoring in the treatment of infant urolithiasis during extracorporeal shock wave lithotripsy. During extracorporeal shock wave lithotripsy for infants, we used a medical polymer gel pad to assist ultrasonic positioning, so that the ultrasonic probe could be kept away from the shock wave energy field. We evaluated the protective effect of this method on the ultrasonic probe and whether it affected ultrasonic positioning and monitoring. A retrospective analysis was made of 21 infants (0-3 years old) who received ESWL in our hospital from June 2021 to February 2023. After the stones were accurately located by B-ultrasound before surgery, a 4 × 5 × 10 cm medical polymer gel pad was placed between the skin and the ultrasonic probe to keep the probe away from the shock wave energy field. The B-ultrasound probe locked onto the target stone through the gel pad, and the Dornier Compact Delta II lithotripter was used for lithotripsy. The extracorporeal shock wave lithotripsy was completed under continuous B-ultrasound monitoring. All patients completed the surgery under ultrasound monitoring, and no abnormalities of the ultrasound probe occurred during the surgery. The average stone size was 0.60 ± 0.21 cm, the surgical time was 39.8 ± 13.8 min, and the total energy of lithotripsy was 7.41 ± 4.35 J. There were no obvious complications in any patient after the surgery. On ultrasound examination after 2 weeks, the success rate of lithotripsy in the 21 patients reached 85.7%. We believe that the use of the gel pad increases the distance between the ultrasonic probe and the skin, keeping the probe away from the shock wave energy field and avoiding damage to the probe from the shock wave source, without affecting the ultrasound monitoring of the stones or the success rate of lithotripsy; this approach is worthy of further promotion in the field of children's urinary stones. abstract_id: PUBMED:31387108 Particularities and Efficacy of Extracorporeal Shock Wave Lithotripsy in Children. Background: Extracorporeal shock wave lithotripsy (ESWL) was first introduced in the paediatric population in 1986. Given the more frequent recurrence in children compared to adults, treatment of urinary stones in children should rely on minimally invasive methods. In this study, we aimed to evaluate the profile of the young patient with lithiasis who can benefit from ESWL, analysing the experience of 2 clinical departments. Materials And Methods: We retrospectively reviewed the medical records of 54 children who underwent ESWL for urolithiasis. ESWL success was defined as stone-free status or the presence of clinically insignificant residual fragments. Data were analysed using STATA 14.2. Results: In our study, the incidence of renal-ureteral calculi was significantly higher in girls (68.5%) compared to boys (31.5%).
In total, 83.3% of patients showed a favourable outcome after treatment, and the remaining 16.7% showed minimal complications. The presence of complications and remaining calculi was correlated with the children's age. The overall stone-free rate was 88.9%. For a calculus of 8.5 mm, only one ESWL session is recommended. Conclusions: The high percentage of cases with a favourable outcome indicates that ESWL treatment is effective, considering the minimal cost, minimal invasiveness, repeatability and no need for general anaesthesia. abstract_id: PUBMED:11956471 Extracorporeal shock wave lithotripsy in children. Purpose: We analyzed the efficacy of extracorporeal shock wave lithotripsy in children. Materials And Methods: Between 1987 and 2000, 15,423 patients with urinary calculi were treated at our institution, of whom 86 (0.56%) children 3 to 14 years old underwent lithotripsy using the Siemens Lithostar Plus (Siemens Medical Systems, Iselin, New Jersey). A total of 121 calculi in 90 urinary tracts were treated, requiring 149 extracorporeal shock wave lithotripsy sessions. The stones were caliceal in 62 cases, renal pelvic in 35, ureteral in 18 and staghorn in 6. Followup consisted of nephrotomogram or ultrasound 1 day and 1 to 3 months postoperatively. Results: Complete removal of all stone fragments was achieved in 97.6% of the stones after 3 months. Re-treatment was necessary in 24 patients (27.9%). All patients were treated as outpatients, with intravenous sedation in 54 (62.8%), general anesthesia in 13 (15.1%) and no anesthesia in 19 (22.1%). Complications were present in 8 patients (10.7%) who had colic and received medical treatment followed by uneventful recovery. Conclusions: Extracorporeal shock wave lithotripsy using the Lithostar Plus has been demonstrated to be an effective noninvasive procedure to treat radiopaque and even radiolucent or slightly opaque urinary calculi in children. abstract_id: PUBMED:3786688 Extracorporeal shock-wave lithotripsy. Extracorporeal shock-wave lithotripsy is a new technique for treatment of symptomatic renal and upper ureteral calculi. After biplanar fluoroscopic localization of the targeted calculus, serial shock waves cause calculus disintegration, with subsequent spontaneous transurethral passage of the fragments. Excellent clinical results and patient enthusiasm have resulted. Physicians using extracorporeal shock-wave lithotripsy must also be adept at percutaneous, ureteroscopic, and standard surgical stone removal methods to deal with complex clinical stone presentations. abstract_id: PUBMED:11111524 Management of the extracorporeal shock wave lithotripsy patient. Over 2 million Americans experience kidney and urinary stone disease each year. Early treatments resulted in high mortality and morbidity rates. With the advent of extracorporeal shock wave lithotripsy less than 20 years ago, treatment for this disease has become far safer, with more rapid recovery and fewer complications. The selection of patients eligible for extracorporeal shock wave lithotripsy is dependent on the location and size of the stones and the overall health of the patient. This article discusses the different treatment modalities used for stone disease and the different methods currently available for extracorporeal shock wave lithotripsy. Preprocedure preparation of the patient and postoperative care for this population are reviewed in detail.
abstract_id: PUBMED:26251744 Pain control using pethidine in combination with diazepam compared to diclofenac in combination with hyoscine-N-butyl bromide in patients undergoing extracorporeal shock wave lithotripsy. Introduction: Extracorporeal Shock Wave Lithotripsy (ESWL) remains the preferred least invasive treatment for urinary tract stones. The main purpose of this study was to compare two treatment modalities for pain control during the ESWL procedure. Material And Methods: From 2013 to 2014, 220 patients received ESWL for kidney stones. Before the procedure, weight and height were measured to determine the body mass index (BMI); in addition, oxygen saturation, the pulse of the patients, and pain level were determined. The pain control provided included two different methods: diclofenac sodium plus hyoscine-N-butyl bromide in the first group and pethidine plus diazepam in the second group. The pain level of the patients was determined using two different scales: the Wong-Baker and the Visual Analogue scales (VAS). At the end of three sessions, all patients were evaluated for the stone fragmentation rate by plain abdominal X-ray, and the findings were recorded and analyzed. Results: A total of 220 patients were enrolled in this study. There were 91 patients in the first group (diclofenac sodium + hyoscine-N-butyl bromide) (male/female: 63/28) and 129 patients (male/female: 83/46) in the second group (pethidine HCl + diazepam). The mean age with SD in each group was 42.03 (±16.43) and 42.56 (±14.23) years, respectively (p = 0.8). With regard to pain scores (using the Wong-Baker and VAS scales), the responses were significantly lower in the second group (p < 0.001). Conclusions: Pethidine in combination with diazepam was superior to diclofenac in combination with hyoscine-N-butyl bromide for pain control in patients undergoing ESWL. abstract_id: PUBMED:32897020 Characteristics and efficiency of extracorporeal shock wave lithotripsy in children using ultrasound guidance. Introduction: The aim of the study was to retrospectively evaluate the efficiency of extracorporeal shock wave lithotripsy (ESWL) with ultrasound guidance and the results of treatment of children with urolithiasis in different age groups, depending on the stone location and size. Materials And Methods: A total of 135 children and adolescents, aged from 1 to 18 years, treated by ESWL at IZMIRLIAN Medical Center from 2005 to 2016 were enrolled in this study. The number, size and location of stones were evaluated. A total of 153 ESWL sessions were performed, including two sessions in 9 cases, three sessions in 1 case, and four sessions in 2 cases. Intra- and posttreatment stone fragmentation rate, number of ESWL sessions, shock-wave pulses per session, stone-free rate, and number of ancillary procedures depending on the stone location and size, as well as radiation exposure, were evaluated. Follow-up observation was performed after 14, 30 and 90 days. Results: In most cases, there was good stone fragmentation after ESWL. From 200 to 3000 low-energy pulses with a frequency of 1-2 Hz and a power of 30-50 units on the Modulith SLK scale were delivered. A number of shock-wave pulses of up to 1000, 2000 and 3000 was sufficient in 18.3%, 51.9% and 26.8% of cases, respectively. The SFR in children aged 1-5 years was 95.2%, compared to 100%, 100% and 83.7% in patients in the 6-10, 11-15 and 16-18 year age groups, respectively. Complete stone clearance was achieved in patients aged 1-5 and 6-10 years after two weeks.
In only one patient in the 11- to 15-year age group did all fragments pass after two weeks, while in 4 adolescents in the 16- to 18-year age group SFR was achieved after more than 3 months. In addition, in another 3 patients in this age group ESWL was unsuccessful. Conclusion: Our results suggest high efficiency of ESWL under ultrasound guidance in children and adolescents. The absence of X-ray exposure, minimal complication rate, non-invasiveness and high efficiency allow ESWL to be considered a method of choice for the treatment of urolithiasis in children and adolescents. abstract_id: PUBMED:14639848 Treatment of childhood lithiasis using extracorporeal shock wave lithotripsy. Objectives: We evaluated the efficiency of extracorporeal shock wave lithotripsy (ESWL) in the treatment of pediatric urolithiasis, analyzing results and complications in our series. Methods/results: 20 patients (16 males and 4 females) with ages from 2 to 17 years were evaluated. Twelve patients presented left-side lithiasis (60%). The size of the stone was measured as an area in cm2. 6 cases (30%) presented with lower calyx stones and 5 cases (25.9%) with stones in the pelvic ureter. 2 patients required JJ catheter insertion. The mean number of shock waves per session was 2,650 (range: 1,000-3,800). The Dornier Lithotripter S and Dornier MFL-5000 lithotripsy machines were used in the range of 10-120 kV and 14-23 kV, respectively. Follow-up evaluation was performed at one week and three months from treatment, including abdominal X-ray to obtain fragmentation and elimination percentages. In our series, total fragmentation was achieved in 100% of the cases, without any patient with partial fragmentation (fragments > 5 mm). Elimination percentages were satisfactory, with 65% total elimination and 30% partial elimination (residual fragments < 5 mm) within one week. One patient had no elimination at all. Conclusions: Based on the results obtained in our series, we consider extracorporeal shock wave lithotripsy the first treatment option in cases of pediatric urolithiasis. Answer: Extracorporeal shock wave lithotripsy (ESWL) is a treatment option for patients with urinary stones, including the elderly population. However, the preference for ESWL as a treatment option for elderly patients with urinary stones depends on various factors, including the safety and outcomes of the procedure in this specific age group. According to a study comparing clinical outcomes of ESWL between elderly (aged ≥65 years) and non-elderly (aged <65 years) patients, elderly patients had comparable efficacy for stone-free rates and retreatment rates but higher complication rates compared to non-elderly patients. The most common complication was flank pain. This suggests that while ESWL can be effective in elderly patients, there is an increased risk of complications (PUBMED:28442896). The decision to use ESWL in elderly patients should also consider individual patient factors, such as the location and size of the stones, the overall health of the patient, and the presence of comorbidities. The selection of patients eligible for ESWL is dependent on these factors (PUBMED:11111524). In general, ESWL is a less invasive treatment option for urinary tract stones and has become safer with more rapid recovery and fewer complications since its advent (PUBMED:11111524).
However, the higher complication rate in elderly patients indicates that careful consideration and patient selection are necessary when considering ESWL as a treatment option for this population (PUBMED:28442896). In conclusion, while ESWL is a treatment option for urinary stones in elderly patients, it is not necessarily the preferred option for all individuals in this age group. The decision should be made on a case-by-case basis, taking into account the specific risks and benefits for each patient.
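Claims such as "comparable stone-free rates but higher complication rates" rest on comparisons of two proportions. As a hedged illustration only, the Python sketch below shows a standard two-proportion z-test; the counts are hypothetical, chosen purely to demonstrate the arithmetic, and are not taken from the cited study.

```python
import math

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-sided z-test for a difference between two independent proportions."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal approximation, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: complications in elderly vs. non-elderly ESWL patients
z, p = two_proportion_z(events_a=30, n_a=100, events_b=18, n_b=120)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```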
Instruction: Is immunising all patients with chronic lung disease in the community against influenza cost effective? Abstracts: abstract_id: PUBMED:9578860 Is immunising all patients with chronic lung disease in the community against influenza cost effective? Evidence from a general practice-based clinical prospective cohort study in Utrecht, The Netherlands. Study Objective: There is little information on the potential benefit of immunising all patients with chronic lung disease in the community against influenza. The clinical effectiveness and economic benefit of the influenza vaccination programme were established in a general practice-based cohort of adult patients with chronic lung disease followed up during the 1995/96 influenza A epidemic. Design: A prospective cohort study from October 1995 to March 1996. Setting: The study was undertaken in the Utrecht General Practices Network with six large group practices, covering a total population of approximately 50,000 patients in the Netherlands. Patients: Computerised medical records of 1696 patients with chronic lung disease aged over 18 years with an indication for vaccination according to the Dutch GP guidelines were reviewed. Main Results: The overall attack rate of any complication, including all-cause death, lower respiratory tract infection, and acute cardiac disease, was 15%. Exacerbations of lung disease were most frequent (13%). Death, pneumonia, and acute cardiac disease were mainly limited to patients aged ≥65 years. No effectiveness of the immunisation programme could be established in patients aged 18-64 years (n = 1066), after controlling for baseline prognosis in multivariable logistic regression analysis. In vaccinees aged ≥65 years (n = 630), the occurrence of any complication was reduced by 50% (95% CI 17-70%). The economic benefit was estimated at 50 Pounds per elderly vaccinee. Conclusions: This study suggests that in the Netherlands immunisation of elderly patients with chronic lung disease against influenza is effective and cost-saving; hence these patients should be given high priority. More, preferably experimental, studies are needed to establish whether adult lung patients under 65 years in the community will also benefit from vaccination. abstract_id: PUBMED:33855100 Clinical Features and Outcomes of Community-Acquired Pneumonia Caused by Haemophilus Influenzae. Background: Long regarded as the second most common cause of community-acquired pneumonia (CAP), Haemophilus influenzae has recently been identified almost as frequently as pneumococcus in patients hospitalized for CAP. The literature lacks a detailed description of the presentation, clinical features, laboratory and radiologic findings, and outcomes in Haemophilus pneumonia. Methods: During 2 prospective studies of patients hospitalized for CAP, we identified 33 patients with Haemophilus pneumonia. In order to provide context, we compared clinical findings in these patients with findings in 36 patients with pneumococcal pneumonia identified during the same period. We included and analyzed separately data from patients with viral coinfection. Patients with coinfection by other bacteria were excluded. Results: Haemophilus pneumonia occurred in older adults who had underlying chronic lung disease, cardiac conditions, and alcohol use disorder, the same population at risk for pneumococcal pneumonia.
However, in contrast to pneumococcal pneumonia, patients with Haemophilus pneumonia had less severe infection as shown by absence of septic shock on admission, less confusion, fewer cases of leukopenia or extreme leukocytosis, and no deaths at 30 days. Viral coinfection greatly increased the severity of Haemophilus, but not pneumococcal, pneumonia. Conclusions: We present the first thorough description of Haemophilus pneumonia, show that it is less severe than pneumococcal pneumonia, and document that viral coinfection greatly increases its severity. These distinctions are lost when the label CAP is liberally applied to all patients who come to the hospital from the community for pneumonia. abstract_id: PUBMED:37598051 Impact of viral detection in patients with community-acquired pneumonia: An observational cohort study. Purpose: The presence of a respiratory virus in patients with community-acquired pneumonia (CAP) may have an impact on the bacterial etiology and clinical presentation. In this study we aimed to assess the role of viral infection in the bacterial etiology and outcomes of patients with CAP. Methods: We performed a retrospective study of all adults hospitalized with CAP between November 2017 and October 2018. Patients were classified according to the presence of viral infection. Univariate and multivariate analyses were performed to identify variables associated with viral infection and clinical outcomes. Results: Overall, 590 patients were included. A microorganism was documented in 375 cases (63.5%). A viral infection was demonstrated in 118 (20%). The main pathogens were Streptococcus pneumoniae (35.8%), Staphylococcus aureus (2.9%) and influenza virus (10.8%). A trend toward a higher rate of S. aureus (p = 0.06) in patients with viral infection was observed. Patients with viral infection more often had bilateral consolidation patterns (17.8% vs 10.8%, p = 0.04), respiratory failure (59.3% vs 42.8%, p = 0.001), ICU admission (17.8% vs 7%, p = 0.001) and invasive mechanical ventilation (9.3% vs 2.8%, p = 0.003). Risk factors for respiratory failure were chronic lung disease, age >65 years, positive blood cultures and viral infection. Influenza virus, but no other respiratory virus, was associated with respiratory failure (OR, 3.72; 95% CI, 2.06-6.73). Conclusions: Our study reinforces the idea that viral coinfection has an impact on the clinical presentation of CAP, causing a more severe clinical picture. This impact seems to be mainly due to influenza virus infection. abstract_id: PUBMED:12242314 Hospitalizations and deaths due to respiratory illnesses during influenza seasons: a comparison of community residents, senior housing residents, and nursing home residents. Background: Although research indicates that influenza is a major cause of morbidity and mortality among older adults, few studies have tried to identify which seniors are particularly at risk of experiencing complications of influenza. The purpose of this study was to compare hospitalizations and deaths due to respiratory illnesses during influenza seasons among seniors (aged 65+) living in the community, senior residences (apartments reserved for seniors), and nursing homes.
Methods: Using administrative data, all hospital admissions and deaths due to respiratory illnesses (pneumonia and influenza, chronic lung disease, and acute respiratory diseases) were identified for all individuals aged 65 and older living in Winnipeg, Canada (approximately 88,000 individuals) during four influenza seasons (1995-1996 to 1998-1999). Results: Hospitalization and death rates for respiratory illnesses increased significantly during influenza seasons, compared to fall periods (e.g., 42.7 vs 25.2 hospitalizations per 1000 population aged 80 and older). Moreover, hospitalization rates for pneumonia and influenza, chronic lung disease, and acute respiratory diseases were higher among individuals living in senior residences (42.5 per 1000 for all respiratory illnesses combined) than their counterparts living in the community (22.8 per 1000). Furthermore, deaths due to pneumonia and influenza and chronic lung disease were higher among senior housing residents (4.2 per 1000) than community residents (2.6 per 1000) and were particularly high among nursing home residents (52.1 per 1000). Conclusions: Individuals living in seniors residences are at increased risk of being hospitalized for and dying of respiratory illnesses during influenza seasons. Given that influenza vaccination is currently the best method to reduce influenza-associated illnesses among seniors, this suggests that influenza vaccination strategies should be targeted at this population. abstract_id: PUBMED:26631055 Risk factors for hospital admission in the 28 days following a community-acquired pneumonia diagnosis in older adults, and their contribution to increasing hospitalisation rates over time: a cohort study. Objectives: To determine factors associated with hospitalisation after community-acquired pneumonia (CAP) among older adults in England, and to investigate how these factors have contributed to increasing hospitalisations over time. Design: Cohort study. Setting: Primary and secondary care in England. Population: 39,211 individuals from the Clinical Practice Research Datalink, who were eligible for linkage to Hospital Episode Statistics and mortality data, were aged ≥ 65 and had at least 1 CAP episode between April 1998 and March 2011. Main Outcome Measures: The association between hospitalisation within 28 days of CAP diagnosis (a 'post-CAP' hospitalisation) and a wide range of comorbidities, frailty factors, medications and vaccinations. We examined the role of these factors in post-CAP hospitalisation trends. We also looked at trends in post-CAP mortality and length of hospitalisation over the study period. Results: 14 comorbidities, 5 frailty factors and 4 medications/vaccinations were associated with hospitalisation (of 18, 12 and 7 considered, respectively). Factors such as chronic lung disease, severe renal disease and diabetes were associated with increased likelihood of hospitalisation, whereas factors such as recent influenza vaccination and a recent antibiotic prescription decreased the odds of hospitalisation. Despite adjusting for these and other factors, the average predicted probability of hospitalisation after CAP rose markedly from 57% (1998-2000) to 86% (2009-2010). Duration of hospitalisation and 28-day mortality decreased over the study period. Conclusions: The risk factors we describe enable identification of patients at increased likelihood of post-CAP hospitalisation and thus in need of proactive case management. 
Our analyses also provide evidence that while comorbidities and frailty factors contributed to increasing post-CAP hospitalisations in recent years, the trend appears to be largely driven by changes in service provision and patient behaviour. abstract_id: PUBMED:17549758 Computer-based multivariate economic analysis of neonatal-intensive-care-unit-based influenza vaccine administration to parents in a low-socio-economic, urban setting. Objective: Trivalent inactivated influenza vaccine has been shown to reduce the number of influenza-related outpatient visits and hospitalizations of children up to 24 months of age. The American Academy of Pediatrics, Centers for Disease Control, and Advisory Committee on Immunization Practices recommend that the influenza vaccine be administered to the first-person contacts of infants less than 6 months of age. However, the economic implications of increasing immunization rates by using the neonatal intensive care unit (NICU) as an arena to capture the parents of these infants have not been fully evaluated. We sought to examine the direct and indirect costs of a program to administer the influenza vaccine to parents in the NICUs of a cohort of tertiary-care units primarily serving a low socioeconomic population. Methods: The probabilities of infants being hospitalized because of infection from contact, and the efficacy of prophylaxis used in the present study, were based on published results where possible, with an estimated 10% reduction in hospitalization of patients whose parents had received the vaccine. Variables in the 3- and 4-tiered analyses included chronic lung disease status, estimation of whether patients had siblings, vaccination status of siblings, seroconversion rate of vaccine, and parental vaccination status. Two thousand six hundred and thirty-two patients were analyzed using 2003 admission data from the New York City Regional Perinatal Center, which encompasses 11 level III NICUs. Hospitalization costs, indirect costs, and outpatient costs were assessed using previously published standard calculations. Results: On the basis of this computer model, costs were $188 per patient per influenza season, including $6.80 per patient in outpatient costs. Administration of an NICU-based influenza vaccine increased costs to $200 per patient per influenza season, but decreased outpatient costs to $1.40 per patient. For cost savings to equal costs of vaccine administration, there must be either a 20% reduction in influenza hospitalizations of NICU patients or an increase in the sample size per influenza season to 4000 patients. Conclusions: The cost of administration of the influenza vaccine to parents of NICU patients is higher than the financial burden of influenza in this population as long as the sibling immunization rate remains low. Cost savings do not occur until the treated cohort increases to 4000 patients or the incidence of lung disease in this population increases. Further studies are needed to validate the cost savings of this model and more accurately assess the financial savings. abstract_id: PUBMED:10715759 Targeting pneumococcal vaccination to high-risk groups: a feasibility study in one general practice. The Department of Health recommends pneumococcal vaccination opportunistically or when immunising against influenza. This was a study in one general practice to assess the feasibility of targeting patients for pneumococcal vaccination in primary care.
We also examined the rate of uptake of pneumococcal vaccine in identified risk groups after one year of a pneumococcal vaccination programme. A self-administered questionnaire was given to patients attending for influenza vaccine between September and December 1996. A total of 551/747 (73.8%) patients returned completed questionnaires. Few patients receiving influenza vaccination (133/509, 26%) were aware of pneumococcal vaccine. Only 55/108 (51%) of those given influenza vaccination were in a clinical risk group for pneumococcal vaccine. Attitudes towards vaccination were more positive, and intention to take up pneumococcal vaccination was significantly greater, in high-risk patients compared to those who were not in a risk group. A targeted vaccination campaign over one year, directed at high-risk patients both opportunistically and when they attended for influenza vaccination, resulted in the following proportions of patients in at-risk groups being vaccinated: coronary disease 144/312 (46%), diabetes 79/132 (60%), splenectomy 2/2 (100%), chronic obstructive airways disease and asthma 135/700 (19%), and chronic renal failure 5/9 (56%). Most doses of pneumococcal vaccine (336/463; 73%) were delivered to patients in high-risk groups. We conclude that a well-organised pneumococcal vaccination campaign can improve coverage of at-risk patients in general practice. Programmes to increase patient awareness of the vaccine, improved availability of vaccine, and practice guidelines would help to target the vaccine to at-risk patients. Patients with chronic lung disease and asthma were particularly difficult to define and target in this study. A review of the UK guidelines, aligning those for pneumococcal and influenza vaccination and including patients over 65 years, would improve the logistics of vaccine delivery. abstract_id: PUBMED:10665892 The health and economic benefits associated with pneumococcal vaccination of elderly persons with chronic lung disease. Background: More than 50% of the elderly population has not received pneumococcal vaccination. Uncertainty regarding the benefits of immunization, particularly for noninvasive disease, may contribute to the underuse of pneumococcal vaccine. Objective: To assess the health and economic benefits associated with pneumococcal vaccination. Methods: We conducted a 2-year retrospective cohort study among all elderly members of a staff-model managed care organization who had a baseline diagnosis of chronic lung disease. The study outcomes were assessed over 2 years, from November 15, 1993, through November 14, 1995, and included hospitalizations for pneumonia and influenza, death, and hospitalization costs. Using administrative data, we compared these outcomes for vaccinated and unvaccinated subjects using multivariate models to control for subjects' baseline demographic and health characteristics. The additive benefits of combined influenza and pneumococcal vaccination were also assessed for the 2 influenza seasons included in the study. Results: There were 1898 subjects. Pneumococcal vaccination was associated with significantly lower risks for pneumonia hospitalizations (adjusted risk ratio [RR], 0.57; 95% confidence interval [CI], 0.38-0.84; P = .005) and for death (adjusted RR, 0.71; 95% CI, 0.56-0.91; P = .008). For the control outcome of all nonpneumonia hospitalizations, rates did not differ significantly between the 2 groups (adjusted RR, 0.91; 95% CI, 0.77-1.07; P = .24).
During the influenza seasons included in the study, the benefits of pneumococcal and influenza vaccinations were additive, with an adjusted RR of 0.28 (95% CI, 0.14-0.58; P < .001) for the number of hospitalizations for pneumonia and influenza among persons who had received both vaccinations compared with those who had received neither, and an adjusted odds ratio of 0.18 (95% CI, 0.11-0.31; P < .001) for death. Over the 2-year outcome period, pneumococcal vaccination was also associated with direct medical care cost savings. Conclusions: Pneumococcal vaccination of elderly persons with chronic lung disease was associated with fewer hospitalizations for pneumonia, fewer deaths, and direct medical care cost savings. abstract_id: PUBMED:29255353 Pneumococcal vaccination and chronic respiratory diseases. Patients with COPD and other chronic respiratory diseases are especially vulnerable to viral and bacterial pulmonary infections, which are major causes of exacerbations, hospitalization, disease progression, and mortality in COPD patients. Effective vaccines could reduce the burden of respiratory infections and acute exacerbations in COPD patients, but what is the evidence for this? This article reviews and discusses the existing evidence for pneumococcal vaccination efficacy and its changing role in patients with chronic respiratory diseases, especially COPD. Specifically, the recent Community-Acquired Pneumonia Immunization Trial in Adults (CAPITA) showed the efficacy of pneumococcal conjugate vaccine in older adults, many of whom had additional risk factors for pneumococcal disease, including chronic lung diseases. Taken together, the evidence suggests that pneumococcal and influenza vaccinations can prevent community-acquired pneumonia and acute exacerbations in COPD patients, while pneumococcal vaccination early in the course of COPD could help maintain stable health status. Despite the need to prevent pulmonary infections in patients with chronic respiratory diseases and evidence for the efficacy of pneumococcal conjugate vaccine, pneumococcal vaccine coverage and awareness are low and need to be improved. Respiratory physicians need to communicate the benefits of vaccination more effectively to their patients who suffer from chronic respiratory diseases. abstract_id: PUBMED:11034751 Influenza vaccine for patients with chronic obstructive pulmonary disease. Background: Influenza vaccinations are currently recommended in the care of people with COPD, but these recommendations are based largely on evidence from observational studies with very few randomised controlled trials (RCTs) reported. Influenza infection causes excess morbidity and mortality in COPD patients but there is also the potential for influenza vaccination to cause adverse effects or not to be cost effective. Objectives: To evaluate the evidence from RCTs for a treatment effect of influenza vaccination in COPD subjects. Outcomes of interest were exacerbation rates, hospitalisations, mortality, lung function and adverse effects. Search Strategy: We searched the Cochrane Airways Group trials register and reference lists of articles. References were also provided by a number of drug companies we contacted. Selection Criteria: RCTs that compared live or inactivated virus vaccines with placebo, either alone or with another vaccine in persons with COPD. Studies of people with asthma were excluded. Data Collection And Analysis: Two reviewers extracted data. All entries were double checked.
Study authors and drug companies were contacted for missing information. Main Results: Nine trials were included, but only four of these were specifically performed in COPD patients. The others were conducted on elderly and high-risk individuals, some of whom had chronic lung disease. In one study of inactivated vaccine in COPD patients, there was a significant reduction in the total number of exacerbations per vaccinated subject compared with those who received placebo (weighted mean difference (WMD) -0.45, 95% confidence interval -0.75 to -0.15, p = 0.004). This difference was mainly due to the reduction in exacerbations occurring after 3 weeks (WMD -0.44, 95% CI -0.68 to -0.20, p < 0.001). The number of patients experiencing late exacerbations was also significantly lower (OR = 0.13, 95% CI 0.04 to 0.45, p = 0.002). There was no evidence of an effect of intranasal live attenuated virus when this was added to inactivated intramuscular vaccination. In studies in elderly patients (only a minority of whom had COPD), there was a significant increase in the occurrence of local adverse reactions in vaccinees, but the effects were generally mild and transient. Reviewer's Conclusions: It appears, from the limited number of studies performed, that inactivated vaccine may reduce exacerbations in COPD patients. The size of effect was similar to that seen in large observational studies, and was due to a reduction in exacerbations occurring three or more weeks after vaccination. In elderly, high-risk patients there was an increase in adverse effects with vaccination, but these are seen early and are usually mild and transient. Answer: The evidence from the provided abstracts suggests that immunizing patients with chronic lung disease against influenza can be cost-effective, particularly in certain populations. A study conducted in the Netherlands found that immunizing elderly patients (aged ≥ 65 years) with chronic lung disease against influenza was effective and cost-saving, with a 50% reduction in the occurrence of complications and an estimated economic benefit of 50 Pounds per elderly vaccinee (PUBMED:9578860). However, the same study did not establish the effectiveness of the immunization program in patients aged 18-64 years, indicating that more research is needed to determine the benefits in this age group. Another study highlighted the health and economic benefits associated with pneumococcal vaccination of elderly persons with chronic lung disease, showing fewer hospitalizations for pneumonia, fewer deaths, and direct medical care cost savings (PUBMED:10665892). This suggests that vaccination against respiratory infections can be beneficial for patients with chronic lung disease. Furthermore, a study on the influenza vaccine for patients with chronic obstructive pulmonary disease (COPD) found that inactivated vaccine may reduce exacerbations in COPD patients, with a similar size of effect to that seen in large observational studies (PUBMED:11034751). This indicates potential cost-effectiveness by reducing the number of exacerbations and associated healthcare utilization. However, one study modeling the economic implications of influenza vaccine administration to parents in neonatal intensive care units (NICUs) found that the cost of vaccine administration was higher than the financial burden of influenza unless the treated cohort increased significantly or the incidence of lung disease in the population increased (PUBMED:17549758).
This suggests that cost-effectiveness may vary depending on the population size and disease incidence. In summary, while there is evidence to support the cost-effectiveness of immunizing elderly patients with chronic lung disease against influenza, the benefits and cost-effectiveness for younger adults with chronic lung disease are less clear and may depend on various factors, including the size of the treated population and the incidence of lung disease. Further research is needed to establish the cost-effectiveness of influenza vaccination for the broader population of adults with chronic lung disease in the community.
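Several abstracts above summarize effectiveness as a risk ratio (RR) with a 95% confidence interval, such as the 50% reduction in complications (95% CI 17-70%) among vaccinees aged 65 and over. The Python sketch below shows how such an RR and interval are conventionally derived from a 2x2 table using the log-RR normal approximation; the event counts are hypothetical, chosen only so that the RR comes out at 0.50, and are not the study's actual data.

```python
import math

def risk_ratio_ci(events_exp, n_exp, events_ctl, n_ctl, z=1.96):
    """Risk ratio with a 95% CI via the log-RR normal approximation."""
    rr = (events_exp / n_exp) / (events_ctl / n_ctl)
    # Standard error of log(RR) for two independent binomial samples
    se = math.sqrt(1 / events_exp - 1 / n_exp + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: any complication in vaccinated vs. unvaccinated elderly
rr, lo, hi = risk_ratio_ci(events_exp=40, n_exp=400, events_ctl=46, n_ctl=230)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
print(f"vaccine effectiveness = {1 - rr:.0%}")  # effectiveness is 1 - RR, here 50%
```

Note that the adjusted risk ratios quoted in the abstracts additionally control for confounders via regression; this sketch covers only the unadjusted case.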
Instruction: Does antioxidant system adaptive response alleviate related oxidative damage with long term bed rest? Abstracts: abstract_id: PUBMED:19059391 Does antioxidant system adaptive response alleviate related oxidative damage with long term bed rest? Objectives: The aim of the study was to document oxidative damage and erythrocyte antioxidant enzyme activities during long term bed rest (LTBR) and recovery, while verifying the prophylactic effects of resistance exercise on LTBR-induced oxidative damage. Design And Methods: 11 healthy males participated in the study. Nutrient intakes were monitored. Assessments occurred during LTBR (60th and 90th day) and 90 days after the end. Results: LTBR induced only a slight decrease in total thiol protein (SH) group concentrations. Glutathione peroxidase (GPx) activity was upregulated during LTBR and downregulated after recovery, suggesting that hypokinesia induces an oxidative stress. These effects were not correlated with antioxidant intake, as nutritional density was preserved. Lipoperoxidative markers stayed unchanged. Conclusions: Exercise alleviates hypokinesia outcomes by preserving glutathione reductase activity, with a minor effect on the hypokinesia-induced antioxidant response and oxidative stress, which both exhibit high-magnitude inter-individual variability. Return to initial physical activity allows biomarkers to return to initial values, marking the end of the stress. Hypokinetic situations should be considered oxidatively stressful, requiring exercise and nutritional strategies. abstract_id: PUBMED:34044245 Effects of two months of bed rest and antioxidant supplementation on attentional processing. Physical inactivity across the lifespan is a growing public health concern affecting the cardiovascular, musculoskeletal, and central nervous systems. Data on the effects of dietary antioxidants as neuroprotective treatments when physical activity levels are impaired are lacking. In this randomized controlled study, twenty young healthy men underwent 60 days of bed rest. Participants were randomly assigned to a treatment group (n = 10) receiving a daily antioxidant supplement comprising polyphenols, omega-3 fatty acids, vitamin E, and selenium or a control group (n = 10). Event-related potentials (ERPs) and behavioral data from a three-stimulus oddball paradigm were collected eight days before bed rest, after 60 days of immobilization, and after eight days of recovery. After two months of bed rest, we found a significant decrease in task efficiency irrespective of the treatment that was corroborated by lower ERPs in fronto-central and parietal brain regions. Neither behavioral nor electrocortical data returned to baseline values after eight days of recovery. Our results provide support for the adverse and persistent neurobehavioral effects of prolonged bed rest, which could not be mitigated by antioxidant supplementation. These findings raise important implications for situations in which physical activity levels become severely restricted, such as medical conditions or sedentary lifestyles. abstract_id: PUBMED:35251019 B-Cell Homeostasis Is Maintained During Two Months of Head-Down Tilt Bed Rest With or Without Antioxidant Supplementation. Alterations of the immune system could seriously impair the ability to combat infections during future long-duration space missions. However, little is known about the effects of spaceflight on the B-cell compartment.
Given the limited access to astronaut samples, we addressed this question using blood samples collected from 20 healthy male volunteers subjected to long-duration bed rest, an Earth-based analog of spaceflight. Hematopoietic progenitors, white blood cells, total lymphocytes and B-cells, four B-cell subsets, immunoglobulin isotypes, six cytokines involved in inflammation, cortisone and cortisol were quantified at five time points. Tibia microarchitecture was also studied. Moreover, we investigated the efficiency of antioxidant supplementation with a cocktail including polyphenols, omega-3, vitamin E and selenium. Our results show that circulating hematopoietic progenitors, white blood cells, total lymphocytes and B-cells, and B-cell subsets were not affected by bed rest. Cytokine quantification suggested a lower systemic inflammatory status, supported by an increase in serum cortisone, during bed rest. These data confirm the in vivo hormonal dysregulation of immunity observed in astronauts and show that bed rest does not alter B-cell homeostasis. This lack of an impact of long-term bed rest on B-cell homeostasis can, at least partially, be explained by limited bone remodeling. None of the evaluated parameters were affected by the administration of the antioxidant supplement. The non-effectiveness of the supplement may be because the diet provided to the non-supplemented and supplemented volunteers already contained sufficient antioxidants. Given the limitations of this model, further studies will be required to determine whether B-cell homeostasis is affected, especially during future deep-space exploration missions that will be of unprecedented durations. abstract_id: PUBMED:36102002 Capillary rarefaction during bed rest is proportionally less than fibre atrophy and loss of oxidative capacity. Background: Muscle disuse from bed rest or spaceflight results in losses in muscle mass, strength and oxidative capacity. Capillary rarefaction may contribute to muscle atrophy and the reduction in oxidative capacity during bed rest. Artificial gravity may attenuate the negative effects of long-term space missions or bed rest. The aim of the present study was to assess (1) the effects of bed rest on muscle fibre size, fibre type composition, capillarization and oxidative capacity in the vastus lateralis and soleus muscles after 6 and 55 days of bed rest and (2) the effectiveness of artificial gravity in mitigating bed-rest-induced detriments to these parameters. Methods: Nineteen participants were assigned to a control group (control, n = 6) or an intervention group undergoing 30 min of centrifugation (n = 13). All underwent 55 days of head-down tilt bed rest. Vastus lateralis and soleus biopsies were taken at baseline and after 6 and 55 days of bed rest. Fibre type composition, fibre cross-sectional area, capillarization indices and oxidative capacity were determined. Results: After just 6 days of bed rest, fibre atrophy (-23.2 ± 12.4%, P < 0.001) and reductions in capillary-to-fibre ratio (C:F; 1.97 ± 0.57 vs. 1.56 ± 0.41, P < 0.001) were proportional in both muscles as reflected by a maintained capillary density. Fibre atrophy proceeded at a much slower rate between 6 and 55 days of bed rest (-11.6 ± 12.1% of 6 days, P = 0.032) and was accompanied by a 19.1% reduction in succinate dehydrogenase stain optical density (P < 0.001), without any further significant decrements in C:F (1.56 ± 0.41 vs. 1.49 ± 0.37, P = 0.459).
Consequently, after 55 days of bed rest, the capillary supply-oxidative capacity ratio of a fibre had increased by 41.9% (P < 0.001), indicating a capillarization in relative excess of oxidative capacity. Even though the heterogeneity of capillary spacing (LogR SD) was increased after 55 days by 12.7% (P = 0.004), tissue oxygenation at maximal oxygen consumption of the fibres was improved after 55 days of bed rest. Daily centrifugation failed to blunt the bed-rest-induced reductions in fibre size and oxidative capacity and capillary rarefaction. Conclusions: The relationship of fibre size and oxidative capacity with the capillary supply of a fibre is uncoupled during prolonged bed rest, as reflected by a rapid loss of muscle mass and capillaries, followed at later stages by a more than proportional loss of mitochondria without further capillary loss. The resulting excessive capillary supply of the muscle after prolonged bed rest is advantageous for the delivery of substrates needed for subsequent muscle recovery. abstract_id: PUBMED:24929457 The effect of short-term bed rest on radial pulse in healthy subjects. The objective of this study is to investigate the effects of short-term bed rest on the radial pulse in healthy subjects. Twenty-one healthy volunteers participated in this study. Their radial pulse was measured at different measuring positions using a multi-step tonometry system. The participants took 30 minutes of bed rest and their radial pulse was measured before and after the bed rest. The effects of bed rest on the radial pulse were analyzed. The pulse area, the amplitudes of h4 and h5, the pulse period, and the diastolic pulse proportion increased with short-term bed rest, whereas the proportions of systolic and high-tension pulse and the fundamental frequency of the pulse wave decreased with short-term bed rest. All the changes were in the same direction in both male and female participants at all measuring positions; however, some parameters changed more in women than in men, and some changed more at the distal position than at the proximal position. In short, short-term bed rest induces significant changes in the radial pulse of healthy subjects. The results of this study could be used as a control reference for clinical acupuncture studies with participants lying on a bed for acupuncture treatment. abstract_id: PUBMED:26796757 FemHab: The effects of bed rest and hypoxia on oxidative stress in healthy women. Independently, both inactivity and hypoxia augment oxidative stress. This study, part of the FemHab project, investigated the combined effects of bed rest-induced unloading and hypoxic exposure on oxidative stress and antioxidant status. Healthy, eumenorrheic women were randomly assigned to the following three 10-day experimental interventions: normoxic bed rest (NBR; n = 11; PiO2 = 133 mmHg), normobaric hypoxic bed rest (HBR; n = 12; PiO2 = 90 mmHg), and ambulatory hypoxic confinement (HAMB; n = 8; PiO2 = 90 mmHg). Plasma samples, obtained before (Pre), during (D2, D6), immediately after (Post) and 24 h after (Post+1) each intervention, were analyzed for oxidative stress markers [advanced oxidation protein products (AOPP), malondialdehyde (MDA), and nitrotyrosine], antioxidant status [superoxide dismutase (SOD), catalase, ferric-reducing antioxidant power (FRAP), glutathione peroxidase (GPX), and uric acid (UA)], NO metabolism end-products (NOx), and nitrites.
Compared with baseline, AOPP increased in NBR and HBR on D2 (+14%; +12%; P < 0.05), D6 (+19%; +15%; P < 0.05), and Post (+22%; +21%; P < 0.05), respectively. MDA increased at Post+1 in NBR (+116%; P < 0.01) and at D2 in HBR (+114%; P < 0.01) and HAMB (+95%; P < 0.05). Nitrotyrosine decreased (-45%; P < 0.05) and nitrites increased (+46%; P < 0.05) at Post+1 in HAMB only. Whereas SOD was higher at D6 (+82%) and Post+1 (+67%) in HAMB only, the catalase activity increased on D6 (128%) and Post (146%) in HBR and HAMB, respectively (P < 0.05). GPX was only reduced on D6 (-20%; P < 0.01) and Post (-18%; P < 0.05) in HBR. No differences were observed in FRAP and NOx. UA was higher at Post in HBR compared with HAMB (P < 0.05). These data indicate that exposure to combined inactivity and hypoxia impairs prooxidant/antioxidant balance in healthy women. Moreover, habitual activity levels, as opposed to inactivity, seem to blunt hypoxia-related oxidative stress via antioxidant system upregulation. abstract_id: PUBMED:38232697 The impact of bed rest on human skeletal muscle metabolism. Insulin sensitivity and metabolic flexibility decrease in response to bed rest, but the temporal and causal adaptations in human skeletal muscle metabolism are not fully defined. Here, we use an integrative approach to assess human skeletal muscle metabolism during bed rest and provide a multi-system analysis of how skeletal muscle and the circulatory system adapt to short- and long-term bed rest (German Clinical Trials: DRKS00015677). We uncover that intracellular glycogen accumulation after short-term bed rest accompanies a rapid reduction in systemic insulin sensitivity and less GLUT4 localization at the muscle cell membrane, preventing further intracellular glycogen deposition after long-term bed rest. We provide evidence of a temporal link between the accumulation of intracellular triglycerides, lipotoxic ceramides, and sphingomyelins and an altered skeletal muscle mitochondrial structure and function after long-term bed rest. An intracellular nutrient overload therefore represents a crucial determinant for rapid skeletal muscle insulin insensitivity and mitochondrial alterations after prolonged bed rest. abstract_id: PUBMED:30510516 High-Intensity Exercise Mitigates Cardiovascular Deconditioning During Long-Duration Bed Rest. Head-down-tilt bed rest (HDT) mimics the changes in hemodynamics and autonomic cardiovascular control induced by weightlessness. However, the time course and reciprocal interplay of these adaptations, and the effective exercise protocol as a countermeasure, need further clarification. The overarching aim of this work (as part of a European Space Agency-sponsored long-term bed rest study) was therefore to evaluate the time course of cardiovascular hemodynamics and autonomic control during prolonged HDT and to assess whether high-intensity, short-duration exercise could mitigate these effects. A total of n = 23 healthy, young, male participants were randomly allocated to two groups: training (TRAIN, n = 12) and non-training (CTRL, n = 11) before undergoing a 60-day HDT. The TRAIN group underwent a resistance training protocol using reactive jumps (5-6 times per week), whereas the CTRL group did not perform countermeasures. Finger blood pressure (BP), heart rate (HR), and stroke volume were collected beat-by-beat for 10 min in both sitting and supine positions 7 days before HDT (BDC-7) and 10 days after HDT (R+10), as well as on the 2nd (HDT2), 28th (HDT28), and 56th (HDT56) day of HDT.
We investigated (1) the isolated effects of long-term HDT by comparing all the supine positions (including BDC-7 and R+10 at 0 degrees), and (2) the reactivity of the autonomic response before and after long-term HDT using a specific postural stimulus (i.e., supine vs. sitting). Two-factorial linear mixed models were used to assess the time course of HDT and the effect of the countermeasure. From HDT28 onwards, HR increased (p < 0.02) and parasympathetic tone decreased exclusively in the CTRL group (p < 0.0001). Moreover, after 60-day HDT, CTRL participants showed significant impairments in increasing cardiac sympathovagal balance and controlling BP levels during postural shift (supine to sitting), whereas TRAIN participants did not. Results show that a 10-day recovery did not compensate for the cardiovascular and autonomic deconditioning following 60-day HDT. This has to be considered when designing rehabilitation programs, not only for astronauts but also in general public healthcare. High-intensity, short-duration exercise training effectively minimized these impairments and therefore deserves consideration as a cardiovascular deconditioning countermeasure for spaceflight. abstract_id: PUBMED:36062506 Investigation of oxidative damage, antioxidant balance, DNA repair genes, and apoptosis due to radiofrequency-induced adaptive response in mice. This study aims to determine whether exposure to non-ionizing radiofrequency fields could induce an adaptive response (AR) in adult mice and to reveal potential molecular mechanisms triggered by RF-induced AR. The study was performed on 24 adult male Swiss-Albino mice. The average mass of the mice was 37 g. Four groups of adult mice, each consisting of 6, were formed. The radiofrequency group (R) and the adaptive response group (RB) were exposed to a 900 MHz global system for mobile communications (GSM) signal at 0.339 W/kg (1 g average specific absorption rate) 4 h/day for 7 days, while the control group (C) and the bleomycin group (B) were not exposed. Twenty minutes after the last radiofrequency field (RF) exposure, the mice in the B and RB groups were injected intraperitoneally (ip) with bleomycin (BLM), 37.5 mg/kg. All the animals were sacrificed 30 minutes after the BLM injection. Oxidative damage and the antioxidant mechanism were subsequently investigated in the blood samples. Changes in the expression of the genes involved in DNA repair were detected in the liver tissue. The TUNEL method was used to determine the apoptosis developed by DNA fragmentation in the liver tissue. The RB group, which produced an adaptive response, was compared with the control group. According to the results, the increase of reactive oxygen species (ROS) in the RB group may have played an important role in triggering the adaptive response and producing the required minimum stress level. Furthermore, expression of the tumor suppressor p53 and of oxoguanine DNA glycosylase (OGG-1), genes responsible for the DNA repair mechanism, was increased in conjunction with the increase in ROS. The changes in poly (ADP-ribose) polymerase 1 (PARP-1) and glutathione peroxidase 1 (GPx-1) gene expression were not statistically significant. The antioxidant enzyme levels of superoxide dismutase (SOD) and catalase (CAT), and the total antioxidant capacity (TAC), were decreased in the group with adaptive response.
According to the data obtained from terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) analysis, apoptosis was decreased in the RB group, which might have resulted from an increase in the expression of genes responsible for DNA repair mechanisms. The results of our study show that exposure to RF radiation may create a protective reaction against bleomycin. The minimal oxidative stress due to the RF exposure leads to an adaptive response in the genes and enzymes that play a role in the DNA repair mechanism, enabling the survival of the cell. abstract_id: PUBMED:26471310 Changes in standing body sway of pregnant women after long-term bed rest. Pregnant women tend to fall, and increased postural instability, namely body sway, may be one of the causative factors. We had a clinical impression that pregnant women after long-term bed rest tend to fall. We hypothesised that such women may show increased body sway, which we attempted to determine. Pregnant women (n = 161) were divided into three groups: (i) women with preterm labour after 2-week bed rest, (ii) those after 4-week bed rest, and (iii) those without bed rest or preterm labour. Body sway was analysed using stabilometry, that is, computed analysis of movement of the centre of gravity. The 3 groups fundamentally showed the same stabilometric measurements. Women with oedema showed greater medial-lateral sway than those without it. Factors other than oedema yielded no differences in stabilometric parameters. Long-term bed rest fundamentally did not increase body sway to the extent that stabilometry could reveal it. It may be prudent to consider that pregnant women with oedema tend to fall. Answer: The studies provided offer mixed results regarding the effectiveness of the antioxidant system's adaptive response in alleviating oxidative damage associated with long-term bed rest. One study found that long-term bed rest (LTBR) induced a slight decrease in total thiol protein (SH) group concentrations and upregulated glutathione peroxidase (GPx) activity during LTBR, which was downregulated after recovery. This suggests that hypokinesia induces oxidative stress, but the effects were not correlated with antioxidant intake as nutritional density was preserved. Exercise was found to preserve glutathione reductase activity with a minor effect on the hypokinesia-induced antioxidant response and oxidative stress, which both exhibited high inter-individual variability. The study concluded that exercise alleviates hypokinesia outcomes and that returning to initial physical activity allows biomarkers to return to initial values, marking the end of the stress (PUBMED:19059391). Another study investigated the effects of dietary antioxidants as neuroprotective treatments when physical activity levels are impaired. After two months of bed rest, a significant decrease in task efficiency was found irrespective of antioxidant treatment, and neither behavioral nor electrocortical data returned to baseline values after eight days of recovery. This suggests that antioxidant supplementation did not mitigate the adverse and persistent neurobehavioral effects of prolonged bed rest (PUBMED:34044245). A study on B-cell homeostasis during two months of head-down tilt bed rest with or without antioxidant supplementation found that bed rest did not alter B-cell homeostasis and that none of the evaluated parameters were affected by the administration of the antioxidant supplement.
The non-effectiveness of the supplement may be because the diet provided to the volunteers already contained sufficient antioxidants (PUBMED:35251019). In summary, while there is evidence that the antioxidant system does respond adaptively to oxidative stress induced by long-term bed rest, the effectiveness of this response in alleviating related oxidative damage is not entirely clear, and the role of additional antioxidant supplementation appears to be limited when nutritional intake is adequate. Exercise seems to play a role in preserving certain antioxidant activities, but the overall impact of the antioxidant system's adaptive response may vary among individuals.
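The bed rest abstracts report biomarker changes as percent change from baseline and repeatedly stress high inter-individual variability. As a hedged illustration only, the Python sketch below computes a per-subject percent change and a coefficient of variation, one common way to quantify that variability; the enzyme activities are invented for the example and do not come from any of the cited studies.

```python
import statistics

# Hypothetical GPx activities (U/g Hb) for five subjects, before and after
baseline = [21.0, 18.5, 24.2, 19.8, 22.1]
day_60 = [27.3, 19.1, 33.0, 21.5, 30.8]

# Per-subject percent change from baseline
pct_change = [100 * (after - before) / before
              for before, after in zip(baseline, day_60)]

mean_change = statistics.mean(pct_change)
sd_change = statistics.stdev(pct_change)
cv = 100 * sd_change / mean_change  # coefficient of variation of the response

print(f"mean change {mean_change:+.1f}%, SD {sd_change:.1f}, CV {cv:.0f}%")
```

With these made-up values the coefficient of variation comes out near 70%, the kind of spread that would be described as high inter-individual variability.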
Instruction: Use of albumin in three French university hospitals: is prescription monitoring still useful in 2004? Abstracts: abstract_id: PUBMED:17006960 Use of albumin in three French university hospitals: is prescription monitoring still useful in 2004? Purpose: Use of albumin (indications and quantities involved) has not been assessed in France since major changes occurred after the publication of the Cochrane group meta-analysis. The objectives of this study were to measure the distribution of albumin indications in three French university hospitals in 2004 and to assess the feasibility and usefulness of implementing a prescription-monitoring program. Methods: Exhaustive record of albumin prescription during 2 months in three French university hospitals of Marseille. Inclusion of all patients with a first prescription of albumin between 15 March 2004 and 15 May 2004. Indication, formulation and quantity prescribed were recorded for each prescription. Results: One hundred and eighty-seven patients received a total of 426 prescriptions for a total quantity of 21 094 g of albumin during the study. The first indications were hypoalbuminemia (33% of total quantity), plasmapheresis (30.2%) and ascites or hepatorenal syndrome (13.7%). Fifty per cent of the total quantity was used by 14 patients (7.5% of included patients). Conclusions: Most of the albumin consumption in our study is concentrated on recognized indications or indications without an alternative to albumin. The different levels of analysis (number of patients treated, number of prescriptions and quantities used) must be taken into account when analyzing medications such as albumin. Only a marginal proportion of consumption is expected to be saved with close monitoring of indications. abstract_id: PUBMED:16220747 Use of home blood pressure devices in France in 2004 Objective: To estimate the number of devices for home blood pressure measurements in the French population in 2004 and to evaluate the use of these devices. Methods: The French League Against Hypertension Survey 2004 (FLAHS-2004) was performed cross-sectionally on 3707 subjects, part of a sample of 5476 subjects selected as being representative of the French metropolitan population for age (35 years and above), gender, socioeconomic status, and place of living. Subjects who declared taking an antihypertensive medication were classified as treated hypertensive patients. A questionnaire evaluating the conditions of use of home blood pressure (HBP) devices was given. Results: In 2004, 24% of the French population above the age of 35 years was treated for hypertension, corresponding to an estimated 7.5 million subjects. 25% of treated hypertensives and 12% of untreated subjects had a tensiometer. A total of 4 million BP devices are owned in the general population, 43% of them by treated hypertensive patients. 67% of BP devices are wrist-cuff models. Medical doctors recommended home blood pressure monitoring in only 12% of subjects. Conclusion: In 2004, the number of subjects treated for hypertension is 1/4 of the general population over the age of 35 years in France. In this group of subjects, 1/4 possessed a tensiometer, but in only 10% of cases had a medical doctor recommended the use of home blood pressure monitoring. abstract_id: PUBMED:21219439 Use of intravenous immunoglobulins in clinical practice: data from three French university hospitals. Over the past several years, the use of intravenous immunoglobulins (IVIg) has increased. This growth has encouraged some countries to publish guidelines.
In parallel, some countries have conducted audits to determine how IVIg are used in clinical practice in light of the available guidelines. The objective of this study was to assess IVIg use in three French university hospitals in 2006. All IVIg administrations were evaluated during 6 months (12 September 2005-12 March 2006) in the French university hospitals of Marseille. Different data were recorded for each administration: patient characteristics, indication, formulation and quantity. During the study period, 2802 administrations of IVIg (corresponding to a total quantity of 76 780 g) were recorded. Four hundred and thirty-five patients received at least one of these administrations. The five most reported indications were multifocal motor neuropathy (11.0% of total quantity), chronic inflammatory demyelinating polyradiculoneuropathy (10.2%), corticoresistant dermatomyositis (10.2%), immune thrombocytopaenia (9.9%) and primary immune deficiency (9.1%). According to available French recommendations, 70% of the IVIg use was for 'acknowledged indications', 9% for 'indications to be assessed' and 18% for 'unwarranted indications'. The 10 most reported indications were 'acknowledged indications' according to available recommendations of the French expert group. Nevertheless, the two most reported indications were not approved by the French Health Products Agency (AFSSAPS) at the time of the study and have been approved since. abstract_id: PUBMED:2028628 Use of albumin in a university hospital: the value of targeted physician intervention. Results of a preliminary study of albumin use at the University of Michigan Hospital were shared with one surgical service (thoracic surgery) that had a documented high rate of inappropriate use. To determine the effectiveness of this targeted educational intervention in reducing inappropriate use and associated drug costs, albumin prescribing for all adult inpatients at University Hospital over a 30-day period was assessed in a retrospective review. Eighty-six patients used a total of 843 units, a ten percent reduction in total albumin use. Albumin administration to thoracic surgery patients decreased by 38 percent. The 35 percent reduction in inappropriate albumin use by this service (Fisher's exact test, p < 0.001) was associated with an estimated annual cost savings of $83,500. Inappropriate albumin use by other medical services generally increased over previously measured levels. This study demonstrated the effectiveness of targeted educational interventions in reducing inappropriate albumin use and thereby controlling rising healthcare costs. abstract_id: PUBMED:8745979 Outcome of the use of albumin and colloid plasma substitutes from 1989 to 1993 in a French University Hospital Center Objectives: To evaluate the use of plasma substitutes (albumin, gelatins, dextrans, starches) from 1989 to 1993 in a 3000-bed University hospital and to assess the impact of the 1989 consensus conference of the French-speaking Society of Intensive Care Medicine on the choice of plasma substitutes for treatment of hypovolaemia. Study Design: Retrospective study of a case series. Methods: Data on the use of albumin and artificial plasma substitutes were obtained from the Blood Bank and Pharmacy. Results: Between 1989 and 1993, the total amount of administered plasma substitutes decreased by 20%, allowing a saving of 1.7 million FF. There was a 60% decrease in the use of gelatins and dextrans.
The starches, introduced in 1991, became in 1993 the most used plasma substitute, with 37% of the total. The use of albumin showed only a 32% decrease and was still in 1993 the main source of expenditure for plasma substitutes, with 3.8 million FF and 80% of the total cost. The number of plasmapheresis procedures increased by 15%, whereas the use of albumin for plasmapheresis, almost constant in absolute value, increased from 31% of total expenses in 1989 to 45% in 1993. There were large differences between the services regarding the use of albumin. Intensive care and surgical units represented 70% of the total (plasmapheresis excluded). In this group, the overall fall in use was 22%, with some units obtaining a 93% decrease whereas others did not change in spite of constant admission numbers. Medical units decreased their albumin use by 53%. Conclusion: The decrease in albumin use between 1989 and 1993 was not significant. The impact of the consensus conference, if any, was weak. A careful evaluation of the prescriptions would be necessary to improve the compliance of clinical practice with the recommendations of the consensus conference. abstract_id: PUBMED:28597926 Effect of hospital-wide interventions to optimize albumin use in a tertiary hospital. What Is Known And Objective: Albumin has been frequently used as a therapeutic agent based on previous recommendations that are mostly controversial. Considering limited evidence-based indications, common inappropriate albumin use in many hospitals necessitates prompt educational and regulatory interventions. We performed this study to assess the effect of a hospital-wide programme to optimize albumin use in a tertiary referral university-affiliated hospital. Methods: This study was conducted in three 45-day phases, separated by two sequential interventions: guideline implementation and albumin order-sheet consideration. We evaluated albumin use and assessed its appropriateness in each phase at baseline, after guideline implementation and after order-sheet consideration. Results: We recorded 100, 93 and 71 albumin orders for 100, 84 and 66 patients during the first, second and third phases, respectively. The adjusted number of albumin orders (used albumin vials) was 94.9 (1481.7 vials), 80.8 (1037.6 vials) and 66 (1219 vials) in the first, second and third phases of the study, respectively. Albumin orders with an appropriate indication increased significantly over the three phases of the study (OR = 1.5, P = .008). The frequency of inappropriate orders decreased significantly from the first phase to the third phase (58% to 27%, P = .007). What Is New And Conclusion: The pattern and amount of albumin use changed following guideline implementation and order-sheet consideration, and inappropriate albumin use was reduced in our hospital. There was still room for improvement, particularly for indications that were not included in the guideline. Hence, a more comprehensive guideline, frequent audit, feedback and interactive educational approaches might be necessary to achieve results of a greater magnitude. abstract_id: PUBMED:9342582 Use of albumin in two Spanish university hospitals. Objectives: The aim of this study was to characterize the use of serum albumin, evaluating how appropriate its prescription is and what possible economic repercussions may result from inappropriate use. Methods: We performed a prospective study that included all patients receiving albumin in two University Hospitals from October 1995 to March 1996.
The reasons for albumin use were considered appropriate if they coincided with the recommendations of a panel of experts. Results: During the study period, 197 patients received albumin and a total of 3208 50-ml vials (20% solution) were used. The internal medicine and gastroenterology services prescribed this drug most often. The most frequent reasons for prescription were paracentesis in cirrhotic patients (25.9%), hypoalbuminemia (24.9%) and chronic management of cirrhotic patients (18.6%). Only 16 prescriptions (8.1%) (corresponding to 315 vials, 9.8%) were considered appropriate. One cause of inappropriate prescribing was that colloid solutions had not previously been used in 56 (30.9%) of the 186 inappropriate prescriptions. During the study period, 74,306 ECUs were spent on inappropriate indications. Conclusions: The use of albumin in our centers is incorrect and has important economic repercussions. Educational and informative measures must be established to change this situation. abstract_id: PUBMED:12136743 Evaluation of human albumin use in a university hospital in Belgium. In 1996-1997, a drug use evaluation (DUE) of human albumin was conducted in the Ghent University Hospital (Belgium) to determine the pattern and appropriateness of albumin use. The DUE was followed by permanent review of albumin consumption. This paper describes how the DUE was carried out and how albumin use in our hospital changed over time. Method: The study was based on criteria for indications and end of treatment, accepted by consensus of the physicians prescribing albumin. Albumin treatment episodes were classified as appropriate or inappropriate according to these criteria. Results: For 115 treatment episodes in 90 patients, the researchers found 21 (18.3%) deviations from the developed criteria. After analysis, half of them were considered minor. Most deviations involved starting treatment too early (n = 17). Follow-up results indicated that the overall consumption of albumin dropped by 50.1% from 1994 to 1999, while the consumption of colloid solutions during the same period remained stable. Conclusion: Good compliance with internally developed criteria for indications and end of treatment with human albumin was observed. Discussion with the clinicians involved led to the development of stricter criteria and a continuous decrease in albumin consumption. abstract_id: PUBMED:2718498 Albumin utilization in a university hospital. The inappropriate use of high-priced agents such as human serum albumin significantly contributes to the rising cost of medical care. A utilization review was conducted at the University of Michigan Hospital in order to identify the appropriateness of use of this agent. Criteria were developed and prescribing was retrospectively evaluated for 81 patients. Of the 935 units administered to these patients, 692 (74 percent) were judged to be inappropriate. This inappropriate use accounted for a projected annual expenditure of nearly $281,000. Interventions have previously demonstrated success in improving prescribing. abstract_id: PUBMED:25669931 Appropriateness of requests for human serum albumin at the University Hospital of Palermo, Italy: a prospective study. Objective: First, to assess the appropriateness of requests for albumin using current guidelines; second, to evaluate physicians' compliance with this protocol. Design: Descriptive, prospective study. Setting: 'Paolo Giaccone' University Hospital in Palermo (Italy).
Participants: The six departments with the highest orders for albumin. Intervention: An experimental, pharmacist-driven process for daily orders of albumin was activated over a period of 1 month. Pharmacy personnel handed out order forms on six different wards. Physicians were required to select one of a number of intended uses listed on the form. Main Outcome Measures: Data on albumin and serum protein levels; number of vials and grams of albumin requested/day; number of vials and grams of albumin/day needed to normalize serum albumin levels; informed consent; indications and appropriateness of use. Results: A total of 126 forms were collected. Mean serum albumin and protein levels of patients in the different wards were close to normal (2.5 and 5 g/dl, respectively). The albumin doses requested by the various wards were 2 to 7 times higher than expected. 37.3% of the requests were for appropriate indications, 40.5% were for occasionally appropriate indications and 18.2% were inappropriate. According to current guidelines, 83.3% of requests were appropriate. 45% of orders from the Intensive Care ward were for inappropriate indications. Conclusions: The introduction of a well-defined, flexible pharmacist-driven ordering process for albumin could reduce omissions in albumin indications and aid request assessment. It would allow pharmacists to collaborate with physicians in verifying whether requests are appropriate and whether prescriptions comply with current guidelines. Answer: The study conducted in three French university hospitals in Marseille in 2004 aimed to assess the use of albumin, including indications and quantities prescribed, and to evaluate the feasibility and usefulness of implementing a prescription-monitoring program (PUBMED:17006960). The study included all patients with a first prescription of albumin between 15 March 2004 and 15 May 2004, recording the indication, formulation, and quantity prescribed for each prescription. The results showed that 187 patients received a total of 426 prescriptions for a total quantity of 21,094 g of albumin during the study period. The primary indications for albumin use were hypoalbuminemia (33% of the total quantity), plasmapheresis (30.2%), and ascites or hepatorenal syndrome (13.7%). Notably, 50% of the total quantity of albumin was used by just 14 patients, representing 7.5% of the included patients. The study concluded that most of the albumin consumption was concentrated on recognized indications or indications without alternative to albumin. The analysis highlighted that only a marginal proportion of albumin consumption could be expected to be saved with close monitoring of indications. This suggests that while prescription monitoring could be useful, the potential savings from such a program might be limited due to the concentration of use in a small number of patients with specific, recognized indications for albumin.
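As a quick cross-check of the audit arithmetic cited above, here is a minimal Python sketch. Pairing the two University of Michigan reports (PUBMED:2718498 as baseline, PUBMED:2028628 as follow-up) is an assumption, though their unit counts are consistent with it:

# Baseline Michigan review (PUBMED:2718498): 692 of 935 units judged inappropriate.
baseline_units, inappropriate_units = 935, 692
# Follow-up 30-day review (PUBMED:2028628): 843 units in total.
follow_up_units = 843

print(f"inappropriate share at baseline: {inappropriate_units / baseline_units:.0%}")  # 74%, as reported
print(f"reduction in total use: {1 - follow_up_units / baseline_units:.1%}")  # ~9.8%, i.e. the reported 'ten percent'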
Instruction: Is routine MR cholangiopancreatography (MRCP) justified prior to cholecystectomy? Abstracts: abstract_id: PUBMED:19084990 Is routine MR cholangiopancreatography (MRCP) justified prior to cholecystectomy? Purpose: We routinely investigated the bile ducts by magnetic resonance cholangiopancreatography (MRCP) prior to cholecystectomy. The aim of this study was to analyze the rate of clinically inapparent common bile duct (CBD) stones, the predictive value of elevated liver enzymes for CBD stones, and the influence of the radiological results on the perioperative management. Methods: In this prospective study, 465 patients were cholecystectomized within 18 months, mainly laparoscopically. Preoperative MRCP was performed in 454 patients. Results: With MRCP screening, clinically silent CBD stones were found in 4%. Elevated liver enzymes have only a poor predictive value for the presence of CBD stones (positive predictive value, 21%; negative predictive value, 96%). Compared to the recent literature, the postoperative morbidity in this study was low (0 bile duct injuries, 0.4% residual gallstones). Conclusions: Although MRCP is diagnostically useful in the perioperative management in some cases, its routine use in the DRG era may not be justified due to the costs. abstract_id: PUBMED:38195417 Clinical study on the necessity and feasibility of routine MRCP in patients with cholecystolithiasis before LC. Background: For a long time, intraoperative cholangiography (IOC) was considered necessary during laparoscopic cholecystectomy (LC). Now magnetic resonance cholangiopancreatography (MRCP) is the main method for diagnosing common bile duct stones (CBDS). Whether MRCP can replace IOC as a routine examination before LC remains inconclusive. The aim of this study was to analyze the clinical data of patients undergoing LC for cholecystolithiasis and to explore the necessity and feasibility of routine preoperative MRCP in patients with cholecystolithiasis. Methods: According to whether MRCP was performed before the operation, 184 patients undergoing LC for cholecystolithiasis in the Department of General Surgery, Beijing Shijitan Hospital, Capital Medical University from January 1, 2017 to December 31, 2018 were divided into a non-MRCP group and an MRCP group for this retrospective study. The results of preoperative laboratory tests, abdominal ultrasound and MRCP, biliary-related comorbidities, surgical complications, hospital stay and hospitalization expenses were compared between the two groups. Results: Among the 184 patients, there were 83 patients in the non-MRCP group and 101 patients in the MRCP group. In the MRCP group, the detection rates of cholecystolithiasis combined with CBDS and of common bile duct dilatation by MRCP were higher than those by abdominal ultrasound (P < 0.05). The incidence of postoperative complications in the non-MRCP group (8.43%) was significantly higher (P < 0.05) than that in the MRCP group (0%). There was no significant difference in hospital stay (P > 0.05), but there was a significant difference in hospitalization expenses (P < 0.05) between the two groups. When patients with gallbladder stones combined with CBDS were analyzed separately, there was no significant difference in hospital stay or hospitalization expenses between the two groups (P > 0.05).
Conclusions: Preoperative MRCP can detect CBDS, cystic duct stones and anatomical variants of the biliary tract that cannot be diagnosed by abdominal ultrasound, which is helpful in planning the surgical approach and reducing surgical complications. From the perspective of health economics, routine MRCP in patients with cholecystolithiasis before LC does not increase hospitalization costs and is necessary and feasible. abstract_id: PUBMED:36207660 Routine preoperative MRCP in screening choledocholithiasis in acute cholecystitis compared to selective approach: a population-based study. Choledocholithiasis is more common in acute cholecystitis than in elective situations. Preoperative diagnosis of choledocholithiasis is essential to facilitate adequate planning of CBD (common bile duct) stone removal, preferably performed as a single-stage procedure. The purpose of this study was to test the feasibility of routine preoperative magnetic resonance cholangiopancreatography (MRCP) in acute cholecystitis followed by cholecystectomy. A total of 180 consecutive patients operated on for acute cholecystitis between January 2019 and December 2019 were prospectively enrolled. Routine preoperative MRCP was performed for bile duct evaluation when feasible. The control cohort consisted of 180 consecutive patients undergoing emergency laparoscopic cholecystectomy before the study period. Intraoperative cholangiography was used routinely in both groups when technically achievable. We examined the proportion of patients recruited for preoperative MRCP, the possible time delay to MRCP and surgery, and the incidence of CBD stones compared to the control cohort. Routine MRCP in acute cholecystitis was achieved in 114/180 (63%) patients compared to 42/180 (23.3%) patients of the control group. The triage time from the emergency department to MRCP and the operating theatre was similar in both cohorts. The percentage of patients diagnosed with choledocholithiasis in the study group was notably higher (almost 18% vs 11%, p < 0.05). After a median follow-up time of 2.5 years in the study group and almost 4 years in the control group, recurrent choledocholithiasis was not detected in either group. Routine MRCP in patients with acute cholecystitis can be implemented with a fair execution rate in a population-based setting, with minor effects on hospital stay and delays but higher detection of choledocholithiasis. We observed no additional benefit compared to the selective use of MRCP. However, routine preoperative MRCP offers an advantage when considering the appropriate exploration method if choledocholithiasis is detected. abstract_id: PUBMED:22388647 Clinical relevance of a routinely performed magnetic resonance cholangiopancreatography (MRCP) prior to cholecystectomy. Background: A preoperative diagnostic imaging procedure is essential for therapy in cholecystolithiasis. According to the S3 Guidelines of the German Society for General and Visceral Surgery, only an ultrasound scan is needed before a cholecystectomy. However, anatomic variants of the bile ducts and choledocholithiasis are poorly visualized by ultrasound. For this reason, we performed magnetic resonance cholangiopancreatography (MRCP) routinely. This study was designed to show whether MRCP changed the treatment plan or the operative method. Furthermore, the sensitivity and specificity concerning abnormalities of the cystic duct, accessory bile ducts and choledocholithiasis were to be determined.
Patients And Methods: Between January 1st, 2005 and September 30th, 2009, 541 patients were included in this retrospective study. Results: Among the 541 cases, 98 pathologies were found. These included 51 cases of choledocholithiasis, 20 accessory bile ducts and 13 abnormal cystic ducts. In 29 of the 51 cases of choledocholithiasis, therapeutic splitting (staged endoscopic and surgical management) was performed solely because of the MRCP finding. In 22 cases the diagnosis was also possible on the basis of the basic diagnostic procedures such as ultrasound, laboratory tests and clinical presentation. Thus, the diagnostic improvement due to MRCP is 5.3%. Four of the 20 accessory bile ducts and 6 of the abnormal cystic ducts were found during the subsequent operation. The sensitivity for the anatomic variants is very low (38.5% for accessory bile ducts and 50% for abnormal cystic ducts), but the sensitivity for detecting choledocholithiasis is very high (94.7%). There was no evidence in our study that MRCP prevented any complications. The operative method was not changed in any case because of the MRCP result. Conclusion: A routinely performed preoperative MRCP cannot be recommended. abstract_id: PUBMED:37398787 Gallstone Pancreatitis Post Laparoscopic Cholecystectomy: A Case Report. Gallstone pancreatitis is uncommon after laparoscopic cholecystectomy, with few cases reported in the literature. We report a case of a 38-year-old female who developed gallstone pancreatitis three weeks after laparoscopic cholecystectomy. The patient presented to the emergency department with a two-day history of severe right upper quadrant and epigastric pain radiating to her back with associated nausea and vomiting. The patient had elevated total bilirubin, aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP), and lipase. The patient's preoperative abdominal magnetic resonance imaging (MRI) and magnetic resonance cholangiopancreatography (MRCP), prior to her cholecystectomy, were negative for common bile duct stones. However, it is important to note that common bile duct stones are not always visible on ultrasound, MRI, and MRCP prior to cholecystectomy. In our patient, an endoscopic retrograde cholangiopancreatography (ERCP) revealed gallstones in the distal common bile duct, which were removed with biliary sphincterotomy. The patient had an uneventful postoperative recovery. It is important for physicians to have a high index of suspicion for gallstone pancreatitis in a patient with epigastric pain radiating to the back and a known history of recent cholecystectomy, as this is a diagnosis that can be missed due to its infrequent occurrence. abstract_id: PUBMED:32858180 Influence of cholecystectomy on the flow dynamic pattern of bile in the extrahepatic bile duct: Assessment by cine-dynamic MRCP with spatially-selective IR pulse. Purpose: To non-invasively evaluate the influence of cholecystectomy on the flow dynamic pattern of bile in the extrahepatic bile duct by using cine-dynamic MRCP with a spatially-selective inversion-recovery (IR) pulse. Materials And Methods: 56 patients with cholecystectomy and 48 control subjects without cholecystectomy underwent cine-dynamic MRCP with a spatially-selective IR pulse at 1.5 T or 3 T (TR/TE, 4000 msec/500 msec; echo train spacing, 6.5 msec; echo train length, 172; section thickness, 50 mm; matrix, 320 × 320; field of view, 320 × 320 mm; bandwidth, 488 Hz; and inversion time, 2200 msec).
In cine-dynamic MRCP, an IR pulse 20 mm in width was placed on the common bile duct (CBD) to evaluate the movement of bile (antegrade and reversed bile flow). Cine-dynamic MRCP images were acquired every 15 s (imaging, 4 s; rest, 11 s) over 5 min to obtain a series of single-shot images (a total of 20 images). The frequency with which antegrade or reversed bile flow was observed in the extrahepatic bile duct, and a 5-point grading score based on the moving distance of antegrade or reversed bile flow, were compared between the groups. Both groups were compared using the χ2 and Mann-Whitney U tests (P < 0.05 considered significant). Results: Antegrade bile flow was observed more frequently in the cholecystectomy group than in the non-cholecystectomy group (5.1 times vs. 2.8 times, P = 0.008). The mean grading score of antegrade bile flow was significantly greater in the cholecystectomy group than in the non-cholecystectomy group (mean grade, 0.33 vs 0.21; P = 0.014). Regarding reversed bile flow, there were no significant differences in frequency or grading score between the cholecystectomy group and the non-cholecystectomy group. Conclusion: In cine-dynamic MRCP with a spatially-selective IR pulse, antegrade bile flow was observed more frequently and predominantly in patients after cholecystectomy, while reversed bile flow was observed equivalently. abstract_id: PUBMED:36017472 Subvesical Duct Detected by Magnetic Resonance Cholangiopancreatography (MRCP) in a Patient with Bile Leak after Laparoscopic Cholecystectomy. We report a case of bile leak after laparoscopic cholecystectomy (LC) in which initial treatment by common bile duct stent insertion failed. The injury of a subvesical duct running from the gallbladder fossa toward an area of fluid accumulation, not previously revealed by computed tomography or endoscopic retrograde cholangiopancreatography, was eventually found by magnetic resonance cholangiopancreatography (MRCP) and proved to be the cause of the bile leak. In addition, several tiny branches in the right liver instead of a main trunk, and another subvesical duct draining into the common hepatic duct, were noted. These anatomic variations have scarcely been reported, especially by MRCP. The aim of this case report is to discuss the link between biliary tree anomalies and bile leak due to bile duct injury during LC, based on our experience treating one patient. We also review the related literature to better understand the prevention and management of subvesical duct injury. abstract_id: PUBMED:28275221 Magnetic Resonance (MR) Cholangiopancreatography Demonstration of the Cystic Duct Entering the Right Hepatic Duct. BACKGROUND MR cholangiopancreatography is widely performed before laparoscopic cholecystectomy to rule out choledocholithiasis and to avoid iatrogenic injuries that may be related to the high frequency of anatomical variations of the biliary tree. Although most of these variants have already been demonstrated surgically and by endoscopic retrograde cholangiopancreatography and CT cholangiography, there are no references in which MR cholangiopancreatography has shown a cystic duct draining into the right hepatic biliary duct. CASE REPORT A 51-year-old woman with a history of recurrent abdominal pain underwent an abdominal ultrasound in an outside center, which revealed gallbladder cholelithiasis. In this patient, an MR cholangiopancreatography was performed and laboratory data were obtained. Laboratory findings showed only a mild increase in cholestasis markers.
MRCP did not reveal significant dilatation of the intra- or extrahepatic biliary ducts, while the cystic duct showed an atypical insertion, draining directly into the right hepatic duct. CONCLUSIONS To avoid unintentional bile duct injuries, MRCP evaluation of the biliary anatomy is particularly important for the pre-operative evaluation of patients undergoing laparoscopic cholecystectomy. In particular, in the case we describe, the right hepatic duct might have been mistaken for the cystic duct, with potentially severe surgical complications and clinical consequences. abstract_id: PUBMED:23372872 Bile leak after elective laparoscopic cholecystectomy: role of MR imaging. Increasing numbers of hepatobiliary laparoscopic surgeries have led to a rise in injury to the biliary tree and other complications such as bile leak. Ultrasonography (US) and computed tomography (CT) cannot reliably distinguish bile from other postoperative fluid collections. Magnetic resonance (MR) imaging with hepatobiliary agents and MR cholangiopancreatography provides anatomic and functional information that allows for prompt diagnosis and excludes other concomitant complications. We report a case of post-cholecystectomy bile leak in a 42-year-old female who presented with persistent dull abdominal pain after the intervention; we emphasize the role of MR imaging in achieving the correct diagnosis. abstract_id: PUBMED:34790490 A Single Centre Experience With Routine Magnetic Resonance Cholangiopancreatography in the Management of Patients With Gall Stone Disease. Aim: To evaluate the role of preoperative magnetic resonance cholangiopancreatography (MRCP) in the detection of common bile duct stones (CBDS) in cases of gall stone disease (GSD). Methods: This is a retrospective study with a prospectively maintained database, carried out in 116 patients who underwent laparoscopic cholecystectomy (LC) for GSD from October 2017 to September 2020. Preoperative MRCP was performed in all cases. Results: MRCP detected CBDS in 23 out of 116 patients (19.8%), including silent CBDS in seven patients (6%). In patients with normal biochemical parameters and abdominal ultrasonography (USG), MRCP discovered 30.4% of the 23 CBDS that would otherwise have gone unnoticed. Sensitivity and specificity were, respectively: aspartate aminotransferase (AST) or alanine aminotransferase (ALT), 26.1% and 79.6% (positive predictive value [PPV]: 24%; negative predictive value [NPV]: 81.3%); alkaline phosphatase (ALP), 52.2% and 92.5% (PPV: 63.2%; NPV: 88.7%); serum total bilirubin, 52.2% and 90.3% (PPV: 57.1%; NPV: 88.4%); and CBD diameter, 34.8% and 94.6% (PPV: 61.5%; NPV: 85.4%). Cystic duct variations were found in nine patients (7.75%). No bile duct injury (0%) was noted in postoperative patients. Conclusion: With normal biochemical and USG parameters, MRCP is a valuable non-invasive investigation to detect otherwise overlooked CBDS. By recognising cystic duct variants, it may be possible to prevent bile duct injury. Before performing a laparoscopic cholecystectomy (LC) in GSD, routine preoperative MRCP is highly recommended. Answer: The justification for routine MR cholangiopancreatography (MRCP) prior to cholecystectomy is a subject of debate among researchers and clinicians. Some studies suggest that routine MRCP may not be justified due to cost considerations and low diagnostic improvement for certain conditions.
For instance, one study found that routine MRCP screening revealed clinically silent common bile duct (CBD) stones in only 4% of cases, and elevated liver enzymes had a poor predictive value for the presence of CBD stones. The study concluded that routine MRCP may not be justified in the Diagnosis-Related Group (DRG) era due to costs (PUBMED:19084990). On the other hand, other studies argue in favor of routine MRCP. One study reported that preoperative MRCP detected CBDS and anatomical variants that were not diagnosed by abdominal ultrasound, which helped to plan surgical methods and reduce surgical complications. This study concluded that routine MRCP does not increase hospitalization costs and is necessary and feasible (PUBMED:38195417). Another study found that routine MRCP in acute cholecystitis led to a higher detection of choledocholithiasis compared to a selective approach, although it did not show additional benefits in terms of preventing recurrent choledocholithiasis (PUBMED:36207660). A retrospective study indicated that MRCP changed the treatment plan in some cases of choledocholithiasis, but the sensitivity for detecting anatomic variants was low, and MRCP did not prevent complications or change the operation method in any case. The study recommended against routine preoperative MRCP (PUBMED:22388647). In summary, the decision to perform routine MRCP prior to cholecystectomy may depend on individual patient factors, the likelihood of CBD stones or anatomical variants, and considerations of healthcare costs and resources. While some studies suggest routine MRCP can be helpful in detecting silent pathologies and planning surgery, others argue that it may not be cost-effective or significantly change the management in most cases.
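To relate the predictive values quoted above to the underlying sensitivity, specificity, and prevalence, here is a minimal Python sketch of the standard Bayes identities; the ALP figures and the 23/116 stone prevalence are taken from PUBMED:34790490:

def ppv_npv(sens, spec, prev):
    # Positive and negative predictive value from sensitivity, specificity and prevalence.
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))
    return ppv, npv

# ALP: sensitivity 52.2%, specificity 92.5%; CBDS prevalence 23/116 (~19.8%).
ppv, npv = ppv_npv(0.522, 0.925, 23 / 116)
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")  # -> PPV 63.2%, NPV 88.7%, matching the reported values

The same identities show why NPV stays high even for insensitive markers when stone prevalence is low, which is the pattern reported for elevated liver enzymes in PUBMED:19084990.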
Instruction: Are maxillomandibular fixation screws a better option than Erich arch bars in achieving maxillomandibular fixation? Abstracts: abstract_id: PUBMED:34471480 Is the Use of Intermaxillary Fixation Screws an Alternative to Erich Arch Bars for Maxillomandibular Fixation During Management of Maxillofacial Fractures? A Systematic Review and Meta-Analysis. Study Design: Systematic review and meta-analysis. Objective: Maxillomandibular fixation (MMF) is a step of paramount importance during the management of maxillofacial trauma. Erich arch bars have been used for this purpose for decades, but with advances in maxillofacial trauma management, intermaxillary fixation screws came into existence and are now used routinely. The present study was designed to identify whether intermaxillary fixation screws are really an alternative to Erich arch bars for the management of maxillofacial trauma. Methodology: An exhaustive literature search was conducted in May 2020 on various electronic databases, and studies comparing Erich arch bars and intermaxillary fixation screws were recruited for the analysis. A random-effects model with Mantel-Haenszel statistics was used to analyze oral hygiene and the duration of achieving MMF. Results: A total of 96 studies were identified, of which 8 were included in the meta-analysis. There was no statistically significant difference in the oral hygiene status of the patients between the two groups. Intermaxillary fixation screws required statistically significantly less time to achieve MMF. Needle-stick injury was prominently seen with the use of Erich arch bars, whereas other complications such as mucosal coverage, root injury, screw loosening and screw fracture make the use of intermaxillary fixation screws questionable. Conclusion: The present meta-analysis suggests that there is not enough evidence to recommend the use of intermaxillary fixation screws as an alternative to Erich arch bars. Further research with large sample sizes, high-quality evidence and better methodology is recommended in this direction. abstract_id: PUBMED:38317388 Comparison of Clinical Efficacy of Screw-retained Arch Bar vs Conventional Erich's Arch Bar in Maxillomandibular Fixation: A Randomized Clinical Trial. Aim: This study aimed to compare the clinical outcomes of a conventional Erich's arch bar vs a modified screw-retained arch bar in maxillomandibular fixation of mandibular fracture. Materials And Methods: This parallel-arm randomized controlled trial included patients from the outpatient clinic with single favorable mandibular fractures indicated for closed reduction. They were subjected to maxillomandibular fixation using conventional Erich's arch bars in the control group and modified screw-retained arch bars in the study group. The outcome measures included operating time, glove perforations, postoperative pain, oral hygiene, fixation stability, occlusion, and mucosal coverage. Results: A total of 20 patients (12 males and 8 females) with a 1:1 allocation ratio were included. There was a significant statistical difference regarding operation time and number of glove perforations in favor of group B (p < 0.001 and p = 0.007, respectively). There was a significant statistical difference regarding pain at 1 day (p < 0.001) and 1 week (p < 0.001) in favor of group B, and at 4 weeks (p = 0.015) and 6 weeks (p = 0.002) in favor of group A.
Regarding oral hygiene at 1 week (p = 0.021) and at 6 weeks (p < 0.001), there was a significant statistical difference in favor of group B. Regarding mucosal coverage at 6 weeks, there was a significant statistical difference in favor of group A (p = 0.005). Conclusion: The modified screw-retained arch bar can be considered an alternative to the conventional arch bar, as it provided a shorter application time and better operator safety. It also showed better patient satisfaction regarding pain and oral hygiene. Clinical Significance: Maxillomandibular fixation with the conventional technique was modified to a screw-retained arch bar, which is less time-consuming and provides better patient and operator satisfaction. How to cite this article: Elhadidi MH, Awad S, Elsheikh HAE, et al. Comparison of Clinical Efficacy of Screw-retained Arch Bar vs Conventional Erich's Arch Bar in Maxillomandibular Fixation: A Randomized Clinical Trial. J Contemp Dent Pract 2023;24(12):928-935. abstract_id: PUBMED:31624435 Are Bondable Buttons a Better Option than Intermaxillary Fixation Screws for Achieving Maxillomandibular Fixation? A Prospective Randomized Clinical Study. Aim: The aim of this study was to identify the better option for achieving maxillomandibular fixation (MMF) by comparing bondable buttons and MMF screws. Patients And Methods: In this randomized clinical trial, the study sample was derived from the population of patients who presented with minimally displaced mandibular fractures and required MMF. The patients were divided into two groups. In Group I (n = 20), MMF was performed with MMF screws; in Group II (n = 20), bondable buttons were used. In both groups the patients were analyzed for the time (in minutes) required for securing and removing MMF, plaque accumulation and postoperative stability. Plaque accumulation was evaluated using the Turesky-Gilmore-Glickman modification of the Quigley-Hein plaque index. The complications encountered during and after surgery were also analyzed. Results: Plaque accumulation and the time required for placement and removal were greater in Group I than in Group II. Group I also had more complications than Group II, such as tooth root damage, screw loosening, screw breakage and infection. Conclusion: Bondable buttons are a better choice than MMF screws for achieving MMF in the treatment of minimally displaced mandibular fractures. Further studies with a larger sample size must be carried out. abstract_id: PUBMED:27563602 Intermaxillary fixation screws versus Erich arch bars in mandibular fractures: A comparative study and review of literature. Objective: Various techniques have been employed from time to time to achieve maxillomandibular fixation. Although arch bars provide an effective and versatile means of maxillomandibular fixation, their use is not without shortcomings. However, the introduction of intermaxillary fixation (IMF) screws has eliminated many of these issues with arch bars. The aim of the present study was to compare the advantages and disadvantages of intermaxillary fixation screws over Erich arch bars in mandibular fractures.
Materials And Methods: Sixty dentulous patients with mandibular fractures who reported to the Department of Oral and Maxillofacial Surgery, Al-Ameen Dental College and Hospital, Bijapur, and required intermaxillary fixation as part of a treatment plan followed by open reduction and internal fixation under general anesthesia were selected and randomly divided into 2 groups of 30 patients each, Group A and Group B. Group A included patients who received intermaxillary fixation with Erich arch bars. Group B included patients who received intermaxillary fixation with IMF screws. The parameters compared between the groups included surgical time taken, glove perforation, postoperative occlusion, IMF stability, oral hygiene, patient acceptance and comfort, and non-vitality characteristics. Results: The average surgical time taken and the number of glove perforations were greater in Group A; patient acceptance and oral hygiene were better in Group B; and there was no statistically significant difference in postoperative occlusion or IMF stability between the groups. Accidental root perforation was the only limitation of IMF screws. Conclusion: Intermaxillary fixation with IMF screws is more efficacious than Erich arch bars in the treatment of mandibular fractures. abstract_id: PUBMED:34315565 What is a better modality of maxillomandibular fixation: bone-supported arch bars or Erich arch bars? A systematic review and meta-analysis. Any procedure that aims to manage maxillofacial fracture is incomplete without meticulous maxillomandibular fixation (MMF). For decades, Erich arch bars (EABs) have been used for this purpose, but with the advent of bone-supported arch bars (BSABs), more surgeons now prefer them to conventional EABs. The present study was designed to identify which of the two methods is better. An exhaustive literature search was conducted in June 2020 on various electronic databases to select studies that compared EABs and BSABs. Outcomes such as duration of placement, stability, oral hygiene, and complications such as damage to the roots of teeth and needle-stick injury were analysed. A total of 716 studies were identified, of which seven were eligible for inclusion. The meta-analysis showed that the use of BSABs is significantly faster with no needle penetration and better oral hygiene. Both arch bars are equally stable, but root damage is an associated complication. The available literature to date shows that BSABs are a better option than EABs. However, further research is recommended, as these studies are associated with various confounding factors. abstract_id: PUBMED:33708569 A Comparative Randomized Prospective Clinical Study on Modified Erich Arch Bar with Conventional Erich Arch Bar for Maxillomandibular Fixation. Introduction: The Erich arch bar, used for maxillomandibular fixation (MMF) for decades, has several disadvantages such as risk of injury, additional operating room time, and gingival trauma. To overcome these downsides, the modified Erich arch bar was introduced; however, there is little available literature indicating the efficacy of the modified Erich arch bar over that of the conventional arch bar. Therefore, the present study focuses on comparing the efficiency of the modified arch bar with the conventional arch bar. Materials And Methods: This comparative randomized study was conducted on 32 patients who required MMF, divided into Group A patients, who received intermaxillary fixation (IMF) with modified Erich arch bars, and Group B patients, who received conventional Erich arch bars.
The parameters recorded were the average surgical time required, wire prick injuries, IMF stability, occlusal stability, screw loosening, oral hygiene status, and the vitality response of the teeth. The variables were statistically analyzed using Student's t-test and the Wilcoxon signed-rank test. Results: Wire prick injuries and intraoperative time in Group A were significantly reduced in comparison to Group B (P < 0.0001). Debris indices were significantly better in Group A than in Group B (P < 0.0001). A non-vitality response of teeth was significantly more frequent in Group B than in Group A (P < 0.05). Discussion: The efficiency of the modified Erich arch bar was superior to that of the conventional arch bar, with very few limitations. abstract_id: PUBMED:35712409 Are Embrasure Wires Effective and Reliable Method for Intraoperative Maxillomandibular Fixation in Mandibular Fractures? Purpose: Maxillomandibular fixation (MMF) is a basic and fundamental principle in the management of maxillofacial trauma patients. Some fractures require MMF only intraoperatively, during open reduction and internal fixation, but not in the postoperative period. The present study aimed to assess and compare the efficacy of embrasure wires with the Erich arch bar as methods of intraoperative maxillomandibular fixation in the management of mandibular fractures. Materials And Methods: This prospective randomized study was undertaken in 30 patients who required intraoperative maxillomandibular fixation for mandibular fractures. Patients were randomly divided into two groups of 15 each (Erich arch bar in Group A and embrasure wire in Group B). The preoperative assessment included evaluation of demographic data, fracture location, mechanism of injury, degree of fracture displacement and occlusion. Intraoperative parameters assessed were the time consumed for application of the MMF technique, injury to the operator/assistant, injury to the patient, stability of the MMF technique, incidence of glove perforations and the cost of the MMF device. Results: The mean time required for MMF and the incidence of glove perforation were significantly (P < 0.001) less in the embrasure wire group than in the Erich arch bar group. The MMF technique maintained stable occlusion during open reduction and internal fixation in both groups. Conclusion: Embrasure wire is an effective, reliable alternative form of intraoperative MMF, as needle-stick injuries and placement time were less than with the Erich arch bar. However, Erich arch bar wiring is a versatile method and is recommended where postoperative maxillomandibular fixation is also required. abstract_id: PUBMED:21470746 Are maxillomandibular fixation screws a better option than Erich arch bars in achieving maxillomandibular fixation? A randomized clinical study. Purpose: The aim of this study was to assess the efficacy of maxillomandibular fixation (MMF) screws relative to arch bars and to compare the plaque index between the 2 methods of MMF. Materials And Methods: This study is a randomized clinical trial. The study sample was derived from the population of patients who reported to the Department of Oral and Maxillofacial Surgery, Wardha, Maharashtra, India between October 2006 and September 2008 and who required MMF. The patients were assessed for the time required, in minutes, for the placement and removal of screws and arch bars.
Postoperative stability after achieving MMF was analyzed in both groups, and the plaque that accumulated in both groups was evaluated using the Turesky-Gilmore-Glickman modification of the Quigley-Hein plaque index. Statistical analysis was performed with SPSS statistical software for Windows, version 8.0 (SPSS, Inc, Chicago, IL), using the χ2 test and Student's t test. Results: The average working times for placement and removal of MMF screws were 18.67 minutes and 10.20 minutes, respectively, and for arch bars 95.06 minutes and 29 minutes, respectively. The mean plaque index value was 1.88 in group I and 2.69 in group II, signifying that plaque deposition was greater in group II. No occlusal disturbance was seen in either group. The incidence of MMF screws causing damage to a tooth root was 5.81%, and screw breakage was seen in 3.33% of patients. Conclusions: Oral hygiene maintenance is better in patients with MMF screws than with arch bars, with fewer complications and less operating time. Erich arch bars are the preferred choice in patients who require long-term MMF, because the screws start loosening after 5 to 6 weeks. abstract_id: PUBMED:29618889 Comparison of Efficacy of Transalveolar Screws and Conventional Dental Wiring Using Erich Arch Bar for Maxillomandibular Fixation in Mandibular Fractures. Introduction: Maxillomandibular fixation is required in nearly all cases of facial fractures and can be achieved by conventional dental wiring techniques or newer methods using transalveolar screws. Material And Methods: A prospective randomized clinical study, divided into two groups of thirty adult patients each with mandibular fractures, was undertaken comparing the maxillomandibular fixation technique using transalveolar screws and the Erich arch bar. Total time taken, rate of glove perforation, and intraoperative and postoperative complications were noted in both groups.
Statistical analysis was conducted from October 4, 2015, to September 8, 2017. Main Outcomes And Measures: Charge analysis from an institutional perspective, operative time, necessity for a secondary procedure, and postoperative complications. Results: Of the 93 patients in the study (18 women and 75 men; median age, 28.0 years [interquartile range, 23.0-40.0 years]), 27 (29%) received Erich arch bars, 51 (55%) received 4-point fixation, and 15 (16%) received bone-supported arch bars. The mean operative time for Erich arch bars (98.7 minutes; 95% CI, 89.2-108.2 minutes) was significantly longer than for 4-point fixation (48.8 minutes; 95% CI, 41.8-55.7 minutes) and bone-supported arch bars (55.9 minutes; 95% CI, 43.1-68.6 minutes). A total of 17 patients who received Erich arch bars (63%), 37 patients who received 4-point fixation (72%), and 1 patient who received bone-supported arch bars (7%) needed to return to the operating room for hardware removal. Patients who received Erich arch bars and those who received 4-point fixation had significantly higher odds of requiring a secondary procedure than did patients who received bone-supported arch bars (Erich arch bars: odds ratio, 27.1; 95% CI, 2.7-274.6; and 4-point fixation: odds ratio, 42.8; 95% CI, 4.4-420.7). Mean total operative charges for application of the hardware alone were significantly less for 4-point fixation ($5290; 95% CI, $4846-$5733) and bone-supported arch bars ($6751; 95% CI, $5936-$7566) than for Erich arch bars ($7919; 95% CI, $7311-$8527). When secondary procedure charges were included, the mean total charge for Erich arch bars ($9585; 95% CI, $8927-$10 243) remained significantly more expensive than the mean total for 4-point fixation ($7204; 95% CI, $6724-$7684) and bone-supported arch bars ($6924; 95% CI, $6042-$7807). No clinically meaningful difference in complications between groups was found (Erich arch bars, 3 [11%]; 4-point fixation, 5 [10%]; and bone-supported arch bars, 2 [13%]). Conclusions And Relevance: Bone-supported arch bars have comparable complication outcomes, operative time for placement, and overall charges when compared with Erich arch bars and 4-point fixation, and have a lower likelihood of requiring removal in an operative setting. Answer: The question of whether maxillomandibular fixation (MMF) screws are a better option than Erich arch bars for achieving maxillomandibular fixation is addressed in several studies, but the results are not entirely conclusive. One systematic review and meta-analysis found that there was no statistically significant difference in oral hygiene status between patients using MMF screws and those using Erich arch bars. However, MMF screws required significantly less time to achieve MMF. Complications such as needle stick injury were more common with Erich arch bars, while issues like mucosal coverage, root injury, screw loosening, and screw fracture were concerns with MMF screws. The study concluded that there was not enough evidence to recommend the use of MMF screws as an alternative to Erich arch bars, suggesting the need for further research (PUBMED:34471480). Another randomized clinical trial compared the clinical outcomes of a conventional Erich's arch bar with a modified screw-retained arch bar. The modified screw-retained arch bar provided less application time and better operator safety, as well as better patient satisfaction regarding pain and oral hygiene. 
This study concluded that the modified screw-retained arch bar could be considered an alternative to the conventional arch bar (PUBMED:38317388). A prospective randomized clinical study found that bondable buttons were a better choice for achieving MMF compared to MMF screws for the treatment of minimally displaced mandibular fractures, due to less plaque accumulation and fewer complications such as tooth root damage, screw loosening, screw breakage, and infection (PUBMED:31624435). A comparative study and review of literature suggested that IMF screws are more efficacious compared to Erich arch bars in the treatment of mandibular fractures, with better patient acceptance and oral hygiene, although accidental root perforation was a limitation of IMF screws (PUBMED:27563602). A systematic review and meta-analysis indicated that bone-supported arch bars (BSABs) are a better option than Erich arch bars (EABs), as they are significantly faster to place, with no needle penetration and better oral hygiene. However, root damage is an associated complication with BSABs (PUBMED:34315565). In summary, while some studies suggest that MMF screws and modified screw-retained arch bars may offer advantages such as reduced application time, better patient comfort, and improved oral hygiene, concerns about complications like root injury and screw-related issues persist. The evidence is not definitive, and further research is recommended to establish the superiority of one method over the other.
Instruction: Should human chorionic gonadotropine treatment increase thyroid volume? Abstracts: abstract_id: PUBMED:26331230 Should human chorionic gonadotropine treatment increase thyroid volume? Objective: Our aim was to investigate differences in thyroid function tests and thyroid volume among males with isolated hypogonadotropic hypogonadism (IHH) receiving androgen replacement treatment (ART). Materials And Methods: Forty-four males with IHH (mean age 33.2 years, range 18-54), diagnosed in the Endocrinology and Metabolism Department between September 2013 and September 2014, and 40 healthy male controls (mean age 27.77 years, range 18-55) were enrolled in the study. The patient group was divided into testosterone-treated patients (n = 19) and human chorionic gonadotropin (hCG)-treated patients (n = 25). The patient group was compared in terms of total testosterone, thyroid function tests [thyroid stimulating hormone (TSH), free thyroxine (fT4)] and thyroid volume before and 6 months after treatment. The patient group was also compared with the control group. Results: When we compared the patient group with the control group, there was no significant difference in age, body mass index, TSH, fT4 or thyroid volume between the two groups before treatment. When the patient group was compared before and after treatment, there was no difference in TSH, but fT4, testosterone levels and thyroid volume were significantly higher after treatment (p < 0.05). When testosterone-treated and hCG-treated patients were compared, thyroid volume was higher among hCG-treated patients (p = 0.001), but there was no difference in thyroid volume before and after testosterone treatment (p > 0.05). There was no statistically significant correlation between testosterone levels and TSH, fT4 or thyroid volume (r = 0.09, p = 0.32; r = 0.14, p = 0.11; r = 0.15, p = 0.09, respectively). Conclusion: Our study showed that ART increases thyroid volume, especially in hCG-treated patients. Therefore, we suggest that thyroid volume changes should be followed up in hCG-treated patients. abstract_id: PUBMED:6496594 Human chorionic gonadotropin and thyroid function in patients with hydatidiform mole. In view of the controversy regarding the role of human chorionic gonadotropin as the stimulator of thyroid function in patients with trophoblastic tumors, especially hydatidiform mole, we conducted studies to explore whether a correlation between serum human chorionic gonadotropin levels and thyroid function was demonstrable in such patients. Among 47 patients studied, only one was clinically hyperthyroid, although 10 had serum total thyroxine values exceeding those found in normal pregnancy (8 to 17 micrograms/dl). Among 34 patients in whom free thyroxine indices could be calculated, 18 had elevated values for the free thyroxine index (greater than 10.6), and nine had elevated values for both total thyroxine and free thyroxine index. Serum total 3,5,3'-triiodothyronine concentrations were also measured in 17 patients, and only one of them had a value (400 ng/dl) above the normal limit for pregnancy (greater than 350 ng/dl). Among the 13 patients for whom free 3,5,3'-triiodothyronine indices were calculated, three had values above the normal range (greater than 215). A weakly positive correlation (r = 0.35, p less than 0.05, n = 47) between the serum human chorionic gonadotropin levels and serum total thyroxine concentrations was observed in these patients.
However, no correlation was found between serum human chorionic gonadotropin levels and free thyroxine index values (r = 0.32, p greater than 0.05, n = 34). Also, there was no correlation between serum human chorionic gonadotropin levels and either serum total 3,5,3'-triiodothyronine concentrations (r = 0.32, p greater than 0.1, n = 17) or free 3,5,3'-triiodothyronine index values (r = 0.27, p greater than 0.1, n = 13). Chi-square analysis revealed no significant relationship between elevations of serum human chorionic gonadotropin concentration and abnormally high values of the free thyroxine index. These studies do not support the premise that human chorionic gonadotropin per se is the thyroid stimulator of molar pregnancy and suggest that a substance or substances, distinct from human chorionic gonadotropin and elaborated by the gestational trophoblastic tissue, are responsible for thyrotoxicosis observed in patients with trophoblastic tumors. abstract_id: PUBMED:221519 Effect of human chorionic gonadotropin on human thyroid tissue in vitro. Thyroid stimulating substances other than TSH have been found in certain disease states associated with hyperthyroidism. The thyroid stimulator associated with the thyrotoxicosis of trophoblastic disease is uncertain; however, recent evidence suggests a role for hCG. To explore the thyroid stimulating properties of hCG further, we examined the ability of hCG to displace [125I]TSH from receptors on human thyroid membranes and to generate cyclic-AMP (c-AMP) from human thyroid slices. Human chorionic gonadotropin at a concentration of 40 IU/ml displaced labeled TSH from human thyroid membranes and, at a concentration of 69 IU/ml, hCG caused the generation of c-AMP in thyroid slices. These results suggest that hCG can bind to the TSH receptor on thyroid cells and can stimulate them to produce c-AMP at concentrations of hCG within the range that is found in trophoblastic disease. abstract_id: PUBMED:1752954 Asialoagalacto-human chorionic gonadotropin, a carbohydrate-modified variant of human chorionic gonadotropin, antagonizes the stimulatory actions of bovine thyroid-stimulating hormone on thyroid function and HLA-DR expression in human thyroid in vitro and in vivo. The concept of using thyroid-stimulating hormone (TSH) receptor antagonists in the management of Graves' disease is intriguing. Therefore, we investigated a TSH receptor antagonist derived from human chorionic gonadotropin (hCG) with respect to TSH receptor binding, adenylate cyclase activity, thyroid hormone release, and HLA class II antigen expression in vitro and in an in vivo model. A variant of hCG, asialoagalacto-hCG, like asialo-hCG and unlike hCG itself, inhibited both 125I-bTSH binding and cAMP response to bTSH in human thyroid membranes. However, like intact or deglycosylated hCG and unlike asialo-hCG, asialoagalacto-hCG displayed a limited affinity for hepatic asialoglycoprotein receptors, a likely marker for its in vivo turnover rate. It proved capable of inhibiting bTSH-stimulated thyroid hormone release in human thyroid slices as well as in the nude mouse bearing human thyroid transplants. It also prevented bTSH-induced hypertrophy of transplanted thyrocytes. Further, HLA-DR expression induced by bTSH in the presence of gamma-interferon on human thyrocytes was inhibited. In conclusion, we present evidence that asialoagalacto-hCG antagonizes bTSH actions on thyroid function and HLA-DR expression in human thyroid in vitro and, more importantly, in an in vivo model.
Hence, the hCG variant described here, or similar agents, warrants further exploration in the study and treatment of Graves' disease. abstract_id: PUBMED:9349585 Desialylated and deglycosylated human chorionic gonadotropin are superagonists of native human chorionic gonadotropin in human thyroid follicles. Highly purified human chorionic gonadotropin (hCG) interacts with the thyrotropin (TSH) receptor and stimulates triiodothyronine (T3) secretion, iodide uptake and organification, and cyclic adenosine monophosphate (cAMP) formation in human thyroid follicles. Because of interest in the role of the carbohydrate component in the structure-function relationships of hCG, we undertook to deplete hCG of its sialic acid or carbohydrate residues and assess the thyrotropic activity of the carbohydrate-modified forms. For this purpose, we used our assay system consisting of human thyroid follicles cultured and suspended in collagen gel in serum-free medium. Under these conditions, the cells are organized as follicular three-dimensional structures with normal polarity, enabling enhanced responsiveness to hormonal stimulation, and T3 secretion can be measured as a response parameter. Desialylated (ds)-hCG and deglycosylated (dg)-hCG dose-dependently stimulated T3 secretion, iodide uptake and organification, and in each case did so with about twice the intrinsic activity of native hCG. Indeed, removal of the sialic acid or carbohydrate residues from native hCG transformed it into a thyroid stimulator that elicited a maximal response in terms of iodide uptake, organification and T3 secretion by human thyroid follicles as high as TSH and almost twice as high as native hCG. Not only were ds-hCG and dg-hCG more intrinsically active than hCG, they were more than five times as potent. As with hCG, both ds-hCG and dg-hCG managed to elicit such responses in human thyrocytes while evoking minimal amounts of cAMP, illustrating the concept of cAMP superfluity and highlighting the potential pitfalls of using cAMP as a measure of hormonal bioactivity. hCG, and to a greater extent ds-hCG and dg-hCG, inhibited, as did TSH, gamma-interferon-induced human leukocyte antigen-DR (HLA-DR) expression in human thyrocytes, again reflecting the intrinsic thyrotropic activity of native hCG and its variants depleted of sialic acid or carbohydrate residues. In conclusion, this is the first report on the thyrotropic activity of ds-hCG and dg-hCG using the physiologically relevant hormonal end-point response, thyroid hormone secretion. The study was conducted in a serum-free culture system of human thyroid follicles and shows that removal of the sialic acid or carbohydrate residues from native hCG transforms it into a thyroid-stimulating superagonist. The hCG variants inhibited, as did TSH, gamma-interferon-induced HLA-DR expression. abstract_id: PUBMED:748002 Interactions of bovine thyrotropin and human chorionic gonadotropin with adenylate cyclase in bovine thyroid membranes. Previous studies have shown that crude preparations of human chorionic gonadotropin bind to bovine thyroid membranes, displace 125I-labeled bovine thyrotropin therefrom, and are weak agonists therein with respect to the activation of adenylate cyclase. The present studies reveal that concentrations of chorionic gonadotropin sufficient to elicit a maximal agonistic response of adenylate cyclase are strongly antagonistic to the stimulatory action of bovine thyrotropin in the thyroid membrane system.
abstract_id: PUBMED:748002 Interactions of bovine thyrotropin and human chorionic gonadotropin with adenylate cyclase in bovine thyroid membranes. Previous studies have shown that crude preparations of human chorionic gonadotropin bind to bovine thyroid membranes, displace 125I-labeled bovine thyrotropin from them, and are weak agonists of adenylate cyclase activation in this system. The present studies reveal that concentrations of chorionic gonadotropin sufficient to elicit a maximal agonistic response of adenylate cyclase are strongly antagonistic to the stimulatory action of bovine thyrotropin in the thyroid membrane system. This effect is reminiscent of the inhibitory effects of crude human chorionic gonadotropin on other extragonadal tissues in vitro and, like them, appears to be mediated by some factor(s) other than human chorionic gonadotropin itself, since highly purified human chorionic gonadotropin was without effect.
abstract_id: PUBMED:1297527 Description of a cell line established from human thyroid papillary cancer and secreting human chorionic gonadotropin. One of the difficulties in characterizing the oncogenes involved in thyroid carcinogenesis is the establishment of cell lines. From a poorly differentiated papillary thyroid carcinoma, we have established a cell line that synthesizes thyroglobulin and human chorionic gonadotropin (hCG; alpha and beta subunits). These cells will allow investigation of the oncogenes involved, or potentially involved, in papillary thyroid carcinomas, as well as evaluation of the role of autocrine hCG secretion.
abstract_id: PUBMED:36714577 Benign thyroid nodules respond to a single administration of 0.3 mg recombinant human thyrotropin with highly variable volume increase. Introduction: The nature of thyroid nodules is heterogeneous. Most are benign and, in the absence of pressure symptoms on adjacent structures, no treatment is needed. Our objective was to investigate the acute effects of a low dose of recombinant human TSH (rhTSH) on the volume of benign thyroid nodules. Methods: We studied 27 nodules (14 isoechoic and 13 hypoechoic) in 15 consecutive patients (11 women and 4 men; mean age: 51.0 ± 15.9 years) with one to three well-separated, asymptomatic, benign thyroid nodules. All subjects were euthyroid, with negative thyroid antibodies, and none received levothyroxine. Total thyroid volume and thyroid nodule volume were sonographically determined by two independent examiners (P.B. and M.M.) before, 48 hours after, and 6 months after intramuscular (IM) administration of 0.3 mg rhTSH, and the mean values of the two examiners' measurements were used; thyroid function tests were obtained at the same time points. Results: The mean volume of isoechoic nodules increased by 57.3%, that of hypoechoic nodules by 46.6%, and that of the surrounding thyroid parenchyma by 70.4% at 48 hours post-rhTSH; mean volumes had returned to baseline levels 6 months later. A large variance in the volume change responses was observed. The relative change in nodule volume (defined as the percent change in nodule volume divided by the percent change in the surrounding parenchyma) from baseline to 48 hours was significantly higher in isoechoic than in hypoechoic nodules (p < 0.05). Conclusions: A single dose of 0.3 mg rhTSH transiently increased the volume of benign thyroid nodules. The increase was more pronounced in isoechoic nodules and showed great variability. Our findings could be useful in the management of benign thyroid nodules by helping to identify which nodules would be more responsive to TSH suppression therapy.
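The "relative change in nodule volume" defined in the abstract above (PUBMED:36714577) is simply the nodule's percent change divided by the parenchyma's percent change. The sketch below applies that definition to the reported group means purely for illustration; the study's significance test was computed per nodule, not from these means.

```python
def relative_change(nodule_pct, parenchyma_pct):
    """Percent change in nodule volume relative to the surrounding parenchyma."""
    return nodule_pct / parenchyma_pct

print(relative_change(57.3, 70.4))  # isoechoic:  ~0.81
print(relative_change(46.6, 70.4))  # hypoechoic: ~0.66
# Directionally consistent with the reported finding that isoechoic nodules
# tracked the surrounding parenchyma more closely than hypoechoic nodules did.
```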
abstract_id: PUBMED:29446856 Thyroid carcinoma producing β-human chorionic gonadotropin shows different clinical behavior. Columnar cell variant of papillary thyroid carcinoma (CCV-PTC) is an unusual neoplasm whose clinical behavior depends mainly on encapsulation or infiltration. Patients with extensive extrathyroidal extension usually show aggressive biological behavior. This study confirmed that beta-human chorionic gonadotropin (β-hCG)-secreting invasive CCV-PTC has a good prognosis compared with a cohort of follicular cell-differentiated thyroid carcinoma. In contrast, positive β-hCG immunoreactivity was demonstrated in three anaplastic thyroid carcinoma patients, all of whom showed aggressive clinical courses. The clinicopathologic characteristics of CCV-PTC and the paraneoplastic syndromes in follicular cell-differentiated thyroid carcinoma were further summarized in a literature review.
abstract_id: PUBMED:32242855 Elevated levels of naturally occurring autoantibodies to human chorionic gonadotropin β core fragment in a female patient with thyroid follicular adenoma: case report. We report the case of a woman who had elevated levels of naturally occurring autoantibodies to human chorionic gonadotropin (hCG) β core fragment (hCGβcf) one year prior to the development of a thyroid follicular lesion. The patient underwent surgery, and the histology report demonstrated that the lesion was a follicular adenoma. Further investigation of the role of naturally occurring autoantibodies (NAAbs) to hCGβcf in the pathogenesis of various tumours of the thyroid gland might be useful in the development of novel diagnostic methods, using anti-hCGβcf NAAbs as a marker for the detection of unsuspected thyroid tumours.

Answer: Based on the findings from the study with PUBMED:26331230, human chorionic gonadotropin (hCG) treatment does indeed increase thyroid volume, particularly in males with isolated hypogonadotropic hypogonadism (IHH) undergoing androgen replacement treatment (ART). The study compared thyroid function tests and thyroid volume in males with IHH before and after 6 months of treatment with either testosterone or hCG. While thyroid volume did not differ before and after testosterone treatment, it was significantly higher in hCG-treated patients. The study therefore suggests that thyroid volume changes should be monitored in patients treated with hCG.

This finding is consistent with the known thyroid-stimulating properties of hCG demonstrated in other studies. For example, hCG has been shown to bind to TSH receptors on thyroid cells and stimulate them to produce cyclic AMP (cAMP) at concentrations found in trophoblastic disease (PUBMED:221519). Additionally, variants of hCG have been investigated for their potential to antagonize the actions of TSH on thyroid function (PUBMED:1752954), and desialylated and deglycosylated forms of hCG have been found to be superagonists of native hCG in human thyroid follicles (PUBMED:9349585).

In conclusion, the evidence suggests that hCG treatment can lead to an increase in thyroid volume, and this effect should be considered and monitored in patients receiving hCG as part of their treatment regimen.